DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
The amendment filed 9/15/2025 has been entered. Claims 1, 3, 4, 9, and 10 are amended. Claim 2 is cancelled. Claims 11 and 12 are newly added. Claims 1 and 3-12 are pending in the application. Applicant’s amendments to the drawings and claims have overcome each and every objection and rejection under 35 U.S.C. § 101 set forth in the Non-Final Office Action mailed 5/14/2025.
Applicant’s arguments, see pages 11-12, with respect to von Drigalski not teaching the amended feature “acquire work related information relating to a work of a robot generated during an execution of the sequence of subtasks” have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of von Drigalski (US 20230330854 A1) and Barajas (US 20130245824 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-6, and 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over von Drigalski (US 20230330854 A1) in view of Barajas (US 20130245824 A1).
Regarding Claim 1,
von Drigalski teaches
An information collecting device comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: (“As illustrated in FIG. 2, the movement planning device 1 according to the present embodiment is a computer … The control part 11 includes a central processing part (CPU), … and is configured to be able to execute information processing based on programs and various data. The storage part 12 is an example of a memory, … In the present embodiment, the storage part 12 stores various information such as a movement planning program 81. The movement planning program 81 is a program for causing the movement planning device 1 to execute information processing (FIGS. 5 and 9) regarding generation of a movement plan, which will be described later.” See at least [0048-0050] and fig. 2)
control a robot based on a sequence of subtasks obtained by decomposing an objective task to be executed by the robot; (“Thereby, the movement planning device 1 can generate a movement group which includes one or more movement sequences and in which all of the included movement sequences are determined to be physically executable so as to reach a target state from a start state. … The generated movement group is equivalent to a movement plan for the robot device R for performing a task (that is, for reaching a target state from a start state). The movement planning device 1 outputs the movement group generated using the motion planner 5. The outputting of the movement group may include controlling the movement of the robot device R by giving the robot device R an instruction indicating the movement group.” See at least [0046]; Also see at least [0111-0112] for controlling the robot based on the movement group.)
acquire work related information relating to a work of a robot (“a movement planning device according to an aspect of the present invention includes an information acquisition part configured to acquire task information including information on a start state and a target state of a task given to a robot device, an action generation part configured to generate an abstract action sequence including one or more abstract actions arranged in an order of execution so as to reach the target state from the start state based on the task information by using a symbolic planner.” See at least [0010]; Examiner Interpretation: Work related information is acquired by acquiring the task information and by generating an abstract action sequence.)
segment the work related information according to execution periods of the subtasks; and set, to segmented pieces of the work related information, identifiers which at least represent the subtasks, respectively. (“the movement planning device 1 generates an abstract action sequence including one or more abstract actions arranged in order of execution … the movement planning device 1 generates a movement sequence for performing abstract actions in order of execution.” See at least [0040]; “As illustrated in FIG. 8, the information on the movement sequence may include, for example, identification information (movement ID) of each movement, identification information (parent movement ID) of a movement (parent movement) executed before each action, instruction information (for example, a control amount such as a trajectory) for giving an instruction for each movement to the robot device R, and the like. The movement ID and the parent movement ID may be used to specify the order of execution of each movement.” See at least [0100] and fig. 8 (provided below); Examiner Interpretation: The IDs are set as identifiers in the movement sequence generation. The movements of the movement IDs are subtasks. The work related information is segmented based on execution periods as shown by intermediate states and by a specified order of execution.)
[Image: von Drigalski, fig. 8 (media_image1.png, greyscale)]
von Drigalski does not explicitly teach, but Barajas teaches
acquire work related information relating to a work of a robot generated during an execution of the sequence of subtasks; (“At step 106, the operator then physically moves the robot across its configuration space (C). For instance, the arm 16 and/or the manipulator 20 may be moved either manually by direct contact and an applied force, or indirectly via the input device 13 of FIG. 1, or using a combination of the two. This moves the arm 16 and manipulator 20 to the desired position. At step 108, the raw sensor data (arrow 15) of FIG. 1 is fed to the ECU 22 to provide performance and state value information, possibly including but not limited to force and torque applied to the manipulator 20. The perceptual sensors 25 can also be used to determine approach and exit angles, i.e., the angle at which the manipulator 20 respectively approaches and moves away from the object 23 at the grasp and release stages of the task. Step 108 may entail capturing data sequences of positions of the manipulator 20 from the operator-controlled movements of the robot 10, possibly also using the perceptual sensors 25. … At step 112, the ECU 22 controls the robot 10 in a subsequent task using the markers of step 110 to guide the recorded motor schema 28. The robot 10 can thus repeat the learned maneuver using the recorded markers and schema, with the schema defining task primitives such as "pick up object", "drop off object", "move from point A to point B", etc.” See at least [0035-0038])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of von Drigalski to further include the teachings of Barajas with a reasonable expectation of success for improved robot training enabling faster retraining of a robotic workforce for changing environments and different complex tasks. (See at least [0081-0083])
Regarding Claim 3,
von Drigalski further teaches
wherein the robot executes a sequence of the subtasks, (“The generated movement group is equivalent to a movement plan for the robot device R for performing a task (that is, for reaching a target state from a start state). The movement planning device 1 outputs the movement group generated using the motion planner 5. The outputting of the movement group may include controlling the movement of the robot device R by giving the robot device R an instruction indicating the movement group.” See at least [0046], wherein the movement group is a sequence of subtasks.)
the sequence being generated based on a logical formula representing, according to a temporal logic, an objective task to be executed by the robot. (“an abstract action sequence (that is, an abstract action plan) from the start state to the target state of the task is generated by using the symbolic planner. In one example, the abstract action is a set of arbitrary movements including one or more movements of the robot device, and may be defined as a set of movements that can be expressed by symbols (for example, words or the like). … Next, in this configuration, by using a motion planner, a movement sequence for performing abstract actions is generated in order of execution (that is, the abstract actions are converted into the movement sequence).” See at least [0011-0012]; Examiner Interpretation: The abstract action sequence is interpreted as the logical formula. The movement sequence is generated based on the abstract action sequence.)
Regarding Claim 4,
von Drigalski further teaches
wherein the identifier is information for identifying the subtask and an objective task to be executed by the robot, and wherein the at least one processor is configured to execute the instructions to set, to each of the work related information segmented based on the execution period of each subtask, the identifier indicative of the each subtask and the objective task. (“As illustrated in FIG. 8, the information on the movement sequence may include, for example, identification information (movement ID) of each movement, identification information (parent movement ID) of a movement (parent movement) executed before each action, instruction information (for example, a control amount such as a trajectory) for giving an instruction for each movement to the robot device R, and the like. The movement ID and the parent movement ID may be used to specify the order of execution of each movement.” See at least [0100] and fig. 8; “outputting the movement group may include controlling a movement of the robot device by giving an instruction indicating the movement group to the robot device.” See at least [0021]; Examiner Interpretation: The movements of the movement IDs are subtasks and at least the control amount/trajectory is an objective task. Controlling the movement of the robot device with the movement group demonstrates that the robot can accept these movements. The work related information is segmented based on execution periods as shown by intermediate states and by a specified order of execution.)
Regarding Claim 5,
von Drigalski further teaches
wherein the at least one processor is configured to execute the instructions to determine whether or not a collection condition, which is a condition for making a determination of collecting the work related information, is satisfied, (“the process of generating a movement plan for the robot device R is divided into two stages, that is, an abstract stage using the symbolic planner 3 and a physical stage using the motion planner 5, and a movement plan is generated while exchanging between the two planners (3 and 5). … processing for generating a movement sequence by the motion planner 5 is configured to use a processing result of the symbolic planner 3 (that is, the processing is executed after the processing of the symbolic planner 3 is executed).” See at least [0047]; “The movement generation part 113 is configured to … determine whether the generated movement sequence is physically executable in the real environment by the robot device R.” See at least [0056]; “In a case where the movement generation part 113 determines that a movement sequence is physically inexecutable, the movement planning device 1 discards an abstract action sequence after an abstract action corresponding to a movement sequence determined to be physically inexecutable, … The output part 114 is configured to output a movement group which includes one or more movement sequences generated using the motion planner 5 and in which all of the included movement sequences are determined to be physically executable.” See at least [0057]; Examiner Interpretation: The collection condition is whether or not the movement sequence is executable because it determines whether the abstract action sequence (work related information) received from the symbolic planner is to be used or discarded.)
and wherein the at least one processor is configured to execute the instructions to set the identifier to the work related information if the collection condition is satisfied. (See at least fig. 8 and the corresponding description of the identification information (movement ID) in at least [0100]; Examiner Interpretation: When the movement sequence is executable, identifiers are set to the abstract action sequence by setting movement IDs corresponding to abstract states.)
Regarding Claim 6,
von Drigalski further teaches
wherein the at least one processor is configured to execute the instructions to discard the work related information if the collection condition is not satisfied (“In a case where an abstract action plan generated by the symbolic planner 3 is inexecutable in the real environment (that is, the abstract action sequence includes an abstract action that is inexecutable in the real environment), a movement sequence generated for the abstract action to be the cause thereof is determined to be physically inexecutable in the processing of the motion planner 5. In this case, the movement planning device 1 discards an abstract action sequence after the abstract action corresponding to the movement sequence determined to be physically inexecutable.” See at least [0044], wherein the collection condition is not satisfied when the abstract action plan is determined to be inexecutable.)
Regarding Claim 8,
von Drigalski does not explicitly teach, but Barajas teaches
wherein the at least one processor is configured to execute the instructions to acquire robot configuration information regarding a configuration of the robot. (“At step 108, the raw sensor data (arrow 15) of FIG. 1 is fed to the ECU 22 to provide performance and state value information, possibly including but not limited to force and torque applied to the manipulator 20. The perceptual sensors 25 can also be used to determine approach and exit angles, i.e., the angle at which the manipulator 20 respectively approaches and moves away from the object 23 at the grasp and release stages of the task. Step 108 may entail capturing data sequences of positions of the manipulator 20 from the operator-controlled movements of the robot 10, possibly also using the perceptual sensors 25.” See at least [0036-0037])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified von Drigalski to further include the teachings of Barajas with a reasonable expectation of success for improved robot training enabling faster retraining of a robotic workforce for changing environments and different complex tasks. (See at least [0081-0083])
Regarding Claim 9,
von Drigalski teaches
An information collecting method, the information collecting method comprising: (“The present invention relates to a movement planning device, a movement planning method, and a movement planning program for planning movements of a robot device.” See at least [0001])
controlling a robot based on a sequence of subtasks obtained by decomposing an objective task to be executed by the robot; (“Thereby, the movement planning device 1 can generate a movement group which includes one or more movement sequences and in which all of the included movement sequences are determined to be physically executable so as to reach a target state from a start state. … The generated movement group is equivalent to a movement plan for the robot device R for performing a task (that is, for reaching a target state from a start state). The movement planning device 1 outputs the movement group generated using the motion planner 5. The outputting of the movement group may include controlling the movement of the robot device R by giving the robot device R an instruction indicating the movement group.” See at least [0046]; Also see at least [0111-0112] for controlling the robot based on the movement group.)
acquiring work related information relating to a work of a robot (“a movement planning device according to an aspect of the present invention includes an information acquisition part configured to acquire task information including information on a start state and a target state of a task given to a robot device, an action generation part configured to generate an abstract action sequence including one or more abstract actions arranged in an order of execution so as to reach the target state from the start state based on the task information by using a symbolic planner.” See at least [0010]; Examiner Interpretation: Work related information is acquired by acquiring the task information and by generating an abstract action sequence.)
segmenting the work related information according to execution periods of the subtasks; and setting, to segmented pieces of the work related information, identifiers which at least represent the subtasks, respectively. (“the movement planning device 1 generates an abstract action sequence including one or more abstract actions arranged in order of execution … the movement planning device 1 generates a movement sequence for performing abstract actions in order of execution.” See at least [0040]; “As illustrated in FIG. 8, the information on the movement sequence may include, for example, identification information (movement ID) of each movement, identification information (parent movement ID) of a movement (parent movement) executed before each action, instruction information (for example, a control amount such as a trajectory) for giving an instruction for each movement to the robot device R, and the like. The movement ID and the parent movement ID may be used to specify the order of execution of each movement.” See at least [0100] and fig. 8; Examiner Interpretation: The IDs are set as identifiers in the movement sequence generation. The movements of the movement IDs are subtasks. The work related information is segmented based on execution periods as shown by intermediate states and by a specified order of execution.)
von Drigalski does not explicitly teach, but Barajas teaches
acquiring work related information relating to a work of a robot generated during an execution of the sequence of subtasks; (“At step 106, the operator then physically moves the robot across its configuration space (C). For instance, the arm 16 and/or the manipulator 20 may be moved either manually by direct contact and an applied force, or indirectly via the input device 13 of FIG. 1, or using a combination of the two. This moves the arm 16 and manipulator 20 to the desired position. At step 108, the raw sensor data (arrow 15) of FIG. 1 is fed to the ECU 22 to provide performance and state value information, possibly including but not limited to force and torque applied to the manipulator 20. The perceptual sensors 25 can also be used to determine approach and exit angles, i.e., the angle at which the manipulator 20 respectively approaches and moves away from the object 23 at the grasp and release stages of the task. Step 108 may entail capturing data sequences of positions of the manipulator 20 from the operator-controlled movements of the robot 10, possibly also using the perceptual sensors 25. … At step 112, the ECU 22 controls the robot 10 in a subsequent task using the markers of step 110 to guide the recorded motor schema 28. The robot 10 can thus repeat the learned maneuver using the recorded markers and schema, with the schema defining task primitives such as "pick up object", "drop off object", "move from point A to point B", etc.” See at least [0035-0038])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of von Drigalski to further include the teachings of Barajas with a reasonable expectation of success for improved robot training enabling faster retraining of a robotic workforce for changing environments and different complex tasks. (See at least [0081-0083])
Regarding Claim 10,
von Drigalski teaches
A non-transitory computer readable storage medium storing a program executed by a computer, the program causing the computer to: (“A non-transitory computer readable medium, storing a movement planning program causing a computer to execute steps as follows,” See at least Claim 12)
control a robot based on a sequence of subtasks obtained by decomposing an objective task to be executed by the robot; (“Thereby, the movement planning device 1 can generate a movement group which includes one or more movement sequences and in which all of the included movement sequences are determined to be physically executable so as to reach a target state from a start state. … The generated movement group is equivalent to a movement plan for the robot device R for performing a task (that is, for reaching a target state from a start state). The movement planning device 1 outputs the movement group generated using the motion planner 5. The outputting of the movement group may include controlling the movement of the robot device R by giving the robot device R an instruction indicating the movement group.” See at least [0046]; Also see at least [0111-0112] for controlling the robot based on the movement group.)
acquire work related information relating to a work of a robot (“a movement planning device according to an aspect of the present invention includes an information acquisition part configured to acquire task information including information on a start state and a target state of a task given to a robot device, an action generation part configured to generate an abstract action sequence including one or more abstract actions arranged in an order of execution so as to reach the target state from the start state based on the task information by using a symbolic planner.” See at least [0010]; Examiner Interpretation: Work related information is acquired by acquiring the task information and by generating an abstract action sequence.)
segment the work related information according to execution periods of the subtasks; and set, to segmented pieces of the work related information, identifiers which at least represent the subtasks, respectively. (“the movement planning device 1 generates an abstract action sequence including one or more abstract actions arranged in order of execution … the movement planning device 1 generates a movement sequence for performing abstract actions in order of execution.” See at least [0040]; “As illustrated in FIG. 8, the information on the movement sequence may include, for example, identification information (movement ID) of each movement, identification information (parent movement ID) of a movement (parent movement) executed before each action, instruction information (for example, a control amount such as a trajectory) for giving an instruction for each movement to the robot device R, and the like. The movement ID and the parent movement ID may be used to specify the order of execution of each movement.” See at least [0100] and fig. 8; Examiner Interpretation: The IDs are set as identifiers in the movement sequence generation. The movements of the movement IDs are subtasks. The work related information is segmented based on execution periods as shown by intermediate states and by a specified order of execution.)
von Drigalski does not explicitly teach, but Barajas teaches
acquire work related information relating to a work of a robot generated during an execution of the sequence of subtasks; (“At step 106, the operator then physically moves the robot across its configuration space (C). For instance, the arm 16 and/or the manipulator 20 may be moved either manually by direct contact and an applied force, or indirectly via the input device 13 of FIG. 1, or using a combination of the two. This moves the arm 16 and manipulator 20 to the desired position. At step 108, the raw sensor data (arrow 15) of FIG. 1 is fed to the ECU 22 to provide performance and state value information, possibly including but not limited to force and torque applied to the manipulator 20. The perceptual sensors 25 can also be used to determine approach and exit angles, i.e., the angle at which the manipulator 20 respectively approaches and moves away from the object 23 at the grasp and release stages of the task. Step 108 may entail capturing data sequences of positions of the manipulator 20 from the operator-controlled movements of the robot 10, possibly also using the perceptual sensors 25. … At step 112, the ECU 22 controls the robot 10 in a subsequent task using the markers of step 110 to guide the recorded motor schema 28. The robot 10 can thus repeat the learned maneuver using the recorded markers and schema, with the schema defining task primitives such as "pick up object", "drop off object", "move from point A to point B", etc.” See at least [0035-0038])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of von Drigalski to further include the teachings of Barajas with a reasonable expectation of success for improved robot training enabling faster retraining of a robotic workforce for changing environments and different complex tasks. (See at least [0081-0083])
Regarding Claim 11,
von Drigalski does not explicitly teach, but Barajas teaches
wherein the at least one processor is configured to execute the instructions to: recognize the execution periods of the subtasks based on: motion planning information indicating a sequence of time steps of the sequence of the subtasks or log information regarding the subtasks actually-executed by the robot. (“Given a set of known skills with recognizer functions R, wherein R returns the earliest time step at which a skill is completed, the following iterative method 200 usable as part of method 100 parses the training data stream, T, to identify robotic skills. After starting (*), step 202 includes using the ECU 22 of FIG. 1 to run all recognizer functions R to find the particular motor schema 28, i.e., skill i, which happens first, and also the point in time, ts.sub.a, at which the recognized skill is finished. For instance, a robot 10 may know three different grasp types for an object in the form of a cube, and thus the three grasps represent three schema or skills. … At step 208, the ECU 22 determines whether any additional actions are detected in the data stream T. If additional actions are detected, the method 200 repeats step 202.” See at least [0041-0044]; “Implementation of the recognition function for a basic grasp skill is straightforward, as there is a specific, detectable point in time at which the robot 10 transitions from an open gripper to a closed gripper. If feedback from the manipulator 20 is available, then the detected presence of an object within the manipulator 20 can be integrated into the recognizer. The time step at which this transition occurs in the data stream T is represented as the grasp point ts.sub.grasp. This time plus a constant offset is returned to the learning module 41 of FIG. 4 to indicate the detected completion of a recognized skill.” See at least [0055])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified von Drigalski to further include the teachings of Barajas with a reasonable expectation of success for improved robot training enabling faster retraining of a robotic workforce for changing environments and different complex tasks. (See at least [0081-0083])
Regarding Claim 12,
von Drigalski does not explicitly teach, but Barajas teaches
wherein the work related information comprises: measurement information, during the execution, generated by a measurement device provided in a workplace of the robot; and an operation status of the robot during the execution. (“At step 106, the operator then physically moves the robot across its configuration space (C). … At step 108, the raw sensor data (arrow 15) of FIG. 1 is fed to the ECU 22 to provide performance and state value information, possibly including but not limited to force and torque applied to the manipulator 20. The perceptual sensors 25 can also be used to determine approach and exit angles, i.e., the angle at which the manipulator 20 respectively approaches and moves away from the object 23 at the grasp and release stages of the task. Step 108 may entail capturing data sequences of positions of the manipulator 20 from the operator-controlled movements of the robot 10, possibly also using the perceptual sensors 25. … At step 112, the ECU 22 controls the robot 10 in a subsequent task using the markers of step 110 to guide the recorded motor schema 28. The robot 10 can thus repeat the learned maneuver using the recorded markers and schema, with the schema defining task primitives such as "pick up object", "drop off object", "move from point A to point B", etc.” See at least [0035-0038])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified von Drigalski to further include the teachings of Barajas with a reasonable expectation of success for improved robot training enabling faster retraining of a robotic workforce for changing environments and different complex tasks. (See at least [0081-0083])
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over von Drigalski (US 20230330854 A1) in view of Barajas (US 20130245824 A1), Toritani (US 20220212346 A1), and Smith (US 20210094376 A1).
Regarding Claim 7,
von Drigalski further teaches
wherein the robot is one or more robots (“The robot device R may be constituted by a plurality of robots.” See at least [0039].)
von Drigalski does not explicitly teach, but Toritani teaches
wherein the robot is one or more robots provided in each of plural environments, and wherein, in each of plural environments, there is provided a task execution system which includes the one or more robots, (“In the production site F, a plurality of production facilities (see below) constituting the mounting board production line L and a plurality of robot systems 31, 32, 33 are arranged. Each robot system includes at least one automatic work robot and a management device for managing operation of the automatic work robot.” See at least [0032], wherein the robots are in plural environments by being in different area of the production site (See at least fig. 1 (provided below)), and a management device is a task execution system.)
[Image: media_image2.png (greyscale, 718 × 472) — Fig. 1 of Toritani]
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified von Drigalski to further include the teachings of Toritani with a reasonable expectation of success “to effectively support sophisticated cooperation of operations between robot systems provided by a plurality of different vendors in a production site such as a factory and to improve versatility and expandability as a system.” (See at least [0010])
Modified von Drigalski and Toritani do not explicitly teach, but Smith teaches
and wherein the at least one processor is configured to execute the instructions to receive the work related information from each of the task execution systems, (“The communication system 518 can operate to receive orders from software applications and can communicate with the robots 101. The database 516 can store the orders and can also store information for each of the robots 101, including, for example, vehicle ID, storage/staging ID, base/repair/maintenance station ID, locations, equipment inventory, goods inventory etc. Different robots 101 may have different equipment and goods offerings. … The autonomous robots 101 can communicate their locations to the server 510, which can store the location information in the database 516. The listing of database information in FIG. 13 is exemplary and other information regarding orders or robots 101 can be stored in the database 516.” See at least [0103], wherein the robots are task execution systems and information communicated from the robots to the server (e.g., location) is work related information.)
and wherein the at least one processor is configured to execute the instructions to set the identifier indicative of the task executed by the one or more robots to the work related information received from each of the task execution systems. (“With continuing reference to FIG. 12, when the server 510 receives an order, the server 510 determines which robots 101 may be capable of fulfilling the order. The delivery management system 510 can access the database 516 to access the equipment inventory and goods/services inventory for the autonomous vehicle 101, and to determine which vehicle 101 include the ordered item/service and the appropriate equipment for handling, transporting, delivering, etc. the goods/services. In various embodiments, the server 510 can additionally access the location of each vehicle 101. … the server 510 can assign the order to the eligible vehicle 101 that is closest to the delivery destination. In various embodiments, the server 510 can assign the order to the eligible vehicle 101 that has the fewest number of orders. In various embodiments, the server 510 can assign the order to the eligible vehicle 101 that has the optimal balance of distance and number of orders for delivery.” See at least [0105]; Examiner Interpretation: Assigning orders to robots/vehicles is equivalent to setting the identifiers.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified von Drigalski and Toritani to further include the teachings of Smith with a reasonable expectation of success to improve coordination of the robot fleet and optimize tasks. (See at least [0105] and [0126])
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Karston G Evans whose telephone number is (571)272-8480. The examiner can normally be reached Mon-Fri 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin can be reached at (571)270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.G.E./Examiner, Art Unit 3657 /ABBY LIN/Supervisory Patent Examiner, Art Unit 3657