DETAILED ACTION
This Office action is in response to the amendment filed on 1/16/2026. In the amendment, claims 1, 3, and 9 were amended, and claim 6 was canceled. Accordingly, claims 1-5 and 7-16 are pending in this application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5 and 7-16 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 12,008,496 B1 to Vasanth et al. (Vasanth) in view of U.S. Patent Application Publication No. 2018/0137452 A1 to Khatravath et al. (Khatravath), and further in view of U.S. Patent Application Publication No. 2015/0203304 A1 to Morency et al. (Morency).
In Reference to Claim 1
Vasanth teaches (except for the bolded and italicized recitations below):
A scheduling system (114), being configured to manage at least one intelligent mobile robot (122) according to a state of a working table (Vasanth teaches workspace (104) which can be such as stow stations, picking station, etc), the scheduling system (114) comprising a processor (118) (see at least Vasanth Fig.1 and column 7 lines 16-67 and column 8 lines 1-9), the processor (118) being configured to:
obtain from a camera an image about at least one working region (100) of the working table (104), wherein the camera is positioned above the working table (104) and configured to capture an image about at least one working region (100) of the working table (104) in real time (see at least Vasanth Fig. 1 and column 8 lines 65-67 and column 9 lines 1-22, “The location 128 may correspond to a location of the agent 122 within the environment 100. The location 128 may also indicate, or be associated with, a proximity of the agent 122 to the task 116 being assigned. For example, the tasks 116 may be associated with certain locations in the environment 100 (e.g., moving item(s) from one location to another) and these locations may be used to determine which agent 122 is to perform the task 116 (e.g., closest available). In some instances, the location 128 of the agents 122 may be tracked throughout the environment 100 and/or determined using a global position system (GPS), location sensors, local beacons, spatial grid systems, triangulation systems, camera(s), pinging the agents, and the like”);
receive a query request from the at least one intelligent mobile robot (122), wherein the query request comprises a request to proceed to the at least one working region to pick up cargo, transport cargo or collect an empty pallet (see at least Vasanth Figs. 1 and 5 and column 10 lines 55-67 and column 11 lines 1-9 and lines 60-67 and column 22 lines 61-65 “The work manager component 134 may be configured to receive the tasks 116 or requests for the tasks 116 to be assigned (or generated). These requests or the task(s) 116 may be received from the agents 122, the agent devices 132, the stations, or from other components in the environment 100. The work manager component 134 may organize the tasks 116 submitted and may function to translate (e.g., decipher) the incoming requests. The work manager component 134 may also understand the needs of the stations for interpreting the requesting and organizing the tasks 116. In this sense, as tasks 116 are needed to be carried out within the environment 100, the work manager component 134 may identify the tasks 116 and organize the tasks 116 such the tasks are assignable to the agents 122. In some instances, some of the tasks 116 may have higher priorities than others, or may be of more importance than others. In some instances, the work manager component 134 may identify which of the tasks 116 are of a higher priority (e.g., based on the nature of the task 116, the station submitting the task 116, etc.). In some instance, the work manager component 134 may dedupe the requests, or the tasks 116 being received to generate or split a task into multiple subtasks” and “In some instances, the task manager component 138 may receive a request to assign a particular agent 122 the tasks (e.g., request pallet jack to move pallet) and/or may determine which agent 122 is compatible to handle the task 116. 
The task manager component 138 may also prioritize the tasks 116 such that some requests or tasks 116 may be handled sooner than others. Such prioritization may be determined via the work manager component 134 and/or the task generator component 136 for use by the fleet management system 114 in selecting the agent 122” and “At 502, the process 500 may receive a request for resources. For example, the work manager component 134 may receive a request to supply totes to a picking station 106, to remove totes from a packaging station 108, to move inventory 102, and so forth”);
analyze the obtained image in response to the query request to recognize a current state of each working region of the at least one working region (100) (see at least Vasanth Fig. 2 and column 18 lines 28-35 “As such, the fleet management system 114 is configured to digest all the supply and demand needs from the various stations in the environment 100 and convert them to tasks 116. As part of this process, the fleet management system 114 may utilize information about the agent location, station status, and/or station locations. The fleet management system 114 may also make the optimal choice the agent 122 for carrying out the task 116”); and
select a corresponding working region suitable for the at least one intelligent mobile robot from the at least one working region based on the query request and the recognized current state of each working region, and send a corresponding scheduling instruction to the at least one intelligent mobile robot to instruct the at least one intelligent mobile robot to move to the selected corresponding working region (see at least Vasanth Figs. 1-5 and column 3 lines 62-67, column 4 lines 1-11 and column 22 lines 6-19 and 64-67, column 23 lines 1-26 “The fleet management system may utilize any number of characteristics when assigning the tasks. For example, the fleet management system may factor in a location of the agent(s) within the facility when assigning task(s). The location may be used to determine a closest agent to the task being assigned, or where the task is to be completed within the facility, for carrying out the task in a timely manner. In some instances, the fleet management system may store a current location of the agent(s) within the facility, and based on the location, the fleet management system may determine which agent is able to accomplish the task in the least amount of time. In some instances, the fleet management system may maintain a real-time geographical database of the various locations of the agents and/or locations corresponding to the task at hand. The real-time geographical location may be determined from monitoring the agents, receiving indications from the agents, or in other manners” and “In some instances, the task manager component 138 may select the first agent and the second agent, respectively, by leveraging information about station locations (e.g., picking stations 106, packaging stations 108, etc.), station statuses (e.g., number of totes, number of boxes/packaging supplies, etc.), the locations 128 of the agents 122, work allotment, fitness function(s) 144, etc. 
In some instances, the task manager component 138 may make optimal allocations of the tasks 116 to reduce downtime and/or efficient use of resources. For example, the task manager component 138 may determine those agents 122 that are on standby or waiting for assignments. By identifying these agents 122, the task manager component 138 is able to assign tasks 116 to increase throughput and reduce overhead expenses” and “At 504, the process 500 may generate a first task and a second task associated with completing the request. For example, the task generator component 136 may generate a first task and a second task associated with completing the request. In other words, to carry out the request, multiple tasks may be performed. For example, if the request is to supply totes to a picking station, the first task may be to load totes onto a carrier (e.g., robotic agent) and the second task may be to move the totes to the picking station. At 506, the process 500 may select a first agent for completing the first task. For example, the task manager component 138 may select a first agent, such as a human agent, for loading the totes. In some instances, the human agent may be more adept or capable of loading the totes, as compared to a robotic agent. At 508, the process 500 may select a second agent for completing the second task. For example, the task manager component 138 may select a second agent, such as a robotic agent, for moving the totes. In some instances, using the robotic agent to move the totes may be more efficient than instructing a human agent to perform the task. Additionally, using the robotic agents for these types of tasks (e.g., transporting) may allow human agents to perform more complicated or more involved tasks that robotic agents are not well skilled to perform. As such, the process 500 may select agents 122 based on their capableness such that task assignment may be optimized. 
The process 500 may also select the agents 122 based on one or more fitness function(s) 144”).
Vasanth teaches using the camera to determine the status of the at least one working region (100) of the working table (104). However, Vasanth does not explicitly teach (the bolded and italicized recitations above) obtaining from a camera an image about at least one working region (100) of the working table (104), wherein the camera is positioned above the working table (104) and configured to capture an image about at least one working region (100) of the working table (104) in real time, and analyzing the obtained image in response to the query request to recognize a current state of each working region of the at least one working region (100). However, it was known in the art before the effective filing date of the claimed invention to obtain from a camera an image about at least one working region of the working table (working area), to capture an image about at least one working region of the working table (working area) in real time, and to analyze the obtained image in response to the query request to recognize a current state of each working region of the at least one working region. For example, Khatravath teaches obtaining from a camera an image about at least one working region (location) of the working table (working point), capturing such an image in real time, and analyzing the obtained image in response to the query request to recognize a current state of each working region (location) of the at least one working region (location). Khatravath further teaches that performing such steps provides accurate tracking of intended items in real time (see at least Khatravath Figs. 1-5 and paragraphs 6-8, 20-21, 27, 35, 45, 48). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Vasanth to obtain from a camera an image about at least one working region (location) of the working table (working point), to capture such an image in real time, and to analyze the obtained image in response to the query request to recognize a current state of each working region (location) of the at least one working region (location), as taught by Khatravath, in order to provide accurate tracking of intended items in real time.
Vasanth in view of Khatravath does not explicitly teach (the bolded and italicized recitations above) that the camera is above the working table (working point). However, it was known in the art before the effective filing date of the claimed invention to position the camera above the item being imaged. For example, Morency teaches that camera (19) is located above the work table. Morency further teaches that such a camera location provides accurate acquisition of an image of the working table (see at least Morency Fig. 1 and paragraphs 27-28 and 61-63). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Vasanth in view of Khatravath to place the camera above the working table, as taught by Morency, in order to accurately acquire an image of the working table.
In Reference to Claim 2
The scheduling system according to claim 1 (see rejection to claim 1 above), wherein analyzing the obtained image to recognize the current state of each working region of the at least one working region comprises: analyzing a current frame of the obtained image to recognize a corresponding number of each working region in the image and determine whether the state of each working region with the corresponding number is an available state or an occupied state (see at least Morency Fig. 1 and paragraphs 27-28 and 61-63).
In Reference to Claim 3
The scheduling system according to claim 2 (see rejection to claim 2 above), wherein the available state comprises an empty cargo space state or an empty pallet state (see at least Morency Fig.1 and paragraphs 27-28 and 61-63).
In Reference to Claim 4
The scheduling system according to claim 1 (see rejection to claim 1 above), wherein selecting a corresponding working region (104) suitable for the at least one intelligent mobile robot (122) from the at least one working region (100) based on the query request and the recognized current state of each working region (100) comprises: determining a current task of the at least one intelligent mobile robot (122) at least according to the query request of the at least one intelligent mobile robot (122), and selecting from the at least one working region (100) a suitable working region matching the current task of the at least one intelligent mobile robot (122) based on the current task of the at least one intelligent mobile robot (122) and the recognized current state of each working region (100) (see at least Vasanth Figs. 1-5 and column 3 lines 62-67, column 4 lines 1-11 and column 22 lines 6-19 and 64-67, column 23 lines 1-26).
In Reference to Claim 5
The scheduling system according to claim 4 (see rejection to claim 4 above), wherein the processor (118) is further configured to:
when it is determined that the current task of the intelligent mobile robot (122) is to transport cargo, send the scheduling instruction to instruct the intelligent mobile robot (122) to move to a working region in the empty cargo space state; and/or when it is determined that the current task of the intelligent mobile robot is to pick up an empty pallet, send the scheduling instruction to instruct the intelligent mobile robot to move to a working region in the empty pallet state (see at least Vasanth Figs. 1-5 and column 3 lines 62-67, column 4 lines 1-11 and column 22 lines 6-19 and 64-67, column 23 lines 1-26).
In Reference to Claim 7
The scheduling system according to claim 1 (see rejection to claim 1 above), further comprising at least one intelligent mobile robot (122), the at least one intelligent mobile robot (122) being configured to: send the query request in a case of the intelligent mobile robot (122) at a predetermined distance from the working table, when the current task of the intelligent mobile robot (122) is to transport cargo; and/or immediately send the query request in a case when the current task of the intelligent mobile robot (122) is to pick up an empty pallet (see at least Vasanth Figs. 1-5 and column 3 lines 62-67, column 4 lines 1-11 and column 22 lines 6-19 and 64-67, column 23 lines 1-26).
In Reference to Claim 8
The scheduling system according to claim 4 (see rejection to claim 4 above), wherein the processor (118) is further configured to:
in the case when the current task of the intelligent mobile robot (122) is to transport cargo, instruct the intelligent mobile robot (122) to wait at a predetermined position and regularly analyze the image of the working table (working location) to monitor the state of the at least one working region if it is determined according to the recognized current state of each working region that there is currently no working region in the empty cargo space state, and instruct the intelligent mobile robot (122) to move to a working region in the empty cargo space state if a working region in the empty cargo space state is found; and/or in the case when the current task of the intelligent mobile robot (122) is to pick up an empty pallet, instruct the intelligent mobile robot to move to an empty-pallet storage region to pick up an empty pallet if it is determined according to the recognized current state of each working region that there is currently no working region in the empty pallet state (see at least Vasanth Figs. 1-5 and column 3 lines 62-67, column 4 lines 1-11 and column 22 lines 6-19 and 64-67, column 23 lines 1-26).
In Reference to Claim 9
Vasanth teaches (except for the bolded and italicized recitations below):
A scheduling method, being configured to manage at least one intelligent mobile robot (122) according to a state of a working table (Vasanth teaches workspace (104) which can be such as stow stations, picking station, etc) (see at least Vasanth Fig.1 and column 7 lines 16-67 and column 8 lines 1-9), the scheduling method comprising executing computer instructions to perform the following operations:
obtaining from a camera an image about at least one working region (100) of the working table (104), wherein the camera is positioned above the working table (104) and configured to capture an image about at least one working region (100) of the working table (104) in real time (see at least Vasanth Fig. 1 and column 8 lines 65-67 and column 9 lines 1-22, “The location 128 may correspond to a location of the agent 122 within the environment 100. The location 128 may also indicate, or be associated with, a proximity of the agent 122 to the task 116 being assigned. For example, the tasks 116 may be associated with certain locations in the environment 100 (e.g., moving item(s) from one location to another) and these locations may be used to determine which agent 122 is to perform the task 116 (e.g., closest available). In some instances, the location 128 of the agents 122 may be tracked throughout the environment 100 and/or determined using a global position system (GPS), location sensors, local beacons, spatial grid systems, triangulation systems, camera(s), pinging the agents, and the like”);
receiving a query request from the at least one intelligent mobile robot (122), wherein the query request comprises a request to proceed to the at least one working region to pick up cargo, transport cargo or collect an empty pallet (see at least Vasanth Figs. 1 and 5 and column 10 lines 55-67 and column 11 lines 1-9 and lines 60-67 and column 22 lines 61-65 “The work manager component 134 may be configured to receive the tasks 116 or requests for the tasks 116 to be assigned (or generated). These requests or the task(s) 116 may be received from the agents 122, the agent devices 132, the stations, or from other components in the environment 100. The work manager component 134 may organize the tasks 116 submitted and may function to translate (e.g., decipher) the incoming requests. The work manager component 134 may also understand the needs of the stations for interpreting the requesting and organizing the tasks 116. In this sense, as tasks 116 are needed to be carried out within the environment 100, the work manager component 134 may identify the tasks 116 and organize the tasks 116 such the tasks are assignable to the agents 122. In some instances, some of the tasks 116 may have higher priorities than others, or may be of more importance than others. In some instances, the work manager component 134 may identify which of the tasks 116 are of a higher priority (e.g., based on the nature of the task 116, the station submitting the task 116, etc.). In some instance, the work manager component 134 may dedupe the requests, or the tasks 116 being received to generate or split a task into multiple subtasks” and “In some instances, the task manager component 138 may receive a request to assign a particular agent 122 the tasks (e.g., request pallet jack to move pallet) and/or may determine which agent 122 is compatible to handle the task 116. 
The task manager component 138 may also prioritize the tasks 116 such that some requests or tasks 116 may be handled sooner than others. Such prioritization may be determined via the work manager component 134 and/or the task generator component 136 for use by the fleet management system 114 in selecting the agent 122” and “At 502, the process 500 may receive a request for resources. For example, the work manager component 134 may receive a request to supply totes to a picking station 106, to remove totes from a packaging station 108, to move inventory 102, and so forth”);
analyzing the obtained image in response to the query request to recognize a current state of each working region of the at least one working region (100) (see at least Vasanth Fig. 2 and column 18 lines 28-35 “As such, the fleet management system 114 is configured to digest all the supply and demand needs from the various stations in the environment 100 and convert them to tasks 116. As part of this process, the fleet management system 114 may utilize information about the agent location, station status, and/or station locations. The fleet management system 114 may also make the optimal choice the agent 122 for carrying out the task 116”); and
selecting a corresponding working region suitable for the at least one intelligent mobile robot from the at least one working region based on the query request and the recognized current state of each working region, and send a corresponding scheduling instruction to the at least one intelligent mobile robot to instruct the at least one intelligent mobile robot to move to the selected corresponding working region (see at least Vasanth Figs. 1-5 and column 3 lines 62-67, column 4 lines 1-11 and column 22 lines 6-19 and 64-67, column 23 lines 1-26 “The fleet management system may utilize any number of characteristics when assigning the tasks. For example, the fleet management system may factor in a location of the agent(s) within the facility when assigning task(s). The location may be used to determine a closest agent to the task being assigned, or where the task is to be completed within the facility, for carrying out the task in a timely manner. In some instances, the fleet management system may store a current location of the agent(s) within the facility, and based on the location, the fleet management system may determine which agent is able to accomplish the task in the least amount of time. In some instances, the fleet management system may maintain a real-time geographical database of the various locations of the agents and/or locations corresponding to the task at hand. The real-time geographical location may be determined from monitoring the agents, receiving indications from the agents, or in other manners” and “In some instances, the task manager component 138 may select the first agent and the second agent, respectively, by leveraging information about station locations (e.g., picking stations 106, packaging stations 108, etc.), station statuses (e.g., number of totes, number of boxes/packaging supplies, etc.), the locations 128 of the agents 122, work allotment, fitness function(s) 144, etc. 
In some instances, the task manager component 138 may make optimal allocations of the tasks 116 to reduce downtime and/or efficient use of resources. For example, the task manager component 138 may determine those agents 122 that are on standby or waiting for assignments. By identifying these agents 122, the task manager component 138 is able to assign tasks 116 to increase throughput and reduce overhead expenses” and “At 504, the process 500 may generate a first task and a second task associated with completing the request. For example, the task generator component 136 may generate a first task and a second task associated with completing the request. In other words, to carry out the request, multiple tasks may be performed. For example, if the request is to supply totes to a picking station, the first task may be to load totes onto a carrier (e.g., robotic agent) and the second task may be to move the totes to the picking station. At 506, the process 500 may select a first agent for completing the first task. For example, the task manager component 138 may select a first agent, such as a human agent, for loading the totes. In some instances, the human agent may be more adept or capable of loading the totes, as compared to a robotic agent. At 508, the process 500 may select a second agent for completing the second task. For example, the task manager component 138 may select a second agent, such as a robotic agent, for moving the totes. In some instances, using the robotic agent to move the totes may be more efficient than instructing a human agent to perform the task. Additionally, using the robotic agents for these types of tasks (e.g., transporting) may allow human agents to perform more complicated or more involved tasks that robotic agents are not well skilled to perform. As such, the process 500 may select agents 122 based on their capableness such that task assignment may be optimized. 
The process 500 may also select the agents 122 based on one or more fitness function(s) 144”).
Vasanth teaches using the camera to determine the status of the at least one working region (100) of the working table (104). However, Vasanth does not explicitly teach (the bolded and italicized recitations above) obtaining from a camera an image about at least one working region (100) of the working table (104), wherein the camera is positioned above the working table (104) and configured to capture an image about at least one working region (100) of the working table (104) in real time, and analyzing the obtained image in response to the query request to recognize a current state of each working region of the at least one working region (100). However, it was known in the art before the effective filing date of the claimed invention to obtain from a camera an image about at least one working region of the working table (working area), to capture an image about at least one working region of the working table (working area) in real time, and to analyze the obtained image in response to the query request to recognize a current state of each working region of the at least one working region. For example, Khatravath teaches obtaining from a camera an image about at least one working region (location) of the working table (working point), capturing such an image in real time, and analyzing the obtained image in response to the query request to recognize a current state of each working region (location) of the at least one working region (location). Khatravath further teaches that performing such steps provides accurate tracking of intended items in real time (see at least Khatravath Figs. 1-5 and paragraphs 6-8, 20-21, 27, 35, 45, 48). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Vasanth to obtain from a camera an image about at least one working region (location) of the working table (working point), to capture such an image in real time, and to analyze the obtained image in response to the query request to recognize a current state of each working region (location) of the at least one working region (location), as taught by Khatravath, in order to provide accurate tracking of intended items in real time.
Vasanth in view of Khatravath does not explicitly teach (the bolded and italicized recitations above) that the camera is above the working table (working point). However, it was known in the art before the effective filing date of the claimed invention to position the camera above the item being imaged. For example, Morency teaches that camera (19) is located above the work table. Morency further teaches that such a camera location provides accurate acquisition of an image of the working table (see at least Morency Fig. 1 and paragraphs 27-28 and 61-63). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Vasanth in view of Khatravath to place the camera above the working table, as taught by Morency, in order to accurately acquire an image of the working table.
In Reference to Claim 10
The scheduling method according to claim 9 (see rejection to claim 9 above), wherein analyzing the obtained image to recognize the current state of each working region of the at least one working region comprises: analyzing a current frame of the obtained image to recognize a corresponding number of each working region in the image and determine whether the state of each working region with the corresponding number is an available state or an occupied state (see at least Morency Fig. 1 and paragraphs 27-28 and 61-63).
In Reference to Claim 11
The scheduling method according to claim 10 (see rejection to claim 10 above), wherein the available state comprises an empty cargo space state and an empty pallet state (see at least Morency Fig.1 and paragraphs 27-28 and 61-63).
In Reference to Claim 12
The scheduling method according to claim 9 (see rejection to claim 9 above), wherein selecting a corresponding working region (104) suitable for the at least one intelligent mobile robot (122) from the at least one working region (100) based on the query request and the recognized current state of each working region (100) comprises: determining a current task of the at least one intelligent mobile robot (122) at least according to the query request of the at least one intelligent mobile robot (122), and selecting from the at least one working region (100) a suitable working region matching the current task of the at least one intelligent mobile robot (122) based on the current task of the at least one intelligent mobile robot (122) and the recognized current state of each working region (100) (see at least Vasanth Figs. 1-5 and column 3 lines 62-67, column 4 lines 1-11 and column 22 lines 6-19 and 64-67, column 23 lines 1-26).
In Reference to Claim 13
The scheduling method according to claim 12 (see rejection to claim 12 above), further comprising:
when it is determined that the current task of the intelligent mobile robot (122) is to transport cargo, send the scheduling instruction to instruct the intelligent mobile robot (122) to move to a working region in the empty cargo space state; and/or when it is determined that the current task of the intelligent mobile robot is to pick up an empty pallet, send the scheduling instruction to instruct the intelligent mobile robot to move to a working region in the empty pallet state (see at least Vasanth Figs. 1-5 and column 3 lines 62-67, column 4 lines 1-11 and column 22 lines 6-19 and 64-67, column 23 lines 1-26).
In Reference to Claim 14
The scheduling method according to claim 9 (see rejection to claim 9 above), wherein the at least one intelligent mobile robot is configured to: send the query request in a case of the intelligent mobile robot (122) at a predetermined distance from the working table, when the current task of the intelligent mobile robot (122) is to transport cargo; and/or immediately send the query request in a case when the current task of the intelligent mobile robot (122) is to pick up an empty pallet (see at least Vasanth Figs. 1-5 and column 3 lines 62-67, column 4 lines 1-11 and column 22 lines 6-19 and 64-67, column 23 lines 1-26).
In Reference to Claim 15
The scheduling method according to claim 12 (see rejection to claim 12 above), further comprising:
in the case when the current task of the intelligent mobile robot (122) is to transport cargo, instruct the intelligent mobile robot (122) to wait at a predetermined position and regularly analyze the image of the working table (working location) to monitor the state of the at least one working region if it is determined according to the recognized current state of each working region that there is currently no working region in the empty cargo space state, and instruct the intelligent mobile robot (122) to move to a working region in the empty cargo space state if a working region in the empty cargo space state is found; and/or in the case when the current task of the intelligent mobile robot (122) is to pick up an empty pallet, instruct the intelligent mobile robot to move to an empty-pallet storage region to pick up an empty pallet if it is determined according to the recognized current state of each working region that there is currently no working region in the empty pallet state (see at least Vasanth Figs. 1-5 and column 3 lines 62-67, column 4 lines 1-11 and column 22 lines 6-19 and 64-67, column 23 lines 1-26).
In Reference to Claim 16
A non-transitory computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, cause the scheduling method according to claim 9 to be implemented (see rejection to claim 9 above).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-5 and 7-16 have been considered but are moot because the new ground of rejection does not rely on all references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Pub No. US 2020/0167734 A1 to Hoofard et al. (Hoofard) teaches a camera for analyzing the work table to determine the status of the work table.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON DONGPA LEE whose telephone number is (571)270-3525. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRANDON D LEE/Primary Examiner, Art Unit 3662 March 6, 2026