Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
The following claims have been rejected for the following reasons:
Claims 1-3 and 5-7 are rejected under 35 USC § 103.
Claim 7 is rejected under 35 USC § 101.
Claims 1, 6, and 7 are rejected under 35 USC § 112(a).
Claims 1, 6, and 7 are rejected under 35 USC § 112(b).
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP2022-088764, filed on 5/31/22.
Information Disclosure Statement
The information disclosure statements (IDS) filed on 10/21/24 and 10/28/25 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
The claim limitations “data collection unit”, “analysis unit”, “visualization control unit”, and “specific display unit”, as presented in the current claims, invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. These limitations are:
Claim 1: “data collection unit”, “analysis unit”, “visualization control unit” and “specific display unit”
Claim 6: “specific display unit”
Claim 7: “specific display unit”
Therefore, claims 1, 6, and 7 are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 6, and 7 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, because the specification does not reasonably provide enablement for the “data collection unit”, “analysis unit”, “visualization control unit”, and “specific display unit”. The specification does not enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make or construct any of these units in the manner that would be required to reproduce this invention. Therefore, the scope of enablement is not commensurate with the scope of these claims.
Claim Rejections - 35 USC § 101
Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because it recites the limitation of "an information processing program". The Federal Circuit has held that a product claim to an intangible collection of information, even if created by human effort, does not fall within any statutory category. See Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1350 (Fed. Cir. 2014). Review of the specification, and in particular paragraph [0011] of the filed specification, provides embodiments in which the storage medium may be interpreted as a computer program, and thus the claimed limitations are directed toward non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3 and 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Ahn (US 20210283773 A1) in view of Akiona (US 20210349473 A1), and further in view of Kumar (US 20210375080 A1).
Regarding claim 1, Ahn teaches An information processing device comprising: a data collection unit that collects collection data including a route plan planned (Ahn [0020] reads “The processor may control the communication interface to transmit, to the terminal device, cleaning result information that includes only cleaning paths of each of a space in which cleaning is performed from among the plurality of spaces.”);
for a plurality of autonomous mobile robots, (Ahn [0054] reads “In FIG. 1, it is described that one robot cleaner and one terminal device are connected in the cleaning system 300, but in the cleaning system 300, there may be a plurality of robot cleaners which simultaneously operate and each robot cleaner can be connected to a plurality of terminal devices. In addition, one terminal device may be connected to a plurality of robot cleaners.”);
a job operating state indicating assignment of a robot operation to a job, (Ahn [0064] reads “The display 120 may display information on an operation state of the robot cleaner 100 (whether the mode is a cleaning mode or an idle mode), information on progress of cleaning (for example, cleaning progress time, current cleaning mode (for example, suction intensity), battery information, whether the battery is charged, whether the dust container is full of dust, and an error state (liquid contact state) and the like. If an error is detected, the display 120 may display the detected error.”);
a movement trajectory of each of the robots, (Ahn [0015] reads “The cleaning result information may include a plurality of cleaning paths which are partitioned by a plurality of spaces, and the processor may generate a cleaning area image of the robot cleaner by a plurality of spaces based on the plurality of cleaning paths.”);
and an operating state of each of the robots; (Ahn [0064] reads “The display 120 may display information on an operation state of the robot cleaner 100 (whether the mode is a cleaning mode or an idle mode), information on progress of cleaning (for example, cleaning progress time, current cleaning mode (for example, suction intensity), battery information, whether the battery is charged, whether the dust container is full of dust, and an error state (liquid contact state) and the like. If an error is detected, the display 120 may display the detected error.”);
an analysis unit that uses the collection data to compute evaluation index values of a result-system including a job required time, (Ahn [0099] reads “Also, the display 220 can display the time required for cleaning and whether an error has occurred when the cleaning result information is displayed. If there is a history of sucking an object or there is a non-cleaned area, this information can be displayed together.”);
Ahn does not teach a job utilization ratio, and a robot utilization ratio, and evaluation index values of a cause-system corresponding to each route for the route plan; and a visualization control unit that controls so as to respectively display, on a specific display unit, a display mode enabling comparison of the result-system evaluation index values, and a display mode enabling comparison of the cause-system evaluation index values.
Akiona, in analogous art, teaches a job utilization ratio, and a robot utilization ratio, and evaluation index values of a cause-system corresponding to each route for the route plan; (Akiona [0062] reads “The robot controller 340 can compute various performance metrics based on the recorded data 326. For example, the robot controller 340 can compute the quantity of tasks performed by each agent, the total quantity of tasks performed by all agents during the simulation, the utilization of each agent, the aggregate utilization of all agents, the amount of time each agent spent waiting for conditions to be met for each task or overall among all tasks, the amount of delay caused by congestion at each node, the total amount of delay caused by congestion across all nodes, the average amount of time for each task to be performed, and/or other appropriate performance metrics.”);
It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Ahn with those of Akiona to include additional metrics related to the efficiency at which the robotic system is operating. This would allow a user of the facility to better understand where the system can be improved. (Akiona [0005] reads “A fleet or site simulation tool can leverage discreet event simulation (DES). In such a simulation, each task that occurs is treated as an event with a set duration. When the timer for that event ends, the actor, e.g., robot or human, is free to begin another task. Although such an approach is highly scalable and makes it easy to model individual tasks, it can be overly simplistic for modeling complex, dynamic systems such as mobile robots and people that share a facility with the robots. For example, a DES approach may always indicate that tasks are performed faster with the addition of more robots, ignoring the effects of bottlenecks and gridlock caused by multiple robots and/or other actors occupying the same area. In addition, such an approach does not model coexistence-based interactions such as congestion and suffers from difficulties in capturing interactions and dependencies between multiple actors.”);
Ahn/Akiona does not teach and a visualization control unit that controls so as to respectively display, on a specific display unit, a display mode enabling comparison of the result-system evaluation index values, and a display mode enabling comparison of the cause-system evaluation index values.
Kumar, in analogous art, teaches and a visualization control unit that controls so as to respectively display, on a specific display unit, a display mode enabling comparison of the result-system evaluation index values, and a display mode enabling comparison of the cause-system evaluation index values. (Kumar [0170] reads “By way of example, a feature app in fleet management software (e.g., running on the platform 114—FIG. 1, running as a program on the control module 306 on a materials handling vehicle—FIG. 3, combination thereof, etc.) collects electronic vehicle records (e.g., see 402—FIG. 4). The electronic vehicle records can include feature specific (e.g., travel related) information that is collected from a corresponding materials handling vehicle. Example travel information can include overall travel distance per logged operator out of the total travel remote travel responsive to remote control and manually driven distance. The feature app then uses these two data sets to generate a usage percentage. For instance, when run by a server, the platform 114 can compute a usage percentage for every operator. For instance, an example usage can be established as UsageFeature=dremote/dtotal.” And [0172] reads “The feature app then compares every individual operator's feature usage percentage to the location's feature usage target and also calculates average usage values for groups of operators (e.g. teams or shifts).” It would be appreciated by one with ordinary skill in the art that fleet management software used to compare humans working in warehouses could easily be adapted to make the same comparison in a robotic system.);
It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Ahn/Akiona with those of Kumar to include a method for displaying system information such that a user can easily compare the efficiency of different robots and tasks. This would allow a facility manager to better understand the efficiency of the facility. (Kumar [0003] reads “Materials handling vehicles are commonly used for picking stock in warehouses and distribution centers. Such vehicles typically include a power unit and a load handling assembly, which may include load carrying forks. The vehicle also has control structures for controlling operation and movement of the vehicle. Moreover, wireless strategies are deployed by various enterprises to improve the efficiency and accuracy of operations.”);
Regarding claim 2, Ahn/Akiona/Kumar teaches The information processing device of claim 1, wherein, according to a selection of display mode input from a user, the visualization control unit either displays so as to enable comparison of the result-system evaluation index values for a transport system as a whole or for each robot, or displays so as to enable comparison with the cause-system evaluation index values being indicated for each route of the route plan. (Kumar [0170] reads “By way of example, a feature app in fleet management software (e.g., running on the platform 114—FIG. 1, running as a program on the control module 306 on a materials handling vehicle—FIG. 3, combination thereof, etc.) collects electronic vehicle records (e.g., see 402—FIG. 4). The electronic vehicle records can include feature specific (e.g., travel related) information that is collected from a corresponding materials handling vehicle. Example travel information can include overall travel distance per logged operator out of the total travel remote travel responsive to remote control and manually driven distance. The feature app then uses these two data sets to generate a usage percentage. For instance, when run by a server, the platform 114 can compute a usage percentage for every operator. For instance, an example usage can be established as UsageFeature=dremote/dtotal.” And [0172] reads “The feature app then compares every individual operator's feature usage percentage to the location's feature usage target and also calculates average usage values for groups of operators (e.g. teams or shifts).” It would be appreciated by one with ordinary skill in the art that fleet management software used to compare humans working in warehouses could easily be adapted to make the same comparison in a robotic system.);
Regarding claim 3, Ahn/Akiona/Kumar teaches The information processing device of claim 1, wherein: the analysis unit computes an average movement speed for each route in the route plan as the cause-system evaluation index values; (Akiona [0062] reads “The robot controller 340 can compute various performance metrics based on the recorded data 326. For example, the robot controller 340 can compute the quantity of tasks performed by each agent, the total quantity of tasks performed by all agents during the simulation, the utilization of each agent, the aggregate utilization of all agents, the amount of time each agent spent waiting for conditions to be met for each task or overall among all tasks, the amount of delay caused by congestion at each node, the total amount of delay caused by congestion across all nodes, the average amount of time for each task to be performed, and/or other appropriate performance metrics.” It would be appreciated by one with ordinary skill in the art that average speed is a commonly used and appropriate metric in this field.);
and the visualization control unit displays the cause-system display mode as a graph map so as to enable comparison of the average movement speed for each of the routes. (Akiona [0012] reads “The simulation techniques can use a graph that represents the physical area and that includes area nodes that represent the various regions of the area and terminal nodes that represent the location at which tasks are performed. The simulation techniques can also use agents that represent the various actors that perform tasks or otherwise move about the area. The agent for an actor, e.g., robot or person, can include a state machine and model that defines how the actor traverses the area and performs tasks that is used in the simulation to determine/adjust the durations of traversing area nodes and performing tasks at terminal nodes.”);
Regarding claim 5, Ahn/Akiona/Kumar teaches The information processing device of claim 1, wherein: in relation to the result-system, the job required time is taken as being a time from a job being assigned to the robot until job completion; the job utilization ratio is taken as being a proportion of time when a job is being executed out of a time from job issue to job completion; and the robot utilization ratio is taken as being a proportion of a total time when the robot is job processing with respect to operating time of the robot; (Akiona [0062] reads “The robot controller 340 can compute various performance metrics based on the recorded data 326. For example, the robot controller 340 can compute the quantity of tasks performed by each agent, the total quantity of tasks performed by all agents during the simulation, the utilization of each agent, the aggregate utilization of all agents, the amount of time each agent spent waiting for conditions to be met for each task or overall among all tasks, the amount of delay caused by congestion at each node, the total amount of delay caused by congestion across all nodes, the average amount of time for each task to be performed, and/or other appropriate performance metrics.” It would be appreciated by one with ordinary skill in the art that for each relevant statistic given as an output of the system there would be a corresponding computation associated with it.);
and the visualization control unit visualizes a status of an operating state of a transport system by a graph comparing the robot utilization ratio and the job utilization ratio in the transport system as a whole, a graph comparing a number of jobs issued to each robot, and a graph comparing the robot utilization ratio and the job utilization ratio for each robot. (Kumar [0170] reads “By way of example, a feature app in fleet management software (e.g., running on the platform 114—FIG. 1, running as a program on the control module 306 on a materials handling vehicle—FIG. 3, combination thereof, etc.) collects electronic vehicle records (e.g., see 402—FIG. 4). The electronic vehicle records can include feature specific (e.g., travel related) information that is collected from a corresponding materials handling vehicle. Example travel information can include overall travel distance per logged operator out of the total travel remote travel responsive to remote control and manually driven distance. The feature app then uses these two data sets to generate a usage percentage. For instance, when run by a server, the platform 114 can compute a usage percentage for every operator. For instance, an example usage can be established as UsageFeature=dremote/dtotal.” And [0172] reads “The feature app then compares every individual operator's feature usage percentage to the location's feature usage target and also calculates average usage values for groups of operators (e.g. teams or shifts).” It would be appreciated by one with ordinary skill in the art that fleet management software used to compare humans working in warehouses could easily be adapted to make the same comparison in a robotic system.);
Regarding claim 6, Ahn teaches An information processing method executed by a computer and comprising: collecting collection data including a route plan planned (Ahn [0020] reads “The processor may control the communication interface to transmit, to the terminal device, cleaning result information that includes only cleaning paths of each of a space in which cleaning is performed from among the plurality of spaces.”);
for a plurality of autonomous mobile robots, (Ahn [0054] reads “In FIG. 1, it is described that one robot cleaner and one terminal device are connected in the cleaning system 300, but in the cleaning system 300, there may be a plurality of robot cleaners which simultaneously operate and each robot cleaner can be connected to a plurality of terminal devices. In addition, one terminal device may be connected to a plurality of robot cleaners.”);
a job operating state indicating assignment of a robot operation to a job, (Ahn [0064] reads “The display 120 may display information on an operation state of the robot cleaner 100 (whether the mode is a cleaning mode or an idle mode), information on progress of cleaning (for example, cleaning progress time, current cleaning mode (for example, suction intensity), battery information, whether the battery is charged, whether the dust container is full of dust, and an error state (liquid contact state) and the like. If an error is detected, the display 120 may display the detected error.”);
a movement trajectory of each of the robots, (Ahn [0015] reads “The cleaning result information may include a plurality of cleaning paths which are partitioned by a plurality of spaces, and the processor may generate a cleaning area image of the robot cleaner by a plurality of spaces based on the plurality of cleaning paths.”);
and an operating state of each of the robots; (Ahn [0064] reads “The display 120 may display information on an operation state of the robot cleaner 100 (whether the mode is a cleaning mode or an idle mode), information on progress of cleaning (for example, cleaning progress time, current cleaning mode (for example, suction intensity), battery information, whether the battery is charged, whether the dust container is full of dust, and an error state (liquid contact state) and the like. If an error is detected, the display 120 may display the detected error.”);
using the collection data to compute evaluation index values of a result-system including a job required time, (Ahn [0099] reads “Also, the display 220 can display the time required for cleaning and whether an error has occurred when the cleaning result information is displayed. If there is a history of sucking an object or there is a non-cleaned area, this information can be displayed together.”);
Ahn does not teach a job utilization ratio, and a robot utilization ratio, and evaluation index values of a cause-system corresponding to each route for the route plan; and controlling so as to respectively display, on a specific display unit, a display mode enabling comparison of the result-system evaluation index values, and a display mode enabling comparison of the cause-system evaluation index values.
Akiona, in analogous art, teaches a job utilization ratio, and a robot utilization ratio, and evaluation index values of a cause-system corresponding to each route for the route plan; (Akiona [0062] reads “The robot controller 340 can compute various performance metrics based on the recorded data 326. For example, the robot controller 340 can compute the quantity of tasks performed by each agent, the total quantity of tasks performed by all agents during the simulation, the utilization of each agent, the aggregate utilization of all agents, the amount of time each agent spent waiting for conditions to be met for each task or overall among all tasks, the amount of delay caused by congestion at each node, the total amount of delay caused by congestion across all nodes, the average amount of time for each task to be performed, and/or other appropriate performance metrics.”);
It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Ahn with those of Akiona to include additional metrics related to the efficiency at which the robotic system is operating. This would allow a user of the facility to better understand where the system can be improved. (Akiona [0005] reads “A fleet or site simulation tool can leverage discreet event simulation (DES). In such a simulation, each task that occurs is treated as an event with a set duration. When the timer for that event ends, the actor, e.g., robot or human, is free to begin another task. Although such an approach is highly scalable and makes it easy to model individual tasks, it can be overly simplistic for modeling complex, dynamic systems such as mobile robots and people that share a facility with the robots. For example, a DES approach may always indicate that tasks are performed faster with the addition of more robots, ignoring the effects of bottlenecks and gridlock caused by multiple robots and/or other actors occupying the same area. In addition, such an approach does not model coexistence-based interactions such as congestion and suffers from difficulties in capturing interactions and dependencies between multiple actors.”);
Ahn/Akiona does not teach and controlling so as to respectively display, on a specific display unit, a display mode enabling comparison of the result-system evaluation index values, and a display mode enabling comparison of the cause-system evaluation index values.
Kumar, in analogous art, teaches and controlling so as to respectively display, on a specific display unit, a display mode enabling comparison of the result-system evaluation index values, and a display mode enabling comparison of the cause-system evaluation index values. (Kumar [0170] reads “By way of example, a feature app in fleet management software (e.g., running on the platform 114—FIG. 1, running as a program on the control module 306 on a materials handling vehicle—FIG. 3, combination thereof, etc.) collects electronic vehicle records (e.g., see 402—FIG. 4). The electronic vehicle records can include feature specific (e.g., travel related) information that is collected from a corresponding materials handling vehicle. Example travel information can include overall travel distance per logged operator out of the total travel remote travel responsive to remote control and manually driven distance. The feature app then uses these two data sets to generate a usage percentage. For instance, when run by a server, the platform 114 can compute a usage percentage for every operator. For instance, an example usage can be established as UsageFeature=dremote/dtotal.” And [0172] reads “The feature app then compares every individual operator's feature usage percentage to the location's feature usage target and also calculates average usage values for groups of operators (e.g. teams or shifts).” It would be appreciated by one with ordinary skill in the art that fleet management software that is used to compare humans working in warehouses could easily be adapted to perform the same comparison in a robotic system.);
It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Ahn/Akiona with those of Kumar to include a method for displaying system information such that a user can easily compare the efficiency of different robots and tasks. This would allow a manager of a facility to have a better understanding of the efficiency of their facility. (Kumar [0003] reads “Materials handling vehicles are commonly used for picking stock in warehouses and distribution centers. Such vehicles typically include a power unit and a load handling assembly, which may include load carrying forks. The vehicle also has control structures for controlling operation and movement of the vehicle. Moreover, wireless strategies are deployed by various enterprises to improve the efficiency and accuracy of operations.”);
Regarding claim 7, Ahn teaches An information processing program that is executable by a computer to perform processing, the processing comprising: collecting collection data including a route plan planned (Ahn [0020] reads “The processor may control the communication interface to transmit, to the terminal device, cleaning result information that includes only cleaning paths of each of a space in which cleaning is performed from among the plurality of spaces.”);
for a plurality of autonomous mobile robots, (Ahn [0054] reads “In FIG. 1, it is described that one robot cleaner and one terminal device are connected in the cleaning system 300, but in the cleaning system 300, there may be a plurality of robot cleaners which simultaneously operate and each robot cleaner can be connected to a plurality of terminal devices. In addition, one terminal device may be connected to a plurality of robot cleaners.”);
a job operating state indicating assignment of a robot operation to a job, a movement trajectory of each of the robots, (Ahn [0064] reads “The display 120 may display information on an operation state of the robot cleaner 100 (whether the mode is a cleaning mode or an idle mode), information on progress of cleaning (for example, cleaning progress time, current cleaning mode (for example, suction intensity), battery information, whether the battery is charged, whether the dust container is full of dust, and an error state (liquid contact state) and the like. If an error is detected, the display 120 may display the detected error.”);
and an operating state of each of the robots; (Ahn [0064] reads “The display 120 may display information on an operation state of the robot cleaner 100 (whether the mode is a cleaning mode or an idle mode), information on progress of cleaning (for example, cleaning progress time, current cleaning mode (for example, suction intensity), battery information, whether the battery is charged, whether the dust container is full of dust, and an error state (liquid contact state) and the like. If an error is detected, the display 120 may display the detected error.”);
using the collection data to compute evaluation index values of a result-system including a job required time, (Ahn [0099] reads “Also, the display 220 can display the time required for cleaning and whether an error has occurred when the cleaning result information is displayed. If there is a history of sucking an object or there is a non-cleaned area, this information can be displayed together.”);
Ahn does not teach a job utilization ratio, and a robot utilization ratio, and evaluation index values of a cause-system corresponding to each route for the route plan; and controlling so as to respectively display, on a specific display unit, a display mode enabling comparison of the result-system evaluation index values, and a display mode enabling comparison of the cause-system evaluation index values.
Akiona, in analogous art, teaches a job utilization ratio, and a robot utilization ratio, and evaluation index values of a cause-system corresponding to each route for the route plan; (Akiona [0062] reads “The robot controller 340 can compute various performance metrics based on the recorded data 326. For example, the robot controller 340 can compute the quantity of tasks performed by each agent, the total quantity of tasks performed by all agents during the simulation, the utilization of each agent, the aggregate utilization of all agents, the amount of time each agent spent waiting for conditions to be met for each task or overall among all tasks, the amount of delay caused by congestion at each node, the total amount of delay caused by congestion across all nodes, the average amount of time for each task to be performed, and/or other appropriate performance metrics.”);
It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Ahn with those of Akiona to include additional metrics related to the efficiency at which the robotic system is operating. This would allow a user of the facility to gain a better understanding of where their system can be improved. (Akiona [0005] reads “A fleet or site simulation tool can leverage discreet event simulation (DES). In such a simulation, each task that occurs is treated as an event with a set duration. When the timer for that event ends, the actor, e.g., robot or human, is free to begin another task. Although such an approach is highly scalable and makes it easy to model individual tasks, it can be overly simplistic for modeling complex, dynamic systems such as mobile robots and people that share a facility with the robots. For example, a DES approach may always indicate that tasks are performed faster with the addition of more robots, ignoring the effects of bottlenecks and gridlock caused by multiple robots and/or other actors occupying the same area. In addition, such an approach does not model coexistence-based interactions such as congestion and suffers from difficulties in capturing interactions and dependencies between multiple actors.”);
Ahn/Akiona does not teach and controlling so as to respectively display, on a specific display unit, a display mode enabling comparison of the result-system evaluation index values, and a display mode enabling comparison of the cause-system evaluation index values.
Kumar, in analogous art, teaches and controlling so as to respectively display, on a specific display unit, a display mode enabling comparison of the result-system evaluation index values, and a display mode enabling comparison of the cause-system evaluation index values. (Kumar [0170] reads “By way of example, a feature app in fleet management software (e.g., running on the platform 114—FIG. 1, running as a program on the control module 306 on a materials handling vehicle—FIG. 3, combination thereof, etc.) collects electronic vehicle records (e.g., see 402—FIG. 4). The electronic vehicle records can include feature specific (e.g., travel related) information that is collected from a corresponding materials handling vehicle. Example travel information can include overall travel distance per logged operator out of the total travel remote travel responsive to remote control and manually driven distance. The feature app then uses these two data sets to generate a usage percentage. For instance, when run by a server, the platform 114 can compute a usage percentage for every operator. For instance, an example usage can be established as UsageFeature=dremote/dtotal.” And [0172] reads “The feature app then compares every individual operator's feature usage percentage to the location's feature usage target and also calculates average usage values for groups of operators (e.g. teams or shifts).” It would be appreciated by one with ordinary skill in the art that fleet management software that is used to compare humans working in warehouses could easily be adapted to perform the same comparison in a robotic system.);
It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Ahn/Akiona with those of Kumar to include a method for displaying system information such that a user can easily compare the efficiency of different robots and tasks. This would allow a manager of a facility to have a better understanding of the efficiency of their facility. (Kumar [0003] reads “Materials handling vehicles are commonly used for picking stock in warehouses and distribution centers. Such vehicles typically include a power unit and a load handling assembly, which may include load carrying forks. The vehicle also has control structures for controlling operation and movement of the vehicle. Moreover, wireless strategies are deployed by various enterprises to improve the efficiency and accuracy of operations.”);
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ahn/Akiona/Kumar as applied to claim 3 above, and further in view of Merschformann (NPL | RAWSim-O: A Simulation Framework for Robotic Mobile Fulfillment Systems | 2018).
Regarding claim 4, Ahn/Akiona/Kumar teaches The information processing device of claim 3.
Ahn/Akiona/Kumar does not teach wherein the visualization control unit displays the average movement speed on the routes as a heat map so as to enable comparison.
Merschformann, in analogous art, teaches wherein the visualization control unit displays the average movement speed on the routes as a heat map so as to enable comparison. (Merschformann figure 5 shows a visualization of a robot's movement and other statistics displayed as a heat map. It would be appreciated by one with ordinary skill in the art that a heat map output by a system could be used to display any type of spatially related data, such as speed over a specific area.);
[media_image1.png: Merschformann figure 5, greyscale heat map, 450×328]
It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Ahn/Akiona/Kumar with those of Merschformann to include a method for graphically viewing robot speed through the environment. This would allow the operator to better understand the environment in which its robots are operating. (Merschformann introduction reads “Due to the rise of e-commerce, the traditional manual picker-to-parts warehousing systems no longer work efficiently, and new types of warehousing systems are required, such as automated parts-to-picker systems. For details about the classification of different warehousing systems we refer to [8]. This paper studies one of the parts-to-picker systems, a so-called Robotic Mobile Fulfillment Systems (RMFS), such as the Kiva System ([4], nowadays Amazon Robotics), the GreyOrange Butler or the Swisslog CarryPick. The approach of an RMFS completely eliminates the need for travel within the warehouse, which accounts for approximately 50% of a picker’s time in manual warehouse operations according to [14]. [15] indicates that the Kiva System increases the productivity two to three times, in contrast to a traditional manual picker-to-parts system. Compared to other kinds of warehousing systems, the biggest advantages of an RMFS are its flexibility as a result of having virtually no fixed installations, the scalability due to accessing the inventory in a parallel manner, and the reliability due to the use of only homogeneous components, i.e., redundant components may compensate for faulty ones (see [5] and [16])”);
Other References Not Cited
During examination, other references were identified that are considered pertinent to the present disclosure. Although these references were not relied upon in this examination, they may be applied in future examination. These references are: Li (US 20220148345 A1); Backof (US 20150242772 A1); Deng (US 20210116928 A1).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN MARTIN O'MALLEY whose telephone number is (571)272-6228. The examiner can normally be reached Mon - Fri 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado, can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN MARTIN O'MALLEY/Examiner, Art Unit 3658
/Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658