DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the response and amendments filed 12/11/2025 in the application filed 5/30/2024.
Claims 1-9 have been amended.
Claim 10 has been added.
No claims have been cancelled.
Claims 1-10 are currently pending and have been examined.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 5/30/2024 has been received and considered.
Response to Amendment
Applicant’s amendments to the Title and Claims have overcome each and every objection and the rejections under 35 U.S.C. 101 and 112(b) previously set forth in the Non-Final Office Action mailed 9/11/2025, and have furthermore removed the language invoking the 35 U.S.C. 112(f) interpretations previously applied.
Response to Arguments
Applicant’s arguments, see pages 11-13, filed 12/11/2025, with respect to the rejections of claims 1-9 under 35 U.S.C. 102 and 103 have been fully considered and are persuasive regarding the prior art not teaching the newly amended element of “wherein a position that is a certain distance from an area where traveling is not possible is set to the route.” Therefore, the rejections have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Yoon (KR 20230056270) and Yamaguchi (US 20220397904).
Regarding the argument that the prior art does not teach the elements of the newly added Claim 10, the examiner disagrees based on the teachings of Yoon Pg 27 ¶ 6, as described more fully in the 35 U.S.C. 103 rejection below.
Claim Rejections - 35 USC § 103
Claims 1, 2, 4, 5, and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Yoon et al. (KR 20230056270, hereinafter “Yoon,” all citations and excerpts taken from the attached machine translation) in view of Yamaguchi (US 20220397904, hereinafter “Yamaguchi”).
Regarding Claim 1, Yoon teaches:
An information processing device (Yoon Pg 31 ¶ 10 “Meanwhile, the present invention described above is executed by one or more processes in a computer and can be implemented as a program that can be stored in a computer-readable medium,”)
that causes a movable apparatus to move by setting a route on which the movable apparatus moves, (Yoon Pg 18 ¶ 8 “The cloud server 20 may specify a destination where the robot's mission is to be performed, and set a movement path for the robot to reach the destination. When a movement route is set by the cloud server 20, the robot R may be controlled to move to a corresponding destination in order to perform a mission,”)
the information processing device comprising: at least one memory storing instructions; (Yoon Pg 31 ¶ 11 “Furthermore, the present invention described above can be implemented as computer readable codes or instructions in a medium on which a program is recorded. That is, various control methods according to the present invention may be integrated or individually provided in the form of a program,”)
and at least one processor (Yoon Pg 32 ¶ 3 “Furthermore, in the present invention, the above-described computer is an electronic device equipped with a processor, that is, a CPU (Central Processing Unit), and there is no particular limitation on its type,”)
that, upon execution of the instructions, causes the information processing device to: acquire route information about the route; (Yoon Pg 18 ¶ 10 “The cloud server 20 may generate a movement path of a robot to perform a mission based on a map (or map information) corresponding to the indoor space 10 of the building 1000,”)
acquire environment information on the route for moving the movable apparatus; (Yoon Pg 21 ¶ 4 lines 3-6 “At this time, the camera is configured to capture (or sense) an image of the space 10, that is, an image of the robot R's surroundings. Hereinafter, for convenience of description, an image acquired using a camera provided in the robot R will be named “robot image”,”)
estimate a self-position and orientation of the movable apparatus (Yoon Pg 21 ¶ 4 lines 1-3 “the cloud server 20 according to the present invention receives an image of the space 10 using a camera (not shown) provided in the robot R, and receives It is made to perform visual localization to estimate the position of the robot from the image,”)
and store a result of the estimation in the environment information; (Yoon Pg 20 ¶ 10 line 5 – Pg 21 ¶ 1 line 1 “The location information of the robot being monitored can be stored on a database in which the information of the robot is stored, and the location information of the robot changes over time. can be continuously updated,”)
calculate an accuracy for estimating the self-position and orientation of the movable apparatus on the basis of the environment information; (Yoon Pg 21 ¶ 6 “The cloud server 20 compares the robot image 910 with the map information stored in the database, and as shown in (b) of FIG. 9, the location information corresponding to the current location of the robot R (eg, “3rd floor A area (3, 1, 1)”) can be extracted,” and Pg 27 ¶ 7 – Pg 27 ¶ 8 “In one embodiment, the server receives first sensing information corresponding to a specific area among a plurality of areas constituting the space, […] When the second sensing information is received, the first and second sensing information may be compared. Then, based on the comparison result, the server determines whether to update the part corresponding to the specific area. As a result of the comparison, if the difference between the previously received first sensing information and the newly received second sensing information does not exceed the reference value, the server may not perform a map update based on the second sensing information,” comparison of a difference between first and second sensor information being analogous to the calculation of accuracy, and the decision to update the map based on comparison of this difference to a reference value being a decision based on the accuracy)
and update the route information of the movable apparatus according to the calculated accuracy; (Yoon Pg 28 ¶ 6 “Meanwhile, as the map update is performed, the server may determine whether to modify the driving route corresponding to the specific robot. Specifically, as a result of updating the map, when it is determined that an obstacle exists on the driving route, the server may correct the driving route,” and Pg 29 ¶ 4 “When updating at least part of the map, the server determines whether an update has been made for at least one area corresponding to a plurality of previously transmitted driving routes to the specific robot among the entire areas of the map, and responds to the driving route. When at least one region is updated, the travel path of the specific robot may be changed,” the decision to update the map based on comparison of this difference to a reference value being a decision based on the accuracy as described above, thus changing the driving route due to updating the map is equivalent to changing the driving route due to the accuracy determination)
and cause the movable apparatus to move along a route based on the updated route information, […] (Yoon Pg 28 ¶ 3 “Finally, a step of performing control related to the driving of the specific robot by using the updated map so that the at least one robot and the other specific robot travels in the space (S130),”)
Yoon does not teach:
[…] wherein a position that is a certain distance from an area where traveling is not possible is set to the route.
Within the same field of endeavor as Yoon, Yamaguchi teaches:
[…] wherein a position that is a certain distance from an area where traveling is not possible is set to the route. (Yamaguchi ¶ 29 “Specifically, for example, a data processing unit that executes control to change the flexible virtual bumper for maintaining the space between the mobile object and the obstacle to be equal to or larger than the predetermined distance, and a drive unit that drives the mobile object in such a way that no obstacle enters the flexible virtual bumper are included. The data processing unit executes control to change the flexible virtual bumper at least either in size or shape. For each one of a plurality of travel route candidates for the mobile object, the data processing unit executes a simulation of changing the bumper size in such a way that no obstacle enters the flexible virtual bumper, and selects a safe travel route,” teaching setting a route (select a safe travel route) with positions a certain distance (flexible virtual bumper for maintaining the space equal to or larger than a predetermined distance))
Yoon and Yamaguchi are considered analogous art because both relate to route planning for mobile objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoon’s driving route correction based on an updated map with the simple addition of Yamaguchi’s virtual bumper to determine a safe route maintaining a separation between the mobile object and an obstacle greater than or equal to a predetermined distance. This modification would have been made with a reasonable expectation of success, as motivated by increasing travelable regions and routes and enabling safe travel (Yamaguchi ¶ 0010).
Regarding Claim 2, the combination of Yoon and Yamaguchi teaches the elements of Claim 1 as described above. Yoon further teaches:
wherein execution of the stored instructions by the processor further causes the information processing device to calculate the accuracy using an image included in the environment information. (Yoon Pg 21 ¶ 4 lines 1-3 “the cloud server 20 according to the present invention receives an image of the space 10 using a camera (not shown) provided in the robot R, and receives It is made to perform visual localization to estimate the position of the robot from the image,” and Pg 27 ¶ 7 – Pg 27 ¶ 8 “In one embodiment, the server receives first sensing information corresponding to a specific area among a plurality of areas constituting the space, […] When the second sensing information is received, the first and second sensing information may be compared. Then, based on the comparison result, the server determines whether to update the part corresponding to the specific area. As a result of the comparison, if the difference between the previously received first sensing information and the newly received second sensing information does not exceed the reference value, the server may not perform a map update based on the second sensing information,” the localization being based on image sensor data and the comparison of a difference between first and second sensor information being analogous to the calculation of accuracy)
Regarding Claim 4, the combination of Yoon and Yamaguchi teaches the elements of Claim 1 as described above. Yoon further teaches:
wherein execution of the stored instructions by the processor further causes the information processing device to perform the update based on the accuracy calculated for each one or more areas included in the acquired route information. (Yoon Pg 28 ¶ 5 “When a mission is assigned to a specific robot, the server sets a driving route for the specific robot to travel. The server sets the priority of the at least one zone to other zones so that a part corresponding to at least one zone corresponding to a driving route on which the specific robot will travel is first updated among the plurality of update zones on the map. can be set higher than that,” teaching that the route map is updated, which as explained above means the difference between first and second sensor data is compared against a threshold value, analogous to updating based on an accuracy calculation.)
Regarding Claim 5, the combination of Yoon and Yamaguchi teaches the elements of Claim 1 as described above. Yoon further teaches:
wherein execution of the stored instructions by the processor further causes the information processing device to calculate the accuracy used to estimate a self-position and orientation of the movable apparatus (Yoon Pg 27 ¶ 7 – Pg 27 ¶ 8 “In one embodiment, the server receives first sensing information corresponding to a specific area among a plurality of areas constituting the space, […] When the second sensing information is received, the first and second sensing information may be compared. Then, based on the comparison result, the server determines whether to update the part corresponding to the specific area. As a result of the comparison, if the difference between the previously received first sensing information and the newly received second sensing information does not exceed the reference value, the server may not perform a map update based on the second sensing information,” comparison of a difference between first and second sensor information being analogous to the calculation of accuracy, and the decision to update the map based on comparison of this difference to a reference value being a decision based on the accuracy)
for each one or more areas which are not included in the acquired route information, (Yoon Pg 29 ¶ 7 lines 1-3 “Meanwhile, the server updates a part of the map based on the sensing information received from the fourth robot R4. Accordingly, although the information on the obstacle 1513 is updated, the obstacle 1513 is not located on the driving path of the first robot R1,” teaching updating map areas that are not on a current driving route)
and estimate one or more areas which are not included in the acquired route information as the route of the movable apparatus (Yoon Pg 29 ¶ 5 “For example, referring to FIG. 15 , the server transmits a driving route 1521 to the first robot R1 as a task is assigned to the first robot R1. Thereafter, the server updates a part of the map based on the sensing information received from the second robot R2. Accordingly, information on the obstacle 1511 located on the driving path 1521 of the first robot R1 is updated. The server generates a modified driving path 1522 based on updated information about the obstacle 1511 on the driving path 1521 of the first robot R1, and transmits the modified driving path 1522 to the first robot R1,” teaching setting a new route in an area not previously included in the route)
according to the calculated accuracy. (Yoon Pg 27 ¶ 7 – Pg 27 ¶ 8 “In one embodiment, the server receives first sensing information corresponding to a specific area among a plurality of areas constituting the space, […] When the second sensing information is received, the first and second sensing information may be compared. Then, based on the comparison result, the server determines whether to update the part corresponding to the specific area. As a result of the comparison, if the difference between the previously received first sensing information and the newly received second sensing information does not exceed the reference value, the server may not perform a map update based on the second sensing information,” comparison of a difference between first and second sensor information being analogous to the calculation of accuracy, and the decision to update the map based on comparison of this difference to a reference value being a decision based on the accuracy)
Regarding Claim 8, Yoon teaches:
A control method for causing a movable apparatus to move (Yoon Pg 18 ¶ 8 “The cloud server 20 may specify a destination where the robot's mission is to be performed, and set a movement path for the robot to reach the destination. When a movement route is set by the cloud server 20, the robot R may be controlled to move to a corresponding destination in order to perform a mission,”)
by setting a route on which the movable apparatus moves, the method comprising: acquiring route information on the route; (Yoon Pg 18 ¶ 10 “The cloud server 20 may generate a movement path of a robot to perform a mission based on a map (or map information) corresponding to the indoor space 10 of the building 1000,”)
acquiring environment information on the route for moving the movable apparatus; (Yoon Pg 21 ¶ 4 lines 3-6 “At this time, the camera is configured to capture (or sense) an image of the space 10, that is, an image of the robot R's surroundings. Hereinafter, for convenience of description, an image acquired using a camera provided in the robot R will be named “robot image”,”)
estimating a self-position and orientation of the movable apparatus (Yoon Pg 21 ¶ 4 lines 1-3 “the cloud server 20 according to the present invention receives an image of the space 10 using a camera (not shown) provided in the robot R, and receives It is made to perform visual localization to estimate the position of the robot from the image,”)
and storing a result of the estimation in the environment information; (Yoon Pg 20 ¶ 10 line 5 – Pg 21 ¶ 1 line 1 “The location information of the robot being monitored can be stored on a database in which the information of the robot is stored, and the location information of the robot changes over time. can be continuously updated,”)
calculating an accuracy for estimating the self-position and orientation of the movable apparatus on the basis of the environment information; (Yoon Pg 21 ¶ 6 “The cloud server 20 compares the robot image 910 with the map information stored in the database, and as shown in (b) of FIG. 9, the location information corresponding to the current location of the robot R (eg, “3rd floor A area (3, 1, 1)”) can be extracted,” and Pg 27 ¶ 7 – Pg 27 ¶ 8 “In one embodiment, the server receives first sensing information corresponding to a specific area among a plurality of areas constituting the space, […] When the second sensing information is received, the first and second sensing information may be compared. Then, based on the comparison result, the server determines whether to update the part corresponding to the specific area. As a result of the comparison, if the difference between the previously received first sensing information and the newly received second sensing information does not exceed the reference value, the server may not perform a map update based on the second sensing information,” comparison of a difference between first and second sensor information being analogous to the calculation of accuracy, and the decision to update the map based on comparison of this difference to a reference value being a decision based on the accuracy)
and updating the route information of the movable apparatus according to the calculated accuracy; (Yoon Pg 28 ¶ 6 “Meanwhile, as the map update is performed, the server may determine whether to modify the driving route corresponding to the specific robot. Specifically, as a result of updating the map, when it is determined that an obstacle exists on the driving route, the server may correct the driving route,” and Pg 29 ¶ 4 “When updating at least part of the map, the server determines whether an update has been made for at least one area corresponding to a plurality of previously transmitted driving routes to the specific robot among the entire areas of the map, and responds to the driving route. When at least one region is updated, the travel path of the specific robot may be changed,” the decision to update the map based on comparison of this difference to a reference value being a decision based on the accuracy as described above, thus changing the driving route due to updating the map is equivalent to changing the driving route due to the accuracy determination)
and causing the movable apparatus to move along a route based on the updated route information, […] (Yoon Pg 28 ¶ 3 “Finally, a step of performing control related to the driving of the specific robot by using the updated map so that the at least one robot and the other specific robot travels in the space (S130),”)
Yoon does not teach:
[…] wherein a position that is a certain distance from an area where traveling is not possible is set to the route.
Within the same field of endeavor as Yoon, Yamaguchi teaches:
[…] wherein a position that is a certain distance from an area where traveling is not possible is set to the route. (Yamaguchi ¶ 29 “Specifically, for example, a data processing unit that executes control to change the flexible virtual bumper for maintaining the space between the mobile object and the obstacle to be equal to or larger than the predetermined distance, and a drive unit that drives the mobile object in such a way that no obstacle enters the flexible virtual bumper are included. The data processing unit executes control to change the flexible virtual bumper at least either in size or shape. For each one of a plurality of travel route candidates for the mobile object, the data processing unit executes a simulation of changing the bumper size in such a way that no obstacle enters the flexible virtual bumper, and selects a safe travel route,” teaching setting a route (select a safe travel route) with positions a certain distance (flexible virtual bumper for maintaining the space equal to or larger than a predetermined distance))
Yoon and Yamaguchi are considered analogous art because both relate to route planning for mobile objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoon’s driving route correction based on an updated map with the simple addition of Yamaguchi’s virtual bumper to determine a safe route maintaining a separation between the mobile object and an obstacle greater than or equal to a predetermined distance. This modification would have been made with a reasonable expectation of success, as motivated by increasing travelable regions and routes and enabling safe travel (Yamaguchi ¶ 0010).
Regarding Claim 9, Yoon teaches:
A non-transitory computer-readable storage medium configured to store a computer program of an information processing device, (Yoon Pg 31 ¶ 11 “Furthermore, the present invention described above can be implemented as computer readable codes or instructions in a medium on which a program is recorded. That is, various control methods according to the present invention may be integrated or individually provided in the form of a program,”)
causing a computer to perform each step of a control method for causing a movable apparatus to move by setting a route on which the movable apparatus moves, (Yoon Pg 18 ¶ 8 “The cloud server 20 may specify a destination where the robot's mission is to be performed, and set a movement path for the robot to reach the destination. When a movement route is set by the cloud server 20, the robot R may be controlled to move to a corresponding destination in order to perform a mission,”)
the method comprising: acquiring route information on the route; (Yoon Pg 18 ¶ 10 “The cloud server 20 may generate a movement path of a robot to perform a mission based on a map (or map information) corresponding to the indoor space 10 of the building 1000,”)
acquiring environment information on the route for moving the movable apparatus; (Yoon Pg 21 ¶ 4 lines 3-6 “At this time, the camera is configured to capture (or sense) an image of the space 10, that is, an image of the robot R's surroundings. Hereinafter, for convenience of description, an image acquired using a camera provided in the robot R will be named “robot image”,”)
estimating a self-position and orientation of the movable apparatus (Yoon Pg 21 ¶ 4 lines 1-3 “the cloud server 20 according to the present invention receives an image of the space 10 using a camera (not shown) provided in the robot R, and receives It is made to perform visual localization to estimate the position of the robot from the image,”)
and storing a result of the estimation in the environment information; (Yoon Pg 20 ¶ 10 line 5 – Pg 21 ¶ 1 line 1 “The location information of the robot being monitored can be stored on a database in which the information of the robot is stored, and the location information of the robot changes over time. can be continuously updated,”)
calculating an accuracy for estimating the self-position and orientation of the movable apparatus on the basis of the environment information; (Yoon Pg 21 ¶ 6 “The cloud server 20 compares the robot image 910 with the map information stored in the database, and as shown in (b) of FIG. 9, the location information corresponding to the current location of the robot R (eg, “3rd floor A area (3, 1, 1)”) can be extracted,” and Pg 27 ¶ 7 – Pg 27 ¶ 8 “In one embodiment, the server receives first sensing information corresponding to a specific area among a plurality of areas constituting the space, […] When the second sensing information is received, the first and second sensing information may be compared. Then, based on the comparison result, the server determines whether to update the part corresponding to the specific area. As a result of the comparison, if the difference between the previously received first sensing information and the newly received second sensing information does not exceed the reference value, the server may not perform a map update based on the second sensing information,” comparison of a difference between first and second sensor information being analogous to the calculation of accuracy, and the decision to update the map based on comparison of this difference to a reference value being a decision based on the accuracy)
and updating the route information of the movable apparatus according to the calculated accuracy; (Yoon Pg 28 ¶ 6 “Meanwhile, as the map update is performed, the server may determine whether to modify the driving route corresponding to the specific robot. Specifically, as a result of updating the map, when it is determined that an obstacle exists on the driving route, the server may correct the driving route,” and Pg 29 ¶ 4 “When updating at least part of the map, the server determines whether an update has been made for at least one area corresponding to a plurality of previously transmitted driving routes to the specific robot among the entire areas of the map, and responds to the driving route. When at least one region is updated, the travel path of the specific robot may be changed,” the decision to update the map based on comparison of this difference to a reference value being a decision based on the accuracy as described above, thus changing the driving route due to updating the map is equivalent to changing the driving route due to the accuracy determination)
and causing the movable apparatus to move along a route based on the updated route information, […] (Yoon Pg 28 ¶ 3 “Finally, a step of performing control related to the driving of the specific robot by using the updated map so that the at least one robot and the other specific robot travels in the space (S130),”)
Yoon does not teach:
[…] wherein a position that is a certain distance from an area where traveling is not possible is set to the route.
Within the same field of endeavor as Yoon, Yamaguchi teaches:
[…] wherein a position that is a certain distance from an area where traveling is not possible is set to the route. (Yamaguchi ¶ 29 “Specifically, for example, a data processing unit that executes control to change the flexible virtual bumper for maintaining the space between the mobile object and the obstacle to be equal to or larger than the predetermined distance, and a drive unit that drives the mobile object in such a way that no obstacle enters the flexible virtual bumper are included. The data processing unit executes control to change the flexible virtual bumper at least either in size or shape. For each one of a plurality of travel route candidates for the mobile object, the data processing unit executes a simulation of changing the bumper size in such a way that no obstacle enters the flexible virtual bumper, and selects a safe travel route,” teaching setting a route (select a safe travel route) with positions a certain distance (flexible virtual bumper for maintaining the space equal to or larger than a predetermined distance))
Yoon and Yamaguchi are considered analogous art because both relate to route planning for mobile objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yoon’s driving route correction based on an updated map with the simple addition of Yamaguchi’s virtual bumper to determine a safe route maintaining a separation between the mobile object and an obstacle greater than or equal to a predetermined distance. This modification would have been made with a reasonable expectation of success, as motivated by increasing travelable regions and routes and enabling safe travel (Yamaguchi ¶ 0010).
Regarding Claim 10, the combination of Yoon and Yamaguchi teaches the elements of Claim 5 as described above. Yoon further teaches:
wherein execution of the stored instructions by the processor further causes the information processing device to calculate the accuracy used to estimate the self-position and orientation of the movable apparatus based on the environmental information created in the past. (Yoon Pg 27 ¶ 6 “Meanwhile, the present invention may determine whether to update a map by using both new sensing information and previously received sensing information without performing an atonal map update based on new sensing information,” teaching basing the update decision on previously received sensing information (information created in the past))
Claims 3, 6, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Yoon in view of Yamaguchi and Cui et al. (US 20190375103, hereinafter “Cui”).
Regarding Claim 3, the combination of Yoon and Yamaguchi teaches the elements of Claim 1 as described above. Yoon further teaches:
wherein execution of the stored instructions by the processor further causes the information processing device to calculate the accuracy using […] the environment information. (Yoon Pg 21 ¶ 4 lines 1-3 “the cloud server 20 according to the present invention receives an image of the space 10 using a camera (not shown) provided in the robot R, and receives It is made to perform visual localization to estimate the position of the robot from the image,” and Pg 27 ¶ 7 – Pg 27 ¶ 8 “In one embodiment, the server receives first sensing information corresponding to a specific area among a plurality of areas constituting the space, […] When the second sensing information is received, the first and second sensing information may be compared. Then, based on the comparison result, the server determines whether to update the part corresponding to the specific area. As a result of the comparison, if the difference between the previously received first sensing information and the newly received second sensing information does not exceed the reference value, the server may not perform a map update based on the second sensing information,” the localization being based on image sensor data and the comparison of a difference between first and second sensor information being analogous to the calculation of accuracy)
Yoon does not teach:
[…] a shape included in […]
Within the same field of endeavor as Yoon, Cui teaches:
[…] using a shape included in the environment information. (Cui ¶ 0054 lines 6-18 “With the environment in which a cleaning robot is located as an example, for example, the objects identified according to the identification manner provided in step S110 include a television, an air conditioner, a chair, shoes, a ball, etc. Based on whether the identified target contour (or a frame representing the target) in the image and the identified border line have an overlapped part the chair, the shoes and the ball are determined to be placed on the ground, and then the relative position between the identified object placed on the ground and the mobile robot can be determined through performing step S120, such that the cleaning robot can avoid the object or clean the surrounding of the object enhancedly based on cleaning instructions during cleaning,” teaching recognition of a shape (a contour) used for navigation, as applies to the accuracy calculation of Yoon)
Yoon and Cui are considered analogous because they both relate to autonomous vehicle navigation systems based on image recognition localization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the visual localization of Yoon, which is based on a robot image and a comparison of a sensor data difference to a reference value, by further adding the contour recognition of Cui as the particular comparison element. This modification would have been made with a reasonable expectation of success, the motivation being the combination of prior art elements according to known methods to yield predictable results (MPEP 2143(I)(A)).
Regarding Claim 6, the combination of Yoon and Yamaguchi teaches the elements of Claim 1 as described above. Yoon further teaches:
wherein execution of the stored instructions by the processor further causes the information processing device to output […] to a display device. (Yoon Pg 14 ¶ 12 lines 1-3 “Next, the output unit 130 is a means for outputting at least one of visual, auditory and tactile information to a person or robot R in the building 1000, and includes a display unit 131,”)
Yoon does not teach:
[…] the updated route information […]
Within the same field of endeavor as Yoon, Cui teaches:
[…] output the updated route information to a display device. (Cui ¶ 0019 “In some embodiments, the input device comprises a network interface unit, the network interface unit is configured to acquire the instruction containing object type label from a terminal device which is connected with the mobile robot wirelessly, and/or send the movement route from the corresponding current position to the corresponding destination position to the terminal device, so as to display the movement route, the current position information and/or destination position information on the map displayed by the terminal device,”)
Yoon and Cui are considered analogous because they both relate to autonomous vehicle navigation systems based on image recognition localization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the output unit and display unit of Yoon by further adding the display of the movement route on the analogous terminal device of Cui. This modification would have been made with a reasonable expectation of success, the motivation being to increase the user's knowledge of the system and the route.
Regarding Claim 7, the combination of Yoon, Yamaguchi, and Cui teaches the elements of Claim 6 as described above. Yoon does not teach:
wherein the display device displays a traveling direction and a turning direction of the movable apparatus.
Within the same field of endeavor as Yoon, Cui teaches:
wherein the display device displays a traveling direction and a turning direction of the movable apparatus. (Cui ¶ 0019 lines 7-10 “so as to display the movement route […] on the map displayed by the terminal device,” the movement route including traveling and turning directions)
Yoon and Cui are considered analogous because they both relate to autonomous vehicle navigation systems based on image recognition localization. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the output unit and display unit of Yoon by further adding the display of the movement route on the analogous terminal device of Cui. This modification would have been made with a reasonable expectation of success, the motivation being to increase the user's knowledge of the system and the route.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Nagano (JP 2020166702), included in the IDS, is noted as teaching a similar reliability (accuracy) calculation used to create a reliability map.
Wada et al (WO 2021166621) teaches shape recognition for navigation and display of a traveling direction on a display device.
Zhou et al (US 20210211568) also teaches estimating accuracy based on historical data generated from previously recorded images.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZACHARY E GLADE whose telephone number is (703)756-1502. The examiner can normally be reached 4-5-9 7:30-16:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kito Robinson can be reached at (571) 270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZACHARY E. F. GLADE/Examiner, Art Unit 3664
/KITO R ROBINSON/Supervisory Patent Examiner, Art Unit 3664