DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The references listed on the information disclosure statement filed on 05/03/2024 have been considered by the Examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5-14 and 29-36 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 5, lines 7-8, recites “operate in a second mode that uses only one of the first laser scanner and the second laser scanner to determine the location of the robot”. It is unclear to the Examiner whether, in the second mode, both laser scanners continue to operate but scan information from only one of them is used to determine the location, or whether exactly one of the two scanners is operated at all. Therefore, claim 5 is indefinite. For purposes of examination, the Examiner interprets both scanners as operating, with information from only one of the two scanners being used. Claims 6-14 and 29-36 are rejected as being dependent upon a rejected claim.
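For clarity of the record, the two readings of the “second mode” limitation may be sketched as follows. This is purely illustrative pseudocode; all names and values are hypothetical and appear in neither the application nor the cited art.

```python
# Hypothetical sketch of the two readings of claim 5's "second mode".
# All names are illustrative; nothing here comes from the application.

def second_mode_adopted(scanner_a, scanner_b):
    """Adopted interpretation: both scanners operate (both produce scans),
    but only one scan is used to determine the location."""
    scan_a = scanner_a()   # still operating in the background
    scan_b = scanner_b()   # still operating, but its output is unused
    return localize(scan_a)  # information used from only one scanner

def second_mode_alternative(scanner_a, scanner_b):
    """Alternative reading: exactly one scanner is operated at all."""
    scan_a = scanner_a()   # scanner_b is never run
    return localize(scan_a)

def localize(scan):
    # Placeholder pose estimate: centroid of the scan points.
    xs = [p[0] for p in scan]
    ys = [p[1] for p in scan]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Under either reading the resulting location is the same; the ambiguity concerns only whether the unused scanner continues to operate.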
Claim 6, line 3, recites “a position of the robot”. Claim 5, line 6, recites “a location of the robot”. It is unclear to the Examiner whether “a position of the robot” is different from “a location of the robot”, or whether the terms are interchangeable and convey the same information. Therefore, claim 6 is indefinite. Claims 7 and 29-30 are rejected for similar reasoning. For purposes of examination, the Examiner interprets the position of the robot to be the same as the location of the robot. Claims 8-9 and 30-31 are rejected as being dependent upon a rejected claim.
Claim 10 recites the limitation “the orientation of the robot” in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 30, line 2, recites “a position of the robot”. Claim 29, line 2, recites “a position of the robot”. It is unclear to the Examiner if “a position of the robot” in claim 30 is the same as the previously recited position of the robot or if this is a different or new position being introduced. Therefore, claim 30 is indefinite. For purposes of examination, the Examiner interprets this to be the same position as the previously recited position of the robot. Claim 31 is rejected as being dependent upon a rejected claim.
Claim 36, lines 3-4, recites “wherein in response to failure of the first localization attempt”. It is unclear to the Examiner what constitutes “failure” of a localization attempt, as the claim provides no criterion for failure. For example, failure could mean falling below a threshold confidence level for localization, being unable to localize within a specific timeframe, or exceeding a threshold amount of error in the location of the robot. Therefore, claim 36 is indefinite. For purposes of examination, the Examiner interprets failure as the confidence or error in the robot's location crossing a threshold when localizing.
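The interpretation adopted for purposes of examination may be sketched as follows. The threshold value and names are hypothetical illustrations by the Examiner, not disclosures of the application.

```python
# Hypothetical sketch of the Examiner's interpretation of "failure" of a
# localization attempt: confidence falls below a threshold. The threshold
# value (0.8) and the function name are illustrative only.

CONFIDENCE_THRESHOLD = 0.8

def localization_failed(confidence: float) -> bool:
    """Failure = confidence below a threshold (adopted interpretation).
    Other readings the claim leaves open: a localization timeout, or
    position error above a threshold."""
    return confidence < CONFIDENCE_THRESHOLD
```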
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 5 and 36 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Batts et al. (US 20200339151 A1).
Regarding claim 5, Batts teaches a mobile robot comprising: a drive system configured to move the mobile robot (¶[0015] and [0079] “autonomous vehicle”); a first laser scanner (¶[0168] “first subset of sensors” “LiDAR sensor”); a second laser scanner (¶[0168] “other LiDAR sensor”); and a localization system configured to operate in a first mode that uses both the first laser scanner and the second laser scanner to determine a location of the robot (¶[0015] “receiving, by a computer system of an autonomous vehicle (AV), information from a set of two or more sensors of the AV”; ¶[0079] “sensors 121 for measuring or inferring properties of state or condition of the AV 100, such as the AV's position”), the localization system configured to operate in a second mode that uses only one of the first laser scanner and the second laser scanner to determine the location of the robot (¶[0015] “in response to determining that the level of confidence of the received information from the at least one sensor of the first subset of sensors is less than the first threshold, determining, by the computer system, to adjust a driving function of the AV based on a second subset of sensors that excludes the first subset of sensors”).
Regarding claim 36, Batts teaches the mobile robot of Claim 5, wherein the localization system is configured to operate in the first mode to use information from both the first laser scanner and the second laser scanner to perform a first localization attempt (¶[0015] “receiving, by a computer system of an autonomous vehicle (AV), information from a set of two or more sensors of the AV”; ¶[0079] “sensors 121 for measuring or inferring properties of state or condition of the AV 100, such as the AV's position”), and wherein in response to failure of the first localization attempt (¶[0015] “in response to determining that the level of confidence of the received information from the at least one sensor of the first subset of sensors is less than the first threshold, determining, by the computer system, to adjust a driving function of the AV based on a second subset of sensors that excludes the first subset of sensors”), the localization system is configured to operate in the second mode to perform a second localization attempt using information from only one of the first laser scanner and the second laser scanner (¶[0015] “in response to determining that the level of confidence of the received information from the at least one sensor of the first subset of sensors is less than the first threshold, determining, by the computer system, to adjust a driving function of the AV based on a second subset of sensors that excludes the first subset of sensors”).
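The mapping of claim 36 onto Batts, under the interpretation of “failure” adopted above, may be sketched as follows. The names, threshold, and structure are the Examiner's illustration only and are not drawn from Batts.

```python
# Illustrative sketch of the claim 36 mapping: attempt localization using
# both scanners; if that attempt "fails" (here, confidence below a
# hypothetical threshold), retry using only one scanner.

def localize_with_fallback(localize_both, localize_single, threshold=0.8):
    pose, confidence = localize_both()        # first mode: both scanners
    if confidence < threshold:                # failure of first attempt
        pose, confidence = localize_single()  # second mode: one scanner
    return pose, confidence
```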
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Batts et al. (US 20200339151 A1) in view of Suvarna et al. (US 20200019169 A1).
Regarding claim 6, Batts does not explicitly teach the mobile robot of Claim 5, wherein the localization system is configured to determine whether to operate in the first mode or the second mode based at least in part on a position of the robot. However, Suvarna discloses automatic recognition of floorplans by a cleaning robot and teaches the mobile robot of Claim 5, wherein the localization system is configured to determine whether to operate in the first mode or the second mode based at least in part on a position of the robot (¶[0004] “robot automatically determines which floorplan it is in by localizing and trying to match its detected environment to the stored map” “no match above a confidence threshold, the robot assumes it is a new floor or the floorplan has changed (e.g., furniture has moved), and the robot initiates a discovery mode”, i.e., based upon its location being in an area with no matched floorplan, the robot initiates another mode).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the localization system is configured to determine whether to operate in the first mode or the second mode based at least in part on a position of the robot, as taught by Suvarna, to provide for mapping the area and adding it to the robot's memory (Suvarna at ¶[0004]).
Regarding claim 7, Batts does not explicitly teach the mobile robot of Claim 5, wherein the localization system is configured to compare a position of the robot to a designated area to determine whether the robot is inside of the designated area, wherein the localization system is configured to operate in the first mode when the robot is outside the designated area, and to operate in the second mode when the robot is inside the designated area. However, Suvarna discloses automatic recognition of floorplans by a cleaning robot and teaches the mobile robot of Claim 5, wherein the localization system is configured to compare a position of the robot to a designated area to determine whether the robot is inside of the designated area (¶[0004]-[0008] “robot performs place recognition by trying to match its environment with different possible locations within a floorplan”, i.e., determining whether the environment is mapped or not), wherein the localization system is configured to operate in the first mode when the robot is outside the designated area, and to operate in the second mode when the robot is inside the designated area (¶[0004]-[0008] and [0072] “robot performs place recognition by trying to match its environment with different possible locations within a floorplan”, i.e., if there is no match for the floorplan or area a discovery mode is activated and the environment is mapped and if the robot has a best match floor plan (i.e., known environment) then it begins a cleaning (i.e., cleaning mode)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the localization system is configured to compare a position of the robot to a designated area to determine whether the robot is inside of the designated area, wherein the localization system is configured to operate in the first mode when the robot is outside the designated area, and to operate in the second mode when the robot is inside the designated area, as taught by Suvarna, to provide for mapping the area and adding it to the robot's memory (Suvarna at ¶[0004]).
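The claim 7 limitation as mapped above may be sketched as follows. The rectangular area test and all names are the Examiner's hypothetical illustration, not a disclosure of Suvarna.

```python
# Illustrative sketch of the claim 7 limitation: operate in the first mode
# when the robot is outside a designated area and in the second mode when
# inside it. The rectangle representation is hypothetical.

def inside(position, area):
    """area = (xmin, ymin, xmax, ymax); position = (x, y)."""
    xmin, ymin, xmax, ymax = area
    x, y = position
    return xmin <= x <= xmax and ymin <= y <= ymax

def select_mode(position, designated_area):
    # Compare the robot's position to the designated area.
    return "second" if inside(position, designated_area) else "first"
```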
Claims 8-9 and 29-32 are rejected under 35 U.S.C. 103 as being unpatentable over Batts et al. (US 20200339151 A1) in view of Suvarna et al. (US 20200019169 A1), as applied to claims 5 and 7 above, and further in view of Heinla et al. (US 20180253107 A1).
Regarding claim 8, the combination of Batts and Suvarna does not explicitly teach the mobile robot of Claim 7, wherein the localization system is configured to compare an orientation of the robot to a direction associated with the designated area, and to select one of the first laser scanner or the second laser scanner to use for localization based at least in part on the comparison of the orientation of the robot to the direction associated with the designated area. However, Heinla discloses a mobile robot system and method for autonomous localization and teaches the mobile robot of Claim 7, wherein the localization system is configured to compare an orientation of the robot to a direction associated with the designated area (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras, i.e., front cameras could have no weight due to errors and it is using rear cameras for localization), and to select one of the first laser scanner or the second laser scanner to use for localization based at least in part on the comparison of the orientation of the robot to the direction associated with the designated area (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras could have no weight due to errors and it is using rear cameras for localization).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts as modified by Suvarna to provide, with a reasonable expectation of success, wherein the localization system is configured to compare an orientation of the robot to a direction associated with the designated area, and to select one of the first laser scanner or the second laser scanner to use for localization based at least in part on the comparison of the orientation of the robot to the direction associated with the designated area, as taught by Heinla, to provide for reducing the pose error estimate (Heinla at ¶[0036]).
Regarding claim 9, the combination of Batts and Suvarna does not explicitly teach the mobile robot of Claim 7, wherein the localization system is configured to determine an angle between an orientation of the robot and a direction associated with the designated area, wherein the localization system uses the first laser scanner for localization when the angle is in a first angle range, and wherein the localization system uses the second laser scanner for localization when the angle is in a second angle range. However, Heinla discloses a mobile robot system and method for autonomous localization and teaches the mobile robot of Claim 7, wherein the localization system is configured to determine an angle between an orientation of the robot and a direction associated with the designated area (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras, i.e., front cameras could have no weight due to errors and it is using rear cameras for localization), wherein the localization system uses the first laser scanner for localization when the angle is in a first angle range, and wherein the localization system uses the second laser scanner for localization when the angle is in a second angle range (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras could have no weight due to errors and it is using rear cameras for localization).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts as modified by Suvarna to provide, with a reasonable expectation of success, wherein the localization system is configured to determine an angle between an orientation of the robot and a direction associated with the designated area, wherein the localization system uses the first laser scanner for localization when the angle is in a first angle range, and wherein the localization system uses the second laser scanner for localization when the angle is in a second angle range, as taught by Heinla, to provide for reducing the pose error estimate (Heinla at ¶[0036]).
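The angle-range selection of claim 9 may be sketched as follows. The particular ranges (0-90 and 90-180 degrees) and all names are the Examiner's hypothetical illustration; the claim recites only unspecified first and second angle ranges.

```python
# Illustrative sketch of claim 9's angle-range scanner selection. The
# ranges chosen here are hypothetical; the claim does not specify them.

def angle_between(robot_heading_deg, area_direction_deg):
    """Smallest absolute angle between the two headings, in [0, 180]."""
    diff = abs(robot_heading_deg - area_direction_deg) % 360.0
    return min(diff, 360.0 - diff)

def select_scanner(robot_heading_deg, area_direction_deg):
    angle = angle_between(robot_heading_deg, area_direction_deg)
    # First angle range (<= 90 deg): robot faces roughly toward the
    # area's direction, so use the first scanner; otherwise the second.
    return "first" if angle <= 90.0 else "second"
```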
Regarding claim 29, Batts does not explicitly teach the mobile robot of Claim 5, wherein the localization system is configured to compare a position of the mobile robot to a designated area to determine that the position of the mobile robot is outside the designated area, and wherein the localization system is configured to operate in the first mode in response to the determination that the position of the mobile robot is outside the designated area, by: performing a first laser scan using the first laser scanner to provide first laser scan information; performing a second laser scan using the second laser scanner to provide second laser scan information; and comparing the first laser scan information and the second laser scan information to a map of the environment to determine a new location of the mobile robot. However, Suvarna discloses automatic recognition of floorplans by a cleaning robot and teaches the mobile robot of Claim 5, wherein the localization system is configured to compare a position of the mobile robot to a designated area to determine that the position of the mobile robot is outside the designated area (¶[0004]-[0008] “robot performs place recognition by trying to match its environment with different possible locations within a floorplan”, i.e., determining whether the environment is mapped or not), and wherein the localization system is configured to operate in the first mode in response to the determination that the position of the mobile robot is outside the designated area (¶[0004]-[0008] and [0072] “robot performs place recognition by trying to match its environment with different possible locations within a floorplan”, i.e., if there is no match for the floorplan or area a discovery mode is activated and the environment is mapped and if the robot has a best match floor plan (i.e., known environment) then it begins a cleaning (i.e., cleaning mode)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the localization system is configured to compare a position of the mobile robot to a designated area to determine that the position of the mobile robot is outside the designated area, and wherein the localization system is configured to operate in the first mode in response to the determination that the position of the mobile robot is outside the designated area, as taught by Suvarna, to provide for mapping the area and adding it to the robot's memory (Suvarna at ¶[0004]).
The combination of Batts and Suvarna does not explicitly teach performing a first laser scan using the first laser scanner to provide first laser scan information; performing a second laser scan using the second laser scanner to provide second laser scan information; and comparing the first laser scan information and the second laser scan information to a map of the environment to determine a new location of the mobile robot. However, Heinla discloses a mobile robot system and method for autonomous localization and teaches performing a first laser scan using the first laser scanner to provide first laser scan information (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and back cameras); performing a second laser scan using the second laser scanner to provide second laser scan information (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and back cameras); and comparing the first laser scan information and the second laser scan information to a map of the environment to determine a new location of the mobile robot (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and back cameras, and forming a hypothesis on the robot's pose based on the combined data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, performing a first laser scan using the first laser scanner to provide first laser scan information; performing a second laser scan using the second laser scanner to provide second laser scan information; and comparing the first laser scan information and the second laser scan information to a map of the environment to determine a new location of the mobile robot, as taught by Heinla, to provide for reducing the pose error estimate (Heinla at ¶[0036]).
Regarding claim 30, Batts does not explicitly teach the mobile robot of Claim 29, wherein the localization system is configured to compare a position of the mobile robot to a designated area to determine that the position of the mobile robot is inside the designated area, and wherein the localization system is configured to operate in the second mode in response to the determination that the position of the mobile robot is inside the designated area, by: selecting one of the first laser scanner or the second laser scanner based at least in part on the orientation of the mobile robot; performing a laser scan using the selected one of the first laser scanner or the second laser scanner to provide laser scan information; and comparing the laser scan information to a map of the environment to determine a new location of the mobile robot. However, Suvarna discloses automatic recognition of floorplans by a cleaning robot and teaches the mobile robot of Claim 29, wherein the localization system is configured to compare a position of the mobile robot to a designated area to determine that the position of the mobile robot is inside the designated area (¶[0004]-[0008] “robot performs place recognition by trying to match its environment with different possible locations within a floorplan”, i.e., determining whether the environment is mapped or not), and wherein the localization system is configured to operate in the second mode in response to the determination that the position of the mobile robot is inside the designated area (¶[0004]-[0008] and [0072] “robot performs place recognition by trying to match its environment with different possible locations within a floorplan”, i.e., if there is no match for the floorplan or area a discovery mode is activated and the environment is mapped and if the robot has a best match floor plan (i.e., known environment) then it begins a cleaning (i.e., cleaning mode)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the localization system is configured to compare a position of the mobile robot to a designated area to determine that the position of the mobile robot is inside the designated area, and wherein the localization system is configured to operate in the second mode in response to the determination that the position of the mobile robot is inside the designated area, as taught by Suvarna, to provide for mapping the area and adding it to the robot's memory (Suvarna at ¶[0004]).
The combination of Batts and Suvarna does not explicitly teach selecting one of the first laser scanner or the second laser scanner based at least in part on the orientation of the mobile robot; performing a laser scan using the selected one of the first laser scanner or the second laser scanner to provide laser scan information; and comparing the laser scan information to a map of the environment to determine a new location of the mobile robot. However, Heinla discloses a mobile robot system and method for autonomous localization and teaches selecting one of the first laser scanner or the second laser scanner based at least in part on the orientation of the mobile robot (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras, i.e., front cameras could have no weight due to errors and it is using rear cameras for localization); performing a laser scan using the selected one of the first laser scanner or the second laser scanner to provide laser scan information (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras, i.e., front cameras could have no weight due to errors and it is using rear cameras for localization); and comparing the laser scan information to a map of the environment to determine a new location of the mobile robot (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and/or back cameras, and forming a hypothesis on the robot's pose based on the combined data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, selecting one of the first laser scanner or the second laser scanner based at least in part on the orientation of the mobile robot; performing a laser scan using the selected one of the first laser scanner or the second laser scanner to provide laser scan information; and comparing the laser scan information to a map of the environment to determine a new location of the mobile robot, as taught by Heinla, to provide for reducing the pose error estimate (Heinla at ¶[0036]).
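The second-mode flow of claim 30 as mapped above may be sketched end to end as follows. The orientation test, the toy nearest-landmark map matching, and all names are the Examiner's hypothetical illustration, not disclosures of the application or the cited art.

```python
# Hypothetical end-to-end sketch of the claim 30 second-mode flow: select
# one scanner based on the robot's orientation, take a scan with it, and
# compare the scan to a map to determine a new location. The "map
# matching" here is a toy nearest-landmark lookup, purely illustrative.

def second_mode_localize(orientation_deg, scanners, map_landmarks):
    # Select one scanner based at least in part on orientation.
    scanner = scanners["front"] if -90 <= orientation_deg <= 90 else scanners["rear"]
    scan = scanner()  # laser scan information from the selected scanner
    # Compare the scan to the map: report the landmark closest to the
    # scan centroid as the new location of the robot.
    cx = sum(p[0] for p in scan) / len(scan)
    cy = sum(p[1] for p in scan) / len(scan)
    return min(map_landmarks, key=lambda m: (m[0] - cx) ** 2 + (m[1] - cy) ** 2)
```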
Regarding claim 31, the combination of Batts and Suvarna does not explicitly teach the mobile robot of Claim 30, wherein the localization system is configured to select the one of the first laser scanner or the second laser scanner based at least in part on a comparison of an orientation of the mobile robot to a direction associated with the designated area. However, Heinla discloses a mobile robot system and method for autonomous localization and teaches the mobile robot of Claim 30, wherein the localization system is configured to select the one of the first laser scanner or the second laser scanner based at least in part on a comparison of an orientation of the mobile robot to a direction associated with the designated area (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras could have no weight due to errors and it is using rear cameras for localization).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the localization system is configured to select the one of the first laser scanner or the second laser scanner based at least in part on a comparison of an orientation of the mobile robot to a direction associated with the designated area, as taught by Heinla, to provide for reducing the pose error estimate (Heinla at ¶[0036]).
Regarding claim 32, Batts teaches the mobile robot of Claim 5, wherein the mobile robot comprises: a hardware processor (¶[0004] “processors”); and computer readable memory (¶[0004] “memory”) that includes a map of the environment (¶[0081] “stored information includes maps”), and a robot location (¶[0120] “stored data” “physical locations in the field of view of the AV”).
Batts does not explicitly teach a designated area in the environment, and a direction associated with the designated area; the computer readable memory having instructions that are executable by the processor to cause the robot to: compare the robot location to the designated area to determine whether the robot location is inside or outside the designated area; when the robot location is determined to be outside the designated area, operate the localization system in the first mode to: perform a first laser scan using the first laser scanner to provide first laser scan information; perform a second laser scan using the second laser scanner to provide second laser scan information; and compare the first laser scan information and the second laser scan information to the map of the environment to determine a new robot location; and when the robot location is determined to be inside the designated area, operate the localization system in the second mode to: compare an orientation of the robot to the direction associated with the designated area to determine an angle between the orientation of the robot and the direction associated with the designated area; identify one of the first laser scanner or the second laser scanner based at least in part on the determined angle; perform a laser scan using the one of the first laser scanner or the second laser scanner to provide laser scan information; and compare the laser scan information to the map of the environment to determine a new robot location. 
However, Suvarna discloses automatic recognition of floorplans by a cleaning robot and teaches a designated area in the environment (¶[0004]-[0008] “robot performs place recognition by trying to match its environment with different possible locations within a floorplan”, i.e., determining whether the environment is mapped or not, where the unmapped area could be a designated area); the computer readable memory having instructions that are executable by the processor to cause the robot to: compare the robot location to the designated area to determine whether the robot location is inside or outside the designated area (¶[0004]-[0008] “robot performs place recognition by trying to match its environment with different possible locations within a floorplan”, i.e., determining whether the environment is mapped or not); when the robot location is determined to be outside the designated area, operate the localization system in the first mode (¶[0004]-[0008] and [0072] “robot performs place recognition by trying to match its environment with different possible locations within a floorplan”, i.e., if there is no match for the floorplan or area a discovery mode is activated and the environment is mapped and if the robot has a best match floor plan (i.e., known environment) then it begins a cleaning (i.e., cleaning mode)); and when the robot location is determined to be inside the designated area, operate the localization system in the second mode (¶[0004]-[0008] and [0072] “robot performs place recognition by trying to match its environment with different possible locations within a floorplan”, i.e., if there is no match for the floorplan or area a discovery mode is activated and the environment is mapped and if the robot has a best match floor plan (i.e., known environment) then it begins a cleaning (i.e., cleaning mode)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, a designated area in the environment, and the computer readable memory having instructions that are executable by the processor to cause the robot to: compare the robot location to the designated area to determine whether the robot location is inside or outside the designated area; when the robot location is determined to be outside the designated area, operate the localization system in the first mode and when the robot location is determined to be inside the designated area, operate the localization system in the second mode, as taught by Suvarna, to provide for mapping the area and adding it to the robot's memory. (Suvarna at ¶[0004])
The combination of Batts and Suvarna does not explicitly teach a direction associated with the designated area; the first mode to: perform a first laser scan using the first laser scanner to provide first laser scan information; perform a second laser scan using the second laser scanner to provide second laser scan information; and compare the first laser scan information and the second laser scan information to the map of the environment to determine a new robot location; the second mode to: compare an orientation of the robot to the direction associated with the designated area to determine an angle between the orientation of the robot and the direction associated with the designated area; identify one of the first laser scanner or the second laser scanner based at least in part on the determined angle; perform a laser scan using the one of the first laser scanner or the second laser scanner to provide laser scan information; and compare the laser scan information to the map of the environment to determine a new robot location. However, Heinla discloses a mobile robot system and method for autonomous localization and teaches a direction associated with the designated area (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras, e.g., the front cameras may be given no weight due to errors while the rear cameras, facing in a rear direction, are used for localization); the first mode to: perform a first laser scan using the first laser scanner to provide first laser scan information (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and back cameras); perform a second laser scan using the second laser scanner to provide second laser scan information (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and back cameras); and compare the first laser scan information and the second laser scan information to the map of the environment to determine a new robot location (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and back cameras, and forming a hypothesis on the robot's pose based on the combined data); the second mode to: compare an orientation of the robot to the direction associated with the designated area to determine an angle between the orientation of the robot and the direction associated with the designated area (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras, e.g., the front cameras may be given no weight due to errors while the rear cameras are used for localization); identify one of the first laser scanner or the second laser scanner based at least in part on the determined angle (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., the front cameras may be given no weight due to errors while the rear cameras are used for localization); perform a laser scan using the one of the first laser scanner or the second laser scanner to provide laser scan information (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., the front cameras may be given no weight due to errors while the rear cameras are used for localization); and compare the laser scan information to the map of the environment to determine a new robot location (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and back cameras, and forming a hypothesis on the robot's pose based on the combined data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, a direction associated with the designated area; the first mode to: perform a first laser scan using the first laser scanner to provide first laser scan information; perform a second laser scan using the second laser scanner to provide second laser scan information; and compare the first laser scan information and the second laser scan information to the map of the environment to determine a new robot location; the second mode to: compare an orientation of the robot to the direction associated with the designated area to determine an angle between the orientation of the robot and the direction associated with the designated area; identify one of the first laser scanner or the second laser scanner based at least in part on the determined angle; perform a laser scan using the one of the first laser scanner or the second laser scanner to provide laser scan information; and compare the laser scan information to the map of the environment to determine a new robot location, as taught by Heinla, to reduce the pose error estimate. (Heinla at ¶[0036])
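For illustration only (this sketch is hypothetical, is not part of the record, and all names such as `select_mode_and_scanners`, `first_scanner`, and `second_scanner` are illustrative, not drawn from any cited reference), the two-mode selection recited in the claim, i.e., checking whether the robot is inside the designated area and, if so, choosing one scanner based on the angle between the robot's orientation and the area's associated direction, could be expressed along these lines:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is point (x, y) inside the polygon (list of vertices)?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray from the point cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def select_mode_and_scanners(robot_xy, robot_heading_deg,
                             area_polygon, area_direction_deg):
    """Hypothetical sketch of the claimed two-mode localization selection.

    Outside the designated area: first mode, scan with both laser scanners.
    Inside the designated area: second mode, identify a single scanner based
    on the angle between the robot orientation and the area's direction.
    """
    if not point_in_polygon(robot_xy, area_polygon):
        return "first_mode", ["first_scanner", "second_scanner"]
    # Signed angular difference, wrapped into [-180, 180)
    angle = (robot_heading_deg - area_direction_deg + 180) % 360 - 180
    chosen = "first_scanner" if abs(angle) < 90 else "second_scanner"
    return "second_mode", [chosen]
```

Under this sketch, a robot outside the area scans with both lasers, while a robot inside the area facing roughly along the associated direction would use only the forward-facing scanner.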
Claim(s) 10-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Batts et al. (US 20200339151 A1) in view of Heinla et al. (US 20180253107 A1).
Regarding claim 10, Batts does not explicitly teach the mobile robot of Claim 5, wherein the localization system is configured to select one of the first laser scanner or the second laser scanner to use for localization based at least in part on the orientation of the robot. However, Heinla discloses a mobile robot system and method for autonomous localization and teaches the mobile robot of Claim 5, wherein the localization system is configured to select one of the first laser scanner or the second laser scanner to use for localization based at least in part on the orientation of the robot (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., the front cameras may be given no weight due to errors while the rear cameras are used for localization).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the localization system is configured to select one of the first laser scanner or the second laser scanner to use for localization based at least in part on the orientation of the robot, as taught by Heinla, to reduce the pose error estimate. (Heinla at ¶[0036])
Regarding claim 11, Batts does not explicitly teach the mobile robot of Claim 5, wherein the localization system is configured to perform the first mode of operation by: performing a first laser scan using the first laser scanner to provide first laser scan information; performing a second laser scan using the second laser scanner to provide second laser scan information; and comparing the first laser scan information and the second laser scan information to a map of the environment to determine an updated robot location. However, Heinla discloses a mobile robot system and method for autonomous localization and teaches the mobile robot of Claim 5, wherein the localization system is configured to perform the first mode of operation by: performing a first laser scan using the first laser scanner to provide first laser scan information (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and back cameras); performing a second laser scan using the second laser scanner to provide second laser scan information (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and back cameras); and comparing the first laser scan information and the second laser scan information to a map of the environment to determine an updated robot location (Fig. 2 and ¶[0020] “at least two cameras (i.e., sensors)” and discloses weighting errors associated with each of the cameras (i.e., comparing), e.g., front cameras and back cameras, and forming a hypothesis on the robot's pose based on the combined data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the localization system is configured to perform the first mode of operation by: performing a first laser scan using the first laser scanner to provide first laser scan information; performing a second laser scan using the second laser scanner to provide second laser scan information; and comparing the first laser scan information and the second laser scan information to a map of the environment to determine an updated robot location, as taught by Heinla, to reduce the pose error estimate. (Heinla at ¶[0036])
Claim(s) 12-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Batts et al. (US 20200339151 A1) in view of Pierce et al. (EP 3063585 B1).
Regarding claim 12, Batts does not explicitly teach the mobile robot of Claim 5, wherein the first laser scanner and the second laser scanner together provide a laser scanning range that extends 360 degrees around the robot. However, Pierce discloses a robot including scanning range finders and teaches the mobile robot of Claim 5, wherein the first laser scanner and the second laser scanner together provide a laser scanning range that extends 360 degrees around the robot (Fig. 13B and ¶[0034] “scanning laser range finder mounted in various positions” “360 full coverage”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the first laser scanner and the second laser scanner together provide a laser scanning range that extends 360 degrees around the robot, as taught by Pierce, to provide full 360-degree coverage. (Pierce at ¶[0034])
Regarding claim 13, Batts does not explicitly teach the mobile robot of Claim 5, wherein the first laser scanner has a laser scan range of at least about 180 degrees, and wherein the second laser scanner has a laser scan range of at least about 180 degrees. However, Pierce discloses a robot including scanning range finders and teaches the mobile robot of Claim 5, wherein the first laser scanner has a laser scan range of at least about 180 degrees (Fig. 13B and ¶[0034] “scanned areas 1305, 1310 may each cover less than 360 degrees” “but may overlap such that the combination of the scanned areas 1305, 1310 provides full 360 degree coverage”), and wherein the second laser scanner has a laser scan range of at least about 180 degrees (Fig. 13B and ¶[0034] “scanned areas 1305, 1310 may each cover less than 360 degrees” “but may overlap such that the combination of the scanned areas 1305, 1310 provides full 360 degree coverage”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the first laser scanner has a laser scan range of at least about 180 degrees, and wherein the second laser scanner has a laser scan range of at least about 180 degrees, as taught by Pierce, to provide full 360-degree coverage. (Pierce at ¶[0034])
Regarding claim 14, Batts does not explicitly teach the mobile robot of Claim 5, wherein a laser scan range of the first laser scanner overlaps a laser scan range of the second laser scanner. However, Pierce discloses a robot including scanning range finders and teaches the mobile robot of Claim 5, wherein a laser scan range of the first laser scanner overlaps a laser scan range of the second laser scanner (Fig. 13B and ¶[0034] “scanned areas 1305, 1310 may each cover less than 360 degrees” “but may overlap such that the combination of the scanned areas 1305, 1310 provides full 360 degree coverage”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein a laser scan range of the first laser scanner overlaps a laser scan range of the second laser scanner, as taught by Pierce, to provide full 360-degree coverage. (Pierce at ¶[0034])
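As a conceptual aside (a hypothetical sketch, not part of the record; the function name and scanner placements are illustrative only), the geometric relationship underlying claims 12-14 (two scanners each spanning at least about 180 degrees whose overlapping ranges jointly provide full 360-degree coverage) can be checked numerically at 1-degree resolution:

```python
def covers_full_circle(start1_deg, span1_deg, start2_deg, span2_deg):
    """Check (at 1-degree resolution) whether two angular sectors jointly
    cover all 360 degrees. Each sector begins at its start angle and
    extends its span counter-clockwise, wrapping modulo 360."""
    covered = set()
    for start, span in ((start1_deg, span1_deg), (start2_deg, span2_deg)):
        for offset in range(span + 1):
            covered.add((start + offset) % 360)
    return all(angle in covered for angle in range(360))

# Two opposite-facing scanners, each slightly over 180 degrees: the ranges
# overlap at both seams, giving full coverage.
print(covers_full_circle(0, 190, 180, 190))   # True
# Sectors of only 170 degrees each leave gaps near the seams.
print(covers_full_circle(0, 170, 180, 170))   # False
```

This mirrors the Pierce citation: each scanned area may cover less than 360 degrees, but overlapping areas can combine to full 360-degree coverage.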
Claim(s) 33-35 is/are rejected under 35 U.S.C. 103 as being unpatentable over Batts et al. (US 20200339151 A1) in view of Suvarna et al. (US 20200019169 A1), in further view of Heinla et al. (US 20180253107 A1), as applied to claim 32 above, and in further view of Pierce et al. (EP 3063585 B1).
Regarding claim 33, the combination of Batts, Suvarna and Heinla does not explicitly teach the mobile robot of Claim 32, wherein the first laser scanner and the second laser scanner together provide a laser scanning range that extends 360 degrees around the mobile robot. However, Pierce discloses a robot including scanning range finders and teaches the mobile robot of Claim 32, wherein the first laser scanner and the second laser scanner together provide a laser scanning range that extends 360 degrees around the mobile robot (Fig. 13B and ¶[0034] “scanning laser range finder mounted in various positions” “360 full coverage”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the first laser scanner and the second laser scanner together provide a laser scanning range that extends 360 degrees around the mobile robot, as taught by Pierce, to provide full 360-degree coverage. (Pierce at ¶[0034])
Regarding claim 34, the combination of Batts, Suvarna and Heinla does not explicitly teach the mobile robot of Claim 32, wherein the first laser scanner has a laser scan range of at least about 180 degrees, and wherein the second laser scanner has a laser scan range of at least about 180 degrees. However, Pierce discloses a robot including scanning range finders and teaches the mobile robot of Claim 32, wherein the first laser scanner has a laser scan range of at least about 180 degrees (Fig. 13B and ¶[0034] “scanned areas 1305, 1310 may each cover less than 360 degrees” “but may overlap such that the combination of the scanned areas 1305, 1310 provides full 360 degree coverage”), and wherein the second laser scanner has a laser scan range of at least about 180 degrees (Fig. 13B and ¶[0034] “scanned areas 1305, 1310 may each cover less than 360 degrees” “but may overlap such that the combination of the scanned areas 1305, 1310 provides full 360 degree coverage”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein the first laser scanner has a laser scan range of at least about 180 degrees, and wherein the second laser scanner has a laser scan range of at least about 180 degrees, as taught by Pierce, to provide full 360-degree coverage. (Pierce at ¶[0034])
Regarding claim 35, the combination of Batts, Suvarna and Heinla does not explicitly teach the mobile robot of Claim 32, wherein a laser scan range of the first laser scanner overlaps a laser scan range of the second laser scanner. However, Pierce discloses a robot including scanning range finders and teaches the mobile robot of Claim 32, wherein a laser scan range of the first laser scanner overlaps a laser scan range of the second laser scanner (Fig. 13B and ¶[0034] “scanned areas 1305, 1310 may each cover less than 360 degrees” “but may overlap such that the combination of the scanned areas 1305, 1310 provides full 360 degree coverage”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the systems and methods for implementing an autonomous vehicle response to sensor failure of Batts to provide, with a reasonable expectation of success, wherein a laser scan range of the first laser scanner overlaps a laser scan range of the second laser scanner, as taught by Pierce, to provide full 360-degree coverage. (Pierce at ¶[0034])
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Zhou et al. (US 10240930 B2) is pertinent because it relates to sensor fusion and collecting positional information for a movable object.
Holz (US 20180307241 A1) is pertinent because it relates to localization with negative mapping.
Nehmadi et al. (US 20180232947 A1) is pertinent because it is a system for generating maps of a scene using a plurality of sensors.
Gruver et al. (US 20170219713 A1) is pertinent because it is a vehicle with multiple LIDARs.
Karlsson (US 20050182518 A1) is pertinent because it relates to sensor fusion for mapping and localization.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Connor L Knight whose telephone number is (571)272-5817. The examiner can normally be reached Mon-Fri 8:30AM-4:30PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Antonucci, can be reached at (313)446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.L.K/Examiner, Art Unit 3666
/ANNE MARIE ANTONUCCI/Supervisory Patent Examiner, Art Unit 3666