DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Application
Claims 1, 2, and 5-18 are pending.
Claims 1 and 18 are independent.
Claims 1 and 18 have been amended.
Claims 3 and 4 have been canceled.
This FINAL action is in response to “Amendments and Remarks” received on 22 January 2026.
Response to Amendment/Remarks
With respect to Applicant’s remarks filed 22 January 2026, Applicant’s “Amendments and Remarks” have been fully considered but are not wholly persuasive. Applicant’s remarks are addressed below in the order in which they were presented.
With respect to objections to the Specification, Applicant’s “Amendments and Remarks” have been fully considered and are persuasive. Therefore, the objections to the Specification have been withdrawn.
With respect to claim rejections under 35 U.S.C. 102 and/or 35 U.S.C. 103, Applicant’s “Amendments and Remarks” have been fully considered and are not persuasive. Therefore, the rejection is maintained.
Applicant argues that the prior art reference applied under 35 U.S.C. 102, Adams et al. (US 20210182596 A1), does not disclose “determine that the shadowed area is sufficiently mapped and in response thereto enter the shadowed area” and “determine that the shadowed area is sufficiently mapped by determining that there are stored objects that will be visible to the robotic work tool along a planned path of the robotic work tool” (see Remarks, page 7).
Adams et al. discloses that an autonomous vehicle traversing a section of road in which GPS signals are obstructed and unreliable (e.g., a tunnel) switches to a vision-based system to navigate. Regarding whether an area is sufficiently mapped and whether objects are determined to lie along a planned path of a robotic device, i.e., an autonomous vehicle, Adams et al. discloses that objects that repeat or are common in the environment of a vehicle are used to navigate the vehicle when GPS signals are unreliable. Additionally, areas of GPS obstruction are pre-mapped such that the locations of static features/objects are known ([0080]-[0082]). Thus, an area of GPS obstruction is sufficiently mapped, and the static obstacles the vehicle will encounter are known from repeated driving.
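For illustration only, the following sketch shows one way a determination that a GPS-shadowed area is "sufficiently mapped" could be expressed, namely by checking that stored, pre-mapped static objects remain within sensor range along a planned path. It is not taken from Adams or from the claims; the sensor range, object coordinates, and function names are hypothetical.

    # Illustrative sketch (not from Adams or the claims): decide whether a
    # GPS-shadowed area is "sufficiently mapped" by checking that stored,
    # pre-mapped static objects are visible along a planned path.
    import math

    SENSOR_RANGE_M = 15.0  # assumed detection range of the distance sensor

    def visible_objects(waypoint, stored_objects, max_range=SENSOR_RANGE_M):
        """Return stored objects within sensor range of a path waypoint."""
        wx, wy = waypoint
        return [obj for obj in stored_objects
                if math.hypot(obj["x"] - wx, obj["y"] - wy) <= max_range]

    def area_sufficiently_mapped(planned_path, stored_objects):
        """True if at least one stored object is visible from every waypoint."""
        return all(visible_objects(wp, stored_objects) for wp in planned_path)

    stored_objects = [{"x": 2.0, "y": 3.0}, {"x": 10.0, "y": 4.0}]  # pre-mapped statics
    planned_path = [(0.0, 0.0), (5.0, 2.0), (9.0, 5.0)]             # path through the shadow
    if area_sufficiently_mapped(planned_path, stored_objects):
        print("enter shadowed area")        # analogous to the claimed "enter" step
    else:
        print("map or avoid the shadowed area")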
Final Office Action
Claim Interpretation
During examination, claims are given the broadest reasonable interpretation consistent with the specification, and limitations in the specification are not read into the claims. See MPEP §2111, MPEP §2111.01, and In re Yamamoto et al., 222 USPQ 934 (Fed. Cir. 1984). Under the broadest reasonable interpretation, words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. See MPEP §2111.01(I). It is further noted that it is improper to import claim limitations from the specification, i.e., a particular embodiment appearing in the written description may not be read into a claim when the claim language is broader than the embodiment. See MPEP §2111.01(II).
A first exception to the prohibition of reading limitations from the specification into the claims is when the Applicant for patent has provided a lexicographic definition for the term. See MPEP §2111.01 (IV). Following a review of the claims in view of the specification herein, the Office has found that Applicant has not provided any lexicographic definitions, either expressly or implicitly, for any claim terms or phrases with any reasonable clarity, deliberateness and precision. Accordingly, the Office concludes that Applicant has not acted as his/her own lexicographer.
A second exception to the prohibition of reading limitations from the specification into the claims is when the claimed feature is written as a means-plus-function. See 35 U.S.C. §112(f) and MPEP §2181-2183. As noted in MPEP §2181, a three-prong test is used to determine the scope of a means-plus-function limitation in a claim:
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
The Office has found herein that the claims do not contain limitations using the term "means" or means-type language that must be analyzed under 35 U.S.C. §112(f).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 5-7, and 11-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Adams et al. (US 20210182596 A1), hereinafter Adams.
Regarding claim 1, Adams discloses:
A robotic work tool arranged to operate in an operational area, the robotic work tool comprising a memory configured to store a location of at least one object, a distance sensor, a navigation sensor being based on signal-reception and a controller, wherein the controller is configured to (Abstract, Techniques are discussed for determining a location of a vehicle in an environment using a feature corresponding to a portion of an image representing an object in the environment which is associated with a frequently occurring object classification. For example, an image may be received and semantically segmented to associate pixels of the image with a label representing an object of an object type (e.g., extracting only those portions of the image which represent lane boundary markings). Features may then be extracted, or otherwise determined, which are limited to those portions of the image. In some examples, map data indicating a previously mapped location of a corresponding portion of the object may be used to determine a difference. The difference (or sum of differences for multiple observations) are then used to localize the vehicle with respect to the map; [0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time; [0017], The techniques described herein can be implemented in a number of ways. Example implementations are provided below with reference to the following figures. Although discussed in the context of an autonomous vehicle, the methods, apparatuses, and systems described herein can be applied to a variety of systems (e.g., a sensor system or a robotic platform), and is not limited to autonomous vehicles. In one example, similar techniques may be utilized in driver-controlled vehicles in which such a system may provide an indication to a driver of the vehicle of whether it is safe to perform various maneuvers. In another example, the techniques can be utilized in an aviation context, or in any system navigating in a system with repeating objects. Additionally, the techniques described herein can be used with real data (e.g., captured using sensor(s)), simulated data (e.g., generated by a simulator), or any combination of the two.):
determine a location of the robotic work tool utilizing the navigation sensor (Fig. 4; [0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques.);
determine that a shadowed area is encountered, wherein navigation utilizing the navigation sensor is not reliable ([0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time.);
and in response thereto navigate utilizing the distance sensor based on detecting at least one object and a distance to the at least one object utilizing the sensor, the stored at least one object and the location of the robotic work tool ([0011], Sensor data captured by the vehicle can include lidar data, radar data, image data, time of flight data, sonar data, odometer data (such as wheel encoders), IMU data, and the like. In some cases, the sensor data can be provided to a perception system configured to determine a type (classification) of an object (e.g., vehicle, pedestrian, bicycle, motorcycle, animal, parked car, tree, building, and the like) in the environment; [0012], For instance, the sensor data may be captured by the vehicle as the vehicle traverses an environment.);
to determine that the shadowed area is sufficiently mapped and in response thereto enter the shadowed area ([0010], Thus, in some examples, the techniques described
herein may supplement other localization systems to provide continuous, accurate determinations of a location of a vehicle. For instance, a semantic segmentation (SemSeg) localization component of a vehicle may use semantically segmented images to detect objects in an environment of a frequently occurring classification, such as lane markers, parking space markers and/or meters, mile markers, railing posts, structural columns, light fixtures, and so forth. Objects of an object type (or classification) that are frequently occurring may be leveraged by the SemSeg localization component to detect features (e.g., a corner, side, center, etc.) of the objects in the images. Further, the associated features of the objects may be associated with a map of an environment, indicating a known location of the feature (which may be referred to herein as a landmark). The measured position (e.g., the detected feature in the image) may then be used to localize the vehicle based on differences between the landmark and the measured position of the feature, in addition to any previous estimate of position based on any one or more additional sensor modalities. In this way, an accurate location of the vehicle can be determined simply with images captured by a sensor of the vehicle such as a camera and stored map data, without having to access a GPS system and/or rely on landmarks that may not be visible to the vehicle or may occur too infrequently in a variety of scenarios (e. g., tunnels, overpasses, etc.) to localize the vehicle);
to determine that the shadowed area is sufficiently mapped by determining that there are stored objects that will be visible to the robotic work tool along a planned path of the robotic work tool ([0010], Thus, in some examples, the techniques described
herein may supplement other localization systems to provide continuous, accurate determinations of a location of a vehicle. For instance, a semantic segmentation (SemSeg) localization component of a vehicle may use semantically segmented images to detect objects in an environment of a frequently occurring classification, such as lane markers, parking space markers and/or meters, mile markers, railing posts, structural columns, light fixtures, and so forth. Objects of an object type (or classification) that are frequently occurring may be leveraged by the SemSeg localization component to detect features (e.g., a corner, side, center, etc.) of the objects in the images. Further, the associated features of the objects may be associated with a map of an environment, indicating a known location of the feature (which may be referred to herein as a landmark). The measured position (e.g., the detected feature in the image) may then be used to localize the vehicle based on differences between the landmark and the measured position of the feature, in addition to any previous estimate of position based on any one or more additional sensor modalities. In this way, an accurate location of the vehicle can be determined simply with images captured by a sensor of the vehicle such as a camera and stored map data, without having to access a GPS system and/or rely on landmarks that may not be visible to the vehicle or may occur too infrequently in a variety of scenarios (e. g., tunnels, overpasses, etc.) to localize the vehicle).
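By way of illustration only, the following minimal sketch shows one way a controller could localize against stored object locations using only range measurements from a distance sensor, consistent with the mapping of claim 1 above. The least-squares (Gauss-Newton) formulation, function names, and numeric values are assumptions for explanatory purposes and are not drawn from Adams or from the claims.

    # Minimal sketch, assuming range-only measurements to stored objects:
    # when the signal-based navigation sensor is unreliable, estimate the
    # position by least-squares trilateration against pre-mapped locations.
    import numpy as np

    def localize_from_ranges(landmarks, ranges, guess):
        """Gauss-Newton refinement of a 2D position from ranges to known landmarks."""
        pos = np.asarray(guess, dtype=float)
        for _ in range(20):
            diffs = pos - landmarks                   # (N, 2) vectors landmark -> pos
            dists = np.linalg.norm(diffs, axis=1)     # predicted ranges
            residual = dists - ranges
            jacobian = diffs / dists[:, None]         # d(range)/d(pos)
            step, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
            pos -= step
            if np.linalg.norm(step) < 1e-6:
                break
        return pos

    landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # stored object locations
    ranges = np.array([5.39, 5.39, 6.0])                         # measured by the distance sensor
    print(localize_from_ranges(landmarks, ranges, guess=[4.0, 4.0]))  # ~[5.0, 2.0]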
Regarding claim 2, Adams discloses:
wherein the controller is further configured to determine that the shadowed area is encountered by determining that the robotic work tool has entered the shadowed area ([0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time.).
Regarding claim 5, Adams discloses:
wherein the controller is further configured to determine that the robotic work tool has entered the shadowed area and in response thereto exit the shadowed area ([0050], In some examples, the one or more maps 424 can store sizes or dimensions of objects associated with individual locations in an environment. For example, as the vehicle 402 traverses the environment and as maps representing an area proximate to the vehicle 402 are loaded into memory, one or more sizes or dimensions of objects associated with a location can be loaded into memory as well. In some examples, a known size or dimension of an object at a particular location in the environment may be used to determine a depth of a feature of an object relative to the vehicle 402 when determining a location of the vehicle 402.).
Regarding claim 6, Adams discloses:
wherein the controller is further configured to determine that the shadowed area is insufficiently mapped and in response thereto map the shadowed area by ([0010], Thus, in some examples, the techniques described
herein may supplement other localization systems to provide continuous, accurate determinations of a location of a vehicle. For instance, a semantic segmentation (SemSeg) localization component of a vehicle may use semantically segmented images to detect objects in an environment of a frequently occurring classification, such as lane markers, parking space markers and/or meters, mile markers, railing posts, structural columns, light fixtures, and so forth. Objects of an object type (or classification) that are frequently occurring may be leveraged by the SemSeg localization component to detect features (e.g., a corner, side, center, etc.) of the objects in the images. Further, the associated features of the objects may be associated with a map of an environment, indicating a known location of the feature (which may be referred to herein as a landmark). The measured position (e.g., the detected feature in the image) may then be used to localize the vehicle based on differences between the landmark and the measured position of the feature, in addition to any previous estimate of position based on any one or more additional sensor modalities. In this way, an accurate location of the vehicle can be determined simply with images captured by a sensor of the vehicle such as a camera and stored map data, without having to access a GPS system and/or rely on landmarks that may not be visible to the vehicle or may occur too infrequently in a variety of scenarios (e. g., tunnels, overpasses, etc.) to localize the vehicle.):
detecting at least one object at a first position (Fig. 1; Fig. 2; Fig. 3; [0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time.);
change to a second position (Fig. 1; Fig. 2; Fig. 3; [0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time.);
and detecting at least one second object at the second position (Fig. 1; Fig. 2; Fig. 3; [0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time.).
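For illustration of the mapping sequence as read onto claim 6 (detect an object at a first position, change to a second position, detect a second object), the following hedged sketch records each detection in a shared world-frame map. The poses, detections, and helper names are hypothetical placeholders, not taken from Adams or from the claims.

    # Hedged sketch: store detections made from two successive positions in a
    # common world-frame object map (used later for navigation in the shadow).
    import math

    def to_world(pose, rel_point):
        """Rotate/translate a sensor-frame detection into the world frame."""
        px, py, heading = pose
        rx, ry = rel_point
        c, s = math.cos(heading), math.sin(heading)
        return (px + c * rx - s * ry, py + s * rx + c * ry)

    object_map = []

    def record_detection(pose, rel_detection):
        object_map.append(to_world(pose, rel_detection))

    first_pose = (0.0, 0.0, 0.0)                # x, y, heading at the first position
    record_detection(first_pose, (2.0, 1.0))    # object detected at the first position

    second_pose = (3.0, 0.0, 0.0)               # tool changed to a second position
    record_detection(second_pose, (1.5, -0.5))  # second object detected there

    print(object_map)                           # stored object locations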
Regarding claim 7, Adams discloses:
wherein the controller is further configured to change to the second position by moving to the second position ([0062], In at least one example, the vehicle 402 can include one or more drive systems 414. In some examples, the vehicle 402 can have a single drive system 414. In at least one example, if the vehicle 402 has multiple drive systems 414, individual drive systems 414 can be positioned on opposite ends of the vehicle 402 (e.g., the front and the rear, etc.). In at least one example, the drive system(s) 414 can include one or more sensor systems to detect conditions of the drive system(s) 414 and/or the surroundings of the vehicle 402. By way of example and not limitation, the sensor system(s) can include one or more wheel encoders (e.g., rotary encoders) to sense rotation of the wheels of the drive modules, inertial sensors ( e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers, etc.) to measure orientation and acceleration of the drive module, cameras or other image sensors, ultrasonic sensors to acoustically detect objects in the surroundings of the drive system, lidar sensors, radar sensors, etc. Some sensors, such as the wheel encoders can be unique to the drive system(s) 414. In some cases, the sensor system(s) on the drive system(s) 414 can overlap or supplement corresponding systems of the vehicle 402 (e.g., sensor system(s) 406). Wheel encoders, inertial sensors, other sensors included in the drive systems 414 may be used to measure motion of the vehicle 402 and use the measured motion to estimate a position of the vehicle when other systems (e.g., GPS, SLAM, CLAMS, etc.) are unavailable, and/or in between repeated objects used to localize the vehicle 402 according to the techniques described herein.).
Regarding claim 11, Adams discloses:
wherein the controller is further configured to proactively map a future shadowed area by (Fig. 1; Fig. 2; Fig. 3; [0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time.);
detecting at least one object at a first position (Fig. 1; Fig. 2; Fig. 3; [0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time.);
change to a second position (Fig. 1; Fig. 2; Fig. 3; [0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time.);
and detecting at least one second object at the second position, regardless detection of a shadowed area ([0062], In at least one example, the vehicle 402 can include one or more drive systems 414. In some examples, the vehicle 402 can have a single drive system 414. In at least one example, if the vehicle 402 has multiple drive systems 414, individual drive systems 414 can be positioned on opposite ends of the vehicle 402 (e.g., the front and the rear, etc.). In at least one example, the drive system(s) 414 can include one or more sensor systems to detect conditions of the drive system(s) 414 and/or the surroundings of the vehicle 402. By way of example and not limitation, the sensor system(s) can include one or more wheel encoders (e.g., rotary encoders) to sense rotation of the wheels of the drive modules, inertial sensors ( e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers, etc.) to measure orientation and acceleration of the drive module, cameras or other image sensors, ultrasonic sensors to acoustically detect objects in the surroundings of the drive system, lidar sensors, radar sensors, etc. Some sensors, such as the wheel encoders can be unique to the drive system(s) 414. In some cases, the sensor system(s) on the drive system(s) 414 can overlap or supplement corresponding systems of the vehicle 402 (e.g., sensor system(s) 406). Wheel encoders, inertial sensors, other sensors included in the drive systems 414 may be used to measure motion of the vehicle 402 and use the measured motion to estimate a position of the vehicle when other systems (e.g., GPS, SLAM, CLAMS, etc.) are unavailable, and/or in between repeated objects used to localize the vehicle 402 according to the techniques described herein.).
Regarding claim 12, Adams discloses:
wherein the distance sensor is a radar sensor (Fig. 4; [0011], Sensor data captured by the vehicle can include lidar data, radar data, image data, time of flight data, sonar data, odometer data (such as wheel encoders), IMU data, and the like. In some cases, the sensor data can be provided to a perception system configured to determine a type (classification) of an object (e.g., vehicle, pedestrian, bicycle, motorcycle, animal, parked car, tree, building, and the like) in the environment.).
Regarding claim 13, Adams discloses:
wherein the controller is further configured to detect an object utilizing the distance sensor by ([0014], In at least some examples, the SemSeg localization component may determine a location of the feature with respect to the vehicle, such as by using lidar, radar, time of flight data, multi-view geometry from a plurality of image sensors, or other techniques for determining a depth of a point in the environment associated with the feature. For example, lidar points corresponding to a region of interest ("ROI") in the image and corresponding to the feature of the object may be combined with image data. The lidar data may be interpolated and/or extrapolated (e.g., based on triangle building) in order to associate a depth with a particular feature in the image. For instance, a mesh may be created from corresponding lidar points and an intersection point may be found between a ray originating at a center of the camera and passing through the feature in the image and the mesh. A location of the feature may be determined by selecting a mode or mean of the cluster of lidar points as projected into the image space representing the feature in the image. In some examples, the location of the mode or median of the cluster of lidar points may be used as the point for which to determine a depth of the vehicle from the feature of the object. Additional details regarding using lidar data and image data to determine a depth can be found in U.S. patent application Ser. No. 15/970,838, which is incorporated by reference herein in its entirety.):
receiving a radar point cloud ([0014], In at least some examples, the SemSeg localization component may determine a location of the feature with respect to the vehicle, such as by using lidar, radar, time of flight data, multi-view geometry from a plurality of image sensors, or other techniques for determining a depth of a point in the environment associated with the feature. For example, lidar points corresponding to a region of interest ("ROI") in the image and corresponding to the feature of the object may be combined with image data. The lidar data may be interpolated and/or extrapolated (e.g., based on triangle building) in order to associate a depth with a particular feature in the image. For instance, a mesh may be created from corresponding lidar points and an intersection point may be found between a ray originating at a center of the camera and passing through the feature in the image and the mesh. A location of the feature may be determined by selecting a mode or mean of the cluster of lidar points as projected into the image space representing the feature in the image. In some examples, the location of the mode or median of the cluster of lidar points may be used as the point for which to determine a depth of the vehicle from the feature of the object. Additional details regarding using lidar data and image data to determine a depth can be found in U.S. patent application Ser. No. 15/970,838, which is incorporated by reference herein in its entirety.);
determining a location of the radar point cloud ([0014], In at least some examples, the SemSeg localization component may determine a location of the feature with respect to the vehicle, such as by using lidar, radar, time of flight data, multi-view geometry from a plurality of image sensors, or other techniques for determining a depth of a point in the environment associated with the feature. For example, lidar points corresponding to a region of interest ("ROI") in the image and corresponding to the feature of the object may be combined with image data. The lidar data may be interpolated and/or extrapolated (e.g., based on triangle building) in order to associate a depth with a particular feature in the image. For instance, a mesh may be created from corresponding lidar points and an intersection point may be found between a ray originating at a center of the camera and passing through the feature in the image and the mesh. A location of the feature may be determined by selecting a mode or mean of the cluster of lidar points as projected into the image space representing the feature in the image. In some examples, the location of the mode or median of the cluster of lidar points may be used as the point for which to determine a depth of the vehicle from the feature of the object. Additional details regarding using lidar data and image data to determine a depth can be found in U.S. patent application Ser. No. 15/970,838, which is incorporated by reference herein in its entirety.);
and record the object at the determined location ([0014], In at least some examples, the SemSeg localization component may determine a location of the feature with respect to the vehicle, such as by using lidar, radar, time of flight data, multi-view geometry from a plurality of image sensors, or other techniques for determining a depth of a point in the environment associated with the feature. For example, lidar points corresponding to a region of interest ("ROI") in the image and corresponding to the feature of the object may be combined with image data. The lidar data may be interpolated and/or extrapolated (e.g., based on triangle building) in order to associate a depth with a particular feature in the image. For instance, a mesh may be created from corresponding lidar points and an intersection point may be found between a ray originating at a center of the camera and passing through the feature in the image and the mesh. A location of the feature may be determined by selecting a mode or mean of the cluster of lidar points as projected into the image space representing the feature in the image. In some examples, the location of the mode or median of the cluster of lidar points may be used as the point for which to determine a depth of the vehicle from the feature of the object. Additional details regarding using lidar data and image data to determine a depth can be found in U.S. patent application Ser. No. 15/970,838, which is incorporated by reference herein in its entirety.).
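For explanatory purposes only, the following is a minimal version of "receiving a radar point cloud, determining a location of the radar point cloud, and recording the object at the determined location" as mapped onto claim 13. The fixed clustering radius and the use of a cluster centroid are assumptions, not taken from Adams or from the claims.

    # Illustrative only: cluster a radar point cloud, take the cluster
    # centroid as the object location, and record it as a stored object.
    import numpy as np

    def cluster_centroid(points, seed_index=0, radius=1.0):
        """Group points within `radius` of a seed point and return their centroid."""
        pts = np.asarray(points, dtype=float)
        seed = pts[seed_index]
        mask = np.linalg.norm(pts - seed, axis=1) <= radius
        return pts[mask].mean(axis=0)

    point_cloud = [[4.1, 2.0], [4.3, 2.2], [4.2, 1.9], [12.0, 7.5]]  # radar returns (m)
    object_location = cluster_centroid(point_cloud)                  # ~[4.2, 2.03]

    stored_objects = []
    stored_objects.append({"x": float(object_location[0]),
                           "y": float(object_location[1])})          # record the object
    print(stored_objects)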
Regarding claim 14, Adams discloses:
wherein the controller is further configured to determine that the object is not moving prior to recording the determined location ([0001], Various methods, apparatuses, and systems are utilized by autonomous vehicles to guide such autonomous vehicles through environments including various static and dynamic objects. For instance, autonomous vehicles utilize route planning methods, apparatuses, and systems to guide autonomous vehicles through congested areas with other moving vehicles (autonomous or otherwise), moving people, stationary buildings, etc. In some examples, an autonomous vehicle may make decisions while traversing an environment to ensure safety for passengers and surrounding persons and objects. A variety of sensors may be used to collect information, such as images, of the surrounding environment, which may be used by the autonomous vehicle to make decisions on how to traverse the environment. Accurately determining a location of the vehicle in the environment may, at times, present challenges.).
Regarding claim 15, Adams discloses:
determine an extension of the radar point cloud ([0024], An optional operation 134 includes determining an error of a measurement (position of the detected feature) with a pre-mapped, known, location of an associated landmark (e.g., corresponding point in the environment associated with the feature in the image) with respect to a map. In some such examples, the known landmark may be projected into the image (e.g., a corresponding pixel location of the landmark) and an image coordinate associated with a location of the feature within the image may be determined. Using the image coordinate associated with the location of between a landmark image coordinate (e.g., determined from prior mapping runs) and the image coordinate associated with the location of the feature in the image (e.g., as a two-dimensional vector, weighted Euclidian distance, etc.). In other examples, the detected feature in the image may be unprojected to find an intersection with a mesh (and/or otherwise find a three-dimensional position of the feature by using associated depth measurements from lidar, radar, time of flight, stereo images, etc.). The error may then be determined as the difference between the three-dimensional feature location and the location of the landmark (e.g., as a three-dimensional vector, a weighted Euclidian distance, etc.).),
determine an assumed shadow based on the extension of the radar point cloud and map an area of the assumed shadow ([0013], Additionally or alternatively, the SemSeg localization component may detect a feature of the object in the semantically segmented image. As noted above, objects that are repeated in an environment such as lane markers and light fixtures can be used to extract features for localization. For example, dashed lane markers typically are rectangular in shape, having long straight edges and 90-degree corners. Further, the SemSeg localization component may in some cases narrow the likelihood that an object is a lane marker based on detecting that the lane marker is depicted on a drivable surface in the semantically segmented image. Alternatively or additionally, when creating the map, multiple observations of the features may be collected and combined in image space and/or by unprojecting (e.g., finding an intersection of a ray passing through the point in the image with the 3D map). In at least some examples, no such map may be used. In such examples, bundle adjustment, structure from motion, Kalman filters, or other estimations may be used to jointly estimate both positions of observations and the position and/or orientation of the vehicle in the environment.).
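The following speculative sketch illustrates one way an "assumed shadow" could be derived from the extension (spatial extent) of a radar point cloud, by projecting a region behind the detected object away from the signal source. The signal-source direction, shadow depth, and all names are assumptions for explanation only and are not drawn from Adams or from the claims.

    # Speculative sketch: mark a rough shadow region behind an object, using
    # the min/max extent of its radar point cloud and a signal direction.
    import numpy as np

    def assumed_shadow(point_cloud, signal_dir, depth=10.0):
        """Return corner points of a rough shadow region behind the point cloud."""
        pts = np.asarray(point_cloud, dtype=float)
        d = np.asarray(signal_dir, dtype=float)
        d = d / np.linalg.norm(d)                 # unit vector from source toward object
        lo, hi = pts.min(axis=0), pts.max(axis=0) # extension of the point cloud
        near_corners = np.array([lo, hi])         # simplified footprint of the object
        far_corners = near_corners + depth * d    # pushed "behind" the object
        return np.vstack([near_corners, far_corners])

    cloud = [[4.1, 2.0], [4.3, 2.2], [4.2, 1.9]]
    print(assumed_shadow(cloud, signal_dir=[0.0, 1.0]))  # area to map as assumed shadow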
Regarding claim 16, Adams discloses:
wherein the controller is further configured to perform visual classification of the object and determine the assumed shadow also based on the visual classification (Abstract, Techniques are discussed for determining a location of a vehicle in an environment using a feature corresponding to a portion of an image representing an object in the environment which is associated with a frequently occurring object classification. For example, an image may be received and semantically segmented to associate pixels of the image with a label representing an object of an object type (e.g., extracting only those portions of the image which represent lane boundary markings). Features may then be extracted, or otherwise determined, which are limited to those portions of the image. In some examples, map data indicating a previously mapped location of a corresponding portion of the object may be used to determine a difference. The difference (or sum of differences for multiple observations) are then used to localize the vehicle with respect to the map).
Regarding claim 17, Adams discloses:
wherein the map is stored in a remote memory connected to the robotic work tool directly or indirectly (Fig. 4; [0045], The vehicle computing device(s) 404 can include one or more processors 416 and memory 418 communicatively coupled with the one or more processors 416. In the illustrated example, the vehicle 402 is an autonomous vehicle; however, the vehicle 402 could be any other type of vehicle or robotic platform. In the illustrated example, the memory 418 of the vehicle computing device(s) 404 stores a localization component 420, a perception component 422, one or more maps 424, one or more system controllers 426, a semantic segmentation (SemSeg) localization component 428, a semantic segmentation component 430, location determination component 432, and a planning component 434. Though depicted in FIG. 4 as residing in the memory 418 for illustrative purposes, it is contemplated that the localization component 420, the perception component 422, the one or more maps 424, the one or more system controllers 426, the SemSeg localization component 428, the semantic segmentation component 430, the location determination component 432, and the planning component 434 can additionally, or alternatively, be accessible to the vehicle 402 (e.g., stored on, or otherwise accessible by, memory remote from the vehicle 402).).
Regarding claim 18, Adams discloses:
A method for use in a robotic work tool arranged to operate in an operational area, the robotic work tool comprising a memory configured to store a location of at least one object, a distance sensor, a navigation sensor being based on signal-reception and a controller, the method comprising (Abstract, Techniques are discussed for determining a location of a vehicle in an environment using a feature corresponding to a portion of an image representing an object in the environment which is associated with a frequently occurring object classification. For example, an image may be received and semantically segmented to associate pixels of the image with a label representing an object of an object type (e.g., extracting only those portions of the image which represent lane boundary markings). Features may then be extracted, or otherwise determined, which are limited to those portions of the image. In some examples, map data indicating a previously mapped location of a corresponding portion of the object may be used to determine a difference. The difference (or sum of differences for multiple observations) are then used to localize the vehicle with respect to the map; [0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time; [0017], The techniques described herein can be implemented in a number of ways. Example implementations are provided below with reference to the following figures. Although discussed in the context of an autonomous vehicle, the methods, apparatuses, and systems described herein can be applied to a variety of systems (e.g., a sensor system or a robotic platform), and is not limited to autonomous vehicles. In one example, similar techniques may be utilized in driver-controlled vehicles in which such a system may provide an indication to a driver of the vehicle of whether it is safe to perform various maneuvers. In another example, the techniques can be utilized in an aviation context, or in any system navigating in a system with repeating objects. Additionally, the techniques described herein can be used with real data (e.g., captured using sensor(s)), simulated data (e.g., generated by a simulator), or any combination of the two.):
determining a location of the robotic work tool utilizing the navigation sensor (Fig. 4; [0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques.);
determining that a shadowed area is encountered, wherein navigation utilizing the navigation sensor is not reliable ([0009], This disclosure relates to determining a location of a vehicle (e.g., an autonomous vehicle) in an environment using locations of objects having a repeated object classification, such as lane markings, detected by the vehicle as the vehicle traverses the environment. In many cases, vehicles receive location information from sources such as global positioning systems (GPS), odometers, inertial measurement units (IMUs), simultaneous localization and mapping (SLAM) systems, calibration localization and mapping simultaneously (CLAMS) systems, among other techniques. However, in some cases, these techniques may provide insufficient information to a vehicle to maintain an accurate trajectory. For instance, in a location such as a tunnel, a vehicle may be unable to access a reliable GPS signal, and may not have differentiated landmarks inside the tunnel to rely upon SLAM and/or CLAMS systems. While odometers and IMUs may be used in such cases where GPS, SLAM, and/or CLAMS may be less reliable, some error may be present with these sources of location data, which may compound over time.);
and in response thereto navigating utilizing the distance sensor based on detecting at least one object and a distance to the at least one object utilizing the sensor, the stored at least one object and the location of the robotic work tool ([0011], Sensor data captured by the vehicle can include lidar data, radar data, image data, time of flight data, sonar data, odometer data (such as wheel encoders), IMU data, and the like. In some cases, the sensor data can be provided to a perception system configured to determine a type (classification) of an object (e.g., vehicle, pedestrian, bicycle, motorcycle, animal, parked car, tree, building, and the like) in the environment; [0012], For instance, the sensor data may be captured by the vehicle as the vehicle traverses an environment);
determining that the shadowed area is sufficiently mapped and in response thereto enter the shadowed area ([0010], Thus, in some examples, the techniques described
herein may supplement other localization systems to provide continuous, accurate determinations of a location of a vehicle. For instance, a semantic segmentation (SemSeg) localization component of a vehicle may use semantically segmented images to detect objects in an environment of a frequently occurring classification, such as lane markers, parking space markers and/or meters, mile markers, railing posts, structural columns, light fixtures, and so forth. Objects of an object type (or classification) that are frequently occurring may be leveraged by the SemSeg localization component to detect features (e.g., a corner, side, center, etc.) of the objects in the images. Further, the associated features of the objects may be associated with a map of an environment, indicating a known location of the feature (which may be referred to herein as a landmark). The measured position (e.g., the detected feature in the image) may then be used to localize the vehicle based on differences between the landmark and the measured position of the feature, in addition to any previous estimate of position based on any one or more additional sensor modalities. In this way, an accurate location of the vehicle can be determined simply with images captured by a sensor of the vehicle such as a camera and stored map data, without having to access a GPS system and/or rely on landmarks that may not be visible to the vehicle or may occur too infrequently in a variety of scenarios (e. g., tunnels, overpasses, etc.) to localize the vehicle);
determining that the shadowed area is sufficiently mapped by determining that there are stored objects that will be visible to the robotic work tool along a planned path of the robotic work tool ([0010], Thus, in some examples, the techniques described
herein may supplement other localization systems to provide continuous, accurate determinations of a location of a vehicle. For instance, a semantic segmentation (SemSeg) localization component of a vehicle may use semantically segmented images to detect objects in an environment of a frequently occurring classification, such as lane markers, parking space markers and/or meters, mile markers, railing posts, structural columns, light fixtures, and so forth. Objects of an object type (or classification) that are frequently occurring may be leveraged by the SemSeg localization component to detect features (e.g., a corner, side, center, etc.) of the objects in the images. Further, the associated features of the objects may be associated with a map of an environment, indicating a known location of the feature (which may be referred to herein as a landmark). The measured position (e.g., the detected feature in the image) may then be used to localize the vehicle based on differences between the landmark and the measured position of the feature, in addition to any previous estimate of position based on any one or more additional sensor modalities. In this way, an accurate location of the vehicle can be determined simply with images captured by a sensor of the vehicle such as a camera and stored map data, without having to access a GPS system and/or rely on landmarks that may not be visible to the vehicle or may occur too infrequently in a variety of scenarios (e. g., tunnels, overpasses, etc.) to localize the vehicle).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Adams in view of Zhang et al. (US 20200192388 A1), hereinafter Zhang.
Regarding claim 8, Adams in view of Zhang teaches:
wherein the controller is further configured to change to the second position by zigzagging (Zhang: [0316], In one implementation, planning is implemented using a plurality of state machines. A representative architecture includes three layers, comprising a Motion commander level, a Robot commander level and a Planner level. These state machines are configured to issue a command(s) once the state is changed. In our example, the Motion commander is the low-level robot motion controller. It controls the robot go forward, rotate and wall-follow. The Robot Commander is the mid-level robot commander. It controls the robot do some modular motions including zigzag moves, waypoint moves, enter unknown space, etc.).
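For illustration only, the layered state-machine architecture quoted above may be sketched as follows; the class names, command names, and leg count are assumptions and are not taken from Zhang:

    from enum import Enum, auto

    class MotionCmd(Enum):          # low-level motions (Motion commander level)
        FORWARD = auto()
        ROTATE = auto()
        WALL_FOLLOW = auto()

    class RobotCommander:
        """Mid-level commander: expands a modular motion such as a zigzag
        into a sequence of low-level motion commands (illustrative only)."""
        def zigzag(self, legs=3):
            cmds = []
            for _ in range(legs):
                cmds += [MotionCmd.FORWARD, MotionCmd.ROTATE]
            return cmds

    class Planner:
        """Top-level planner: selects which modular motion to issue."""
        def explore_unknown_space(self, commander):
            return commander.zigzag(legs=4)

    print(Planner().explore_unknown_space(RobotCommander()))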
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Zhang into the invention of Adams to include various path plans as Zhang discloses, with a reasonable expectation of success. One would be motivated to incorporate aspects of the cited prior art to create a more robust system that can vary exploration paths when encountering unexpected obstacles in unknown areas. Additionally, the claimed invention is merely a combination of old, well-known elements: an autonomous system performing localization when position signals (GPS) are unreliable, as disclosed by Adams, and varying exploration paths, as taught by Zhang. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.
Regarding claim 9, Adams in view of Zhang teaches:
wherein the controller is further configured to change to the second position by rotating (Zhang: Fig. 24; [0179], At step 1220, information is received from the IMU that the mobile unit has moved to a new position. The movement of the mobile unit can be described by a rotational portion of the movement R_IMU and a translational portion of the movement t_IMU).
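By way of illustration only, a movement split into a rotational portion R_IMU and a translational portion t_IMU may be applied to a planar pose as sketched below; the planar assumption, variable names, and example values are the undersigned's illustrative assumptions, not the procedure of Zhang's Fig. 24:

    import numpy as np

    def apply_imu_movement(position_xy, heading, r_imu, t_imu):
        """Update a planar pose with an IMU-reported movement split into a
        rotational portion r_imu (radians) and a translational portion t_imu
        (expressed in the body frame). Illustrative only."""
        heading = heading + r_imu
        rot = np.array([[np.cos(heading), -np.sin(heading)],
                        [np.sin(heading),  np.cos(heading)]])
        position_xy = np.asarray(position_xy, float) + rot @ np.asarray(t_imu, float)
        return position_xy, heading

    # Example: rotate 90 degrees, then translate 1 m forward in the body frame.
    print(apply_imu_movement([0.0, 0.0], 0.0, np.pi / 2, [1.0, 0.0]))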
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Zhang into the invention of Adams to include various path plans as Zhang discloses, with a reasonable expectation of success. One would be motivated to incorporate aspects of the cited prior art to create a more robust system that can vary exploration paths when encountering unexpected obstacles in unknown areas. Additionally, the claimed invention is merely a combination of old, well-known elements: an autonomous system performing localization when position signals (GPS) are unreliable, as disclosed by Adams, and varying exploration paths, as taught by Zhang. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.
Regarding claim 10, Adams in view of Zhang teaches:
wherein the controller is further configured to change to the second position by circumnavigating an object or the shadowed area (Zhang: [0310], The monocular-auxiliary sensor equipped robot 1925 can build a descriptive point cloud 1945 of the obstacles in room 1900 enabling the robot 1925 to circumnavigate obstacles and self-localize within room 1900. Monocular-auxiliary sensor creates, updates, and refines descriptive point cloud 1945 using feature descriptors determined for room features indicated by points 1901, 1911, 1941, 1951, 1922 using the technology disclosed herein above under the Mapping sections).
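For illustration only, using a stored point cloud of obstacle points to detour a planned path around an obstacle (i.e., circumnavigate) may be sketched as follows; the clearance threshold and sideways-step strategy are assumptions and do not represent Zhang's actual planner:

    import numpy as np

    def blocked(waypoint, point_cloud, clearance=0.5):
        """True if any obstacle point in the descriptive point cloud lies
        within 'clearance' of the waypoint (illustrative threshold)."""
        dists = np.linalg.norm(np.asarray(point_cloud, float) - np.asarray(waypoint, float), axis=1)
        return bool((dists < clearance).any())

    def circumnavigate(path, point_cloud, side_step=(0.0, 1.0)):
        """Shift blocked waypoints sideways until they clear the mapped
        obstacles, so the path skirts around them (crude planner stand-in)."""
        detour = []
        for wp in path:
            wp = np.asarray(wp, float)
            while blocked(wp, point_cloud):
                wp = wp + np.asarray(side_step, float)
            detour.append(tuple(wp))
        return detour

    cloud = [(2.0, 0.0), (2.1, 0.2)]                 # mapped obstacle points
    print(circumnavigate([(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)], cloud))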
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Zhang into the invention of Adams to include various path plans as Zhang discloses, with a reasonable expectation of success. One would be motivated to incorporate aspects of the cited prior art to create a more robust system that can vary exploration paths when encountering unexpected obstacles in unknown areas. Additionally, the claimed invention is merely a combination of old, well-known elements: an autonomous system performing localization when position signals (GPS) are unreliable, as disclosed by Adams, and varying exploration paths, as taught by Zhang. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the results of the combination would have been predictable.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IZCALLI ANDRE RIOS-AGUIRRE whose telephone number is (571)272-0790. The examiner can normally be reached Monday through Friday 8:30 - 17:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott A. Browne can be reached at (571) 270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/I.A.R./ Examiner, Art Unit 3666
/SCOTT A BROWNE/ Supervisory Patent Examiner, Art Unit 3666