DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments, see the section titled “Rejections under 35 U.S.C. 103” starting on page 9 of the reply filed 01/05/2026, have been fully considered but they are not persuasive.
For example, Applicant argues that the combination of Siessegger, Tsukada, and Baalke fails to teach or suggest every limitation of claim 1. The Examiner respectfully disagrees.
It is the Examiner’s opinion that Siessegger discloses the use of the method in a roadway environment (In paragraph [0052], Siessegger discloses that area 10 is a hallway or parking garage or roadway (whether indoor or outdoor)). Furthermore, it is the Examiner’s opinion that Tsukada teaches where the vehicle travels according to a predetermined travel route and controls the vehicle to navigate along a modified travel route to remediate a detected deviation from the travel route (In paragraphs [0108-0109], Tsukada teaches that by disposing the markers 2 at appropriate intervals along the traveling route, the mobile robot 1 can perform autonomous traveling using the marker 2 as clues, and in this case, the mobile robot 1 travels while successively recognizing the disposed markers 2 and measures deviation between the mobile robot 1 and the traveling route in the vicinity of the markers 2, then, if the deviation exceeds a set range, the mobile robot 1 may correct the self-position or correct the route by finely adjusting the traveling direction, for example).
Therefore, the grounds of rejection are maintained. See the detailed rejections below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 9-14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Siessegger (US 2018/0350098 A1), in view of Tsukada (US 2022/0215669 A1) and Baalke (US 11,222,299 B1).
Regarding claim 1, Siessegger discloses a method to spatially position a vehicle in transit through a physical environment to supplement location information related to the physical environment that is obtained from a remote system, the method comprising:
detecting, by a sensor of a vehicle while the vehicle is travelling along a roadway within an outdoor physical environment, the vehicle being a roadway vehicle, a plurality of features including a first object adjacent to and along the roadway in the outdoor physical environment based on frames of sensor data of the outdoor physical environment acquired by the sensor (In paragraph [0089], Siessegger discloses receiving 504 an image of an asymmetric fiducial pattern displayed by a luminaire within the area, whereby the vehicle includes a sensor configured to process an image of a light-based asymmetric fiducial pattern displayed by a luminaire, and as the luminaire enters the FOV of the sensor disposed on the vehicle, the sensor receives an image of the asymmetric fiducial pattern and transmits the image to a processor, operatively coupled thereto, and the processor, in turn, is configured to analyze the asymmetric fiducial pattern to determine a relative position of the vehicle from the luminaire; see also paragraph [0052] where Siessegger discloses that area 10 is a hallway or parking garage or roadway (whether indoor or outdoor); see also paragraph [0059] in which Siessegger discloses that the sensor may be a global shutter camera; see also paragraph [0157] where Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described);
generating, by a local processor of the vehicle and based at least in part on the location information obtained from the remote system, a digital map of the outdoor physical environment, the digital map depicting the plurality of features and a location of the vehicle relative to the plurality of features (In paragraph [0093], Siessegger discloses that luminaire layout information may include, for example, maps, look-up tables, or database content that identify or otherwise indicate luminaire locations within the area, where a map, in some embodiments, is a virtual representation of an area for determining a location of a vehicle based on a luminaire identifier, such as a number, symbol, or text; in paragraph [0104], Siessegger discloses that sensors, such as accelerometers and gyroscopes, are configured to measure luminaire movement, where these measurements can be transmitted via wired or wireless networks to a remote computing system and/or vehicle, and using these measurements, the system can update the luminaire location information and/or modify relative position calculations; the vehicle, in some embodiments, is configured to update and/or modify information without additional instructions and/or commands from the network);
detecting, by a local processor of the vehicle and based on the frames of sensor data, a first target having a visual encoding (In paragraph [0089], Siessegger discloses receiving 504 an image of an asymmetric fiducial pattern displayed by a luminaire within the area, whereby the vehicle includes a sensor configured to process an image of a light-based asymmetric fiducial pattern displayed by a luminaire, and as the luminaire enters the FOV of the sensor disposed on the vehicle, the sensor receives an image of the asymmetric fiducial pattern and transmits the image to a processor, operatively coupled thereto, and the processor, in turn, is configured to analyze the asymmetric fiducial pattern to determine a relative position of the vehicle from the luminaire);
decoding, by the local processor of the vehicle, the visual encoding into a first indication of location corresponding to the first object (In paragraph [0092], Siessegger discloses determining 508 a coordinate position of the luminaire based on the received image of the asymmetric fiducial pattern displayed by the luminaire, wherein the fiducial pattern can be decoded to retrieve luminaire position data (e.g., a luminaire identifier) to determine a vehicle location within the area; for example, the asymmetric light-based fiducial pattern is decoded using a fiducial pattern sequence to determine a luminaire identifier associated with a particular location within the area, such as a luminaire coordinate location; in paragraphs [0062-0063], Siessegger discloses that the receiver 208 includes a processor 216 and a memory 220 accessible by the processor 216, wherein memory 220 can be of any suitable type (e.g., RAM and/or ROM, or other suitable memory) and size, and in some cases may be implemented with volatile memory, non-volatile memory, or a combination thereof; see also paragraphs [0082-0087] where Siessegger discloses an embodiment wherein a system 400 may allow for communicative coupling with a network 404 and one or more servers or computer systems 408 including a processor 416 and a memory 420 accessible by the processor 416);
generating, by the local processor of the vehicle during movement of the vehicle along the roadway and based on the first indication of location, a location metric corresponding to the vehicle (In paragraphs [0094-0095], Siessegger discloses determining 512 an orientation of the vehicle relative to the area based on an orientation of the asymmetric fiducial pattern for the received image and determining 516 a position of the vehicle relative to the luminaire based at least in part on the determined coordinate position of the luminaire);
updating, by the local processor of the vehicle using the location metric, the location of the vehicle in the digital map (In paragraph [0093], Siessegger discloses that luminaire layout information may include, for example, maps, look-up tables, or database content that identify or otherwise indicate luminaire locations within the area, where a map, in some embodiments, is a virtual representation of an area for determining a location of a vehicle based on a luminaire identifier, such as a number, symbol, or text; in paragraph [0104], Siessegger discloses that sensors, such as accelerometers and gyroscopes, are configured to measure luminaire movement, where these measurements can be transmitted via wired or wireless networks to a remote computing system and/or vehicle, and using these measurements, the system can update the luminaire location information and/or modify relative position calculations; the vehicle, in some embodiments, is configured to update and/or modify information without additional instructions and/or commands from the network, or the updating of luminaire location information occurs at a central processor, such as a server or a remote computing system, while modifications to relative position calculations may be performed locally by the vehicles); and
controlling, by the local processor of the vehicle, operation of the vehicle to navigate the vehicle (In paragraphs [0052-0053], Siessegger discloses that the vehicle 90 may be an autonomous vehicle capable of sensing its environment and navigating without human input wherein the vehicle 90 navigates the area 10 using positioning information received from one or more luminaires 100).
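For illustration only, and not as part of the record, the decoding, generating, and updating steps mapped above may be sketched as follows; the bit encoding, identifier format, layout table, and offset measurement below are hypothetical assumptions and are not drawn from Siessegger:

```python
# Hypothetical sketch: a fiducial pattern is decoded to a luminaire
# identifier; the identifier is looked up in luminaire layout information
# (a map or look-up table in the sense of Siessegger [0093]) to obtain a
# first indication of location; the location metric is that coordinate
# plus the measured vehicle-to-luminaire offset; and the vehicle location
# depicted in the digital map is updated with that metric.

LUMINAIRE_LAYOUT = {
    "L-0417": (12.5, 3.0),  # identifier -> (x, y) coordinates on a grid map
    "L-0418": (18.5, 3.0),
}

def decode_fiducial(pattern_bits: str) -> str:
    """Decode a fiducial bit pattern into a luminaire identifier.
    The encoding scheme here is illustrative only."""
    return "L-" + format(int(pattern_bits, 2), "04d")

def first_indication_of_location(pattern_bits: str):
    """Return the coordinate position associated with the decoded identifier."""
    return LUMINAIRE_LAYOUT[decode_fiducial(pattern_bits)]

def location_metric(luminaire_xy, offset_xy):
    """Vehicle position = luminaire coordinate + measured offset."""
    return (luminaire_xy[0] + offset_xy[0], luminaire_xy[1] + offset_xy[1])

digital_map = {"features": ["L-0417", "L-0418"], "vehicle_location": None}
luminaire_xy = first_indication_of_location("110100001")  # -> (12.5, 3.0)
digital_map["vehicle_location"] = location_metric(luminaire_xy, (-1.2, 0.4))
print(digital_map["vehicle_location"])  # (11.3, 3.4)
```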
Although in paragraph [0139] Siessegger discloses that the luminaire positions may be communicated as a relative position (e.g., relative to another luminaire 1100, or some other object having a known position), and/or as an absolute position (e.g., x-y coordinates of a grid-based map), Siessegger does not explicitly disclose wherein the vehicle is travelling according to a predetermined travel route;
the digital map depicting the plurality of features and a location of the vehicle relative to the predetermined travel route;
wherein the first target having a visual encoding is located on a surface of the first object,
the first indication of location including at least one of a latitude, a longitude, or an elevation;
detecting, by the local processor of the vehicle, a deviation of the updated location of the vehicle from an expected location along the predetermined travel route of the vehicle;
modifying, by the local processor of the vehicle, the predetermined travel route to remediate the deviation; and
controlling, by the local processor of the vehicle, operation of the vehicle to navigate the vehicle along the modified travel route.
However, Tsukada teaches wherein the vehicle is travelling according to a predetermined travel route (In paragraph [0047], Tsukada teaches that information indicating a traveling route from a departure point to a destination point in autonomous traveling is stored in the mobile robot 1 in advance);
the digital map depicting the plurality of features and a location of the vehicle relative to the predetermined travel route (In paragraph [0047], Tsukada teaches that information indicating a traveling route from a departure point to a destination point in autonomous traveling is stored in the mobile robot 1 in advance; in paragraph [0051], Tsukada teaches an environment map stored in advance that includes information on the surrounding environment such as the arrangement of objects and walls);
wherein the first target having a visual encoding is located on a surface of the first object (In paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area; for example, a marker 2a is a filled circular marker, a marker 2b is an unfilled circular (annular) marker, and a marker 2c is a circular (concentric) marker made up of two circles, and in this way, the appearances of the markers 2 are different from each other, and therefore the mobile robot 1 can identify which marker 2 appears in a captured image through image analysis, and the mobile robot 1 can specify the self-position in the absolute coordinate system by referencing information that indicates the position of the specified marker 2),
detecting, by the local processor of the vehicle, a deviation of the updated location of the vehicle from an expected location along the predetermined travel route of the vehicle (In paragraphs [0108-0109], Tsukada teaches that by disposing the markers 2 at appropriate intervals along the traveling route, the mobile robot 1 can perform autonomous traveling using the marker 2 as clues, and in this case, the mobile robot 1 travels while successively recognizing the disposed markers 2 and measures deviation between the mobile robot 1 and the traveling route in the vicinity of the markers 2, then, if the deviation exceeds a set range, the mobile robot 1 may correct the self-position or correct the route by finely adjusting the traveling direction, for example);
modifying, by the local processor of the vehicle, the predetermined travel route to remediate the deviation (In paragraphs [0108-0109], Tsukada teaches that by disposing the markers 2 at appropriate intervals along the traveling route, the mobile robot 1 can perform autonomous traveling using the marker 2 as clues, and in this case, the mobile robot 1 travels while successively recognizing the disposed markers 2 and measures deviation between the mobile robot 1 and the traveling route in the vicinity of the markers 2, then, if the deviation exceeds a set range, the mobile robot 1 may correct the self-position or correct the route by finely adjusting the traveling direction, for example); and
controlling, by the local processor of the vehicle, operation of the vehicle to navigate the vehicle along the modified travel route (In paragraphs [0108-0109], Tsukada teaches that by disposing the markers 2 at appropriate intervals along the traveling route, the mobile robot 1 can perform autonomous traveling using the marker 2 as clues, and in this case, the mobile robot 1 travels while successively recognizing the disposed markers 2 and measures deviation between the mobile robot 1 and the traveling route in the vicinity of the markers 2, then, if the deviation exceeds a set range, the mobile robot 1 may correct the self-position or correct the route by finely adjusting the traveling direction, for example).
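For illustration only, the deviation-detection and route-correction behavior attributed to paragraphs [0108-0109] of Tsukada may be sketched as follows; the threshold value and the waypoint-splicing correction are assumptions, since Tsukada describes the correction only at the level of finely adjusting the traveling direction:

```python
import math

# Hypothetical sketch: measure the deviation of the updated vehicle
# location from the expected point on the predetermined route near a
# marker, and modify the remaining route when the deviation exceeds a
# set range (the threshold value is illustrative).

DEVIATION_LIMIT = 0.5  # meters; the "set range"

def deviation(updated_xy, expected_xy) -> float:
    """Euclidean distance between the updated and expected locations."""
    return math.dist(updated_xy, expected_xy)

def remediate(route, index, updated_xy):
    """Re-anchor the remaining route at the corrected position; a real
    planner would re-plan rather than splice waypoints."""
    return [updated_xy] + route[index + 1:]

route = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]  # predetermined travel route
updated = (5.0, 0.9)  # vehicle location after the marker-based update
if deviation(updated, route[1]) > DEVIATION_LIMIT:
    route = remediate(route, 1, updated)  # modified travel route
print(route)  # [(5.0, 0.9), (10.0, 0.0)]
```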
Tsukada is considered to be analogous to the claimed invention in that both pertain to localization of a robot by detecting a visual encoding disposed on a surface of objects in the environment, and to controlling the vehicle to move along a predetermined route and perform route deviation correction. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the markers as taught by Tsukada in place of or in addition to the method as disclosed by Siessegger, where the visual markers of Tsukada, for example, do not need to be powered and can act as a passive presentation of information, advantageously increasing the contextual utility of the visual encoding. Furthermore, allowing the vehicle to correct route deviations advantageously improves the operational accuracy of the vehicle, for example.
Although in paragraph [0051] Tsukada teaches that the mobile robot 1 specifies the positions (absolute coordinates) of the markers 2 by comparing the positional relationship between the detected markers 2 with an environment map stored in advance, for example, and accordingly, the mobile robot 1 can specify the self-position in the absolute coordinate system, wherein the environment map is a map that includes information on the surrounding environment such as the arrangement of objects and walls, the combination of Siessegger and Tsukada does not explicitly disclose the first indication of location including at least one of a latitude, a longitude, or an elevation.
However, Baalke teaches generating, based at least in part on the location information obtained from the remote system, a digital map of the outdoor physical environment, the digital map depicting the plurality of features and a location of the vehicle relative to the plurality of features (In column 2 lines 31-61, Baalke teaches a navigation map that comprises a plurality of geolocations in two-dimensional space, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, corresponding to the composition and surface features within the area or environment; see also column 15 lines 24-42 where Baalke teaches a position sensor such as a GPS receiver for determining geolocations, e.g., a geospatially-referenced point that precisely defines an exact location in space with one or more geocodes, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, that may be ascertained from signals (e.g., trilateration data or information) or geographic information system (or “GIS”) data, of the autonomous vehicle 250-i); and
the first indication of location including at least one of a latitude, a longitude, or an elevation (In column 2 lines 31-61, Baalke teaches a navigation map that comprises a plurality of geolocations in two-dimensional space, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, corresponding to the composition and surface features within the area or environment).
Baalke is considered to be analogous to the claimed invention in that both pertain to utilization of a map for navigation of a vehicle that includes positioning via a latitude, a longitude, and an elevation. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Baalke with the method as disclosed by the combination of Siessegger and Tsukada, where the Examiner understands that the use of a latitude, a longitude, and an elevation for positioning is well understood in the art, and may be implemented without undue experimentation, with a reasonable expectation of success, and with predictable results. Doing so may be advantageous in that the relative position of the environment as well as the vehicle may be understood in a global context by use of latitude, longitude, and elevation, improving the amount of positional information provided and the contextual accuracy of navigation using the information, for example, as well as the ease of utilizing the positional information with other data sets in the same coordinate system.
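For illustration only, an indication of location of the kind Baalke describes (column 2, lines 31-61) may be represented as below; the class and field names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a geolocation record: a set of geographic
# coordinates (latitude, longitude) and, optionally, an elevation, so
# that vehicle and feature positions share one global coordinate system.

@dataclass
class Geolocation:
    latitude: float                     # degrees
    longitude: float                    # degrees
    elevation: Optional[float] = None   # meters; optional per Baalke

first_indication = Geolocation(latitude=47.6205, longitude=-122.3493,
                               elevation=56.0)
print(first_indication)
```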
Regarding claim 2, the combination of Siessegger, Tsukada, and Baalke further discloses detecting, by the sensor of the vehicle, a second object in the outdoor physical environment, the first object and the second object at least partially surrounding the vehicle in the outdoor physical environment (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described; in paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area).
Regarding claim 3, the combination of Siessegger, Tsukada, and Baalke further discloses detecting, based on the frames of the sensor data, a second target having the visual encoding and located at a surface of the second object (In paragraph [0089], Siessegger discloses receiving 504 an image of an asymmetric fiducial pattern displayed by a luminaire within the area, whereby the vehicle includes a sensor configured to process an image of a light-based asymmetric fiducial pattern displayed by a luminaire, and as the luminaire enters the FOV of the sensor disposed on the vehicle, the sensor receives an image of the asymmetric fiducial pattern and transmits the image to a processor, operatively coupled thereto, and the processor, in turn, is configured to analyze the asymmetric fiducial pattern to determine a relative position of the vehicle from the luminaire; in paragraph [0157], Siessegger discloses that the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described); and
decoding, by the processor of the vehicle, the visual encoding into a second indication of location corresponding to the second object (In paragraph [0092], Siessegger discloses determining 508 a coordinate position of the luminaire based on the received image of the asymmetric fiducial pattern displayed by the luminaire, wherein the fiducial pattern can be decoded to retrieve luminaire position data (e.g., a luminaire identifier) to determine a vehicle location within the area; for example, the asymmetric light-based fiducial pattern is decoded using a fiducial pattern sequence to determine a luminaire identifier associated with a particular location within the area, such as a luminaire coordinate location; in paragraph [0157], Siessegger discloses that the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described),
the second indication of location including at least one of a latitude, a longitude, or an elevation (In column 2 lines 31-61, Baalke teaches a navigation map that comprises a plurality of geolocations in two-dimensional space, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, corresponding to the composition and surface features within the area or environment).
Regarding claim 4, the combination of Siessegger, Tsukada, and Baalke further discloses generating, by the processor of the vehicle and based on the first indication of location and the second indication of location, the location metric during movement of the vehicle through the outdoor physical environment while at least partially surrounded by the first object and the second object (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described; in paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area).
Regarding claim 5, the combination of Siessegger, Tsukada, and Baalke further discloses generating, by the processor of the vehicle and based on the first indication of location and the second indication of location, an orientation metric corresponding to a geometric feature including the first object and the second object in the physical environment (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described); and
modifying, by the local processor of the vehicle based on the orientation metric, operation of the vehicle to orient the vehicle in the outdoor physical environment according to the orientation metric (In paragraph [0037], Siessegger teaches that vehicle orientation can be determined based on vehicle positions relative to two or more known luminaire locations within the area, where a vehicle position within the area can be determined based on the relative position of the vehicle from a known luminaire location and a determined vehicle orientation relative to the area; in paragraphs [0052-0053], Siessegger discloses that the vehicle 90 may be an autonomous vehicle capable of sensing its environment and navigating without human input wherein the vehicle 90 navigates the area 10 using positioning information received from one or more luminaires 100; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described).
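For illustration only, an orientation metric corresponding to a geometric feature formed by two located objects, as discussed for claim 5, may be sketched as follows; the names and coordinate values are hypothetical:

```python
import math

# Hypothetical sketch: the orientation metric is the heading of the line
# through the first and second objects, computed from their two
# indications of location; the vehicle can then be oriented to it.

def orientation_metric(first_xy, second_xy) -> float:
    """Angle (radians, CCW from +x) of the line from the first object
    to the second object."""
    return math.atan2(second_xy[1] - first_xy[1],
                      second_xy[0] - first_xy[0])

heading = orientation_metric((12.5, 3.0), (18.5, 3.0))
print(math.degrees(heading))  # 0.0 -> objects aligned along the +x axis
```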
Regarding claim 9, Siessegger discloses a vehicle, comprising:
a sensor configured to detect, while the vehicle is travelling along a roadway within an outdoor physical environment, a plurality of features including a first object adjacent to and along the roadway and a first target having a visual encoding, based on frames of sensor data of the outdoor physical environment acquired by the sensor, the vehicle being a roadway vehicle (In paragraph [0089], Siessegger discloses receiving 504 an image of an asymmetric fiducial pattern displayed by a luminaire within the area, whereby the vehicle includes a sensor configured to process an image of a light-based asymmetric fiducial pattern displayed by a luminaire, and as the luminaire enters the FOV of the sensor disposed on the vehicle, the sensor receives an image of the asymmetric fiducial pattern and transmits the image to a processor, operatively coupled thereto, and the processor, in turn, is configured to analyze the asymmetric fiducial pattern to determine a relative position of the vehicle from the luminaire; see also paragraph [0052] where Siessegger discloses that area 10 is a hallway or parking garage or roadway (whether indoor or outdoor); see also paragraph [0059] in which Siessegger discloses that the sensor may be a global shutter camera; see also paragraph [0157] where Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described); and
a non-transitory memory and a processor programmed to spatially position the vehicle along the roadway to supplement location information related to the physical environment that is obtained from a remote system (In paragraphs [0062-0063], Siessegger discloses that the receiver 208 includes a processor 216 and a memory 220 accessible by the processor 216, wherein memory 220 can be of any suitable type (e.g., RAM and/or ROM, or other suitable memory) and size, and in some cases may be implemented with volatile memory, non-volatile memory, or a combination thereof; see also paragraphs [0082-0087] where Siessegger discloses an embodiment wherein a system 400 may allow for communicative coupling with a network 404 and one or more servers or computer systems 408 including a processor 416 and a memory 420 accessible by the processor 416; in paragraph [0104], Siessegger discloses that sensors, such as accelerometers and gyroscopes, are configured to measure luminaire movement, where these measurements can be transmitted via wired or wireless networks to a remote computing system and/or vehicle, and using these measurements, the system can update the luminaire location information and/or modify relative position calculations; the vehicle, in some embodiments, is configured to update and/or modify information without additional instructions and/or commands from the network, or the updating of luminaire location information occurs at a central processor, such as a server or a remote computing system, while modifications to relative position calculations may be performed locally by the vehicles), by:
generating, based at least in part on the location information obtained from the remote system, a digital map of the outdoor physical environment, the digital map depicting the plurality of features and a location of the vehicle relative to the plurality of features (In paragraph [0093], Siessegger discloses that luminaire layout information may include, for example, maps, look-up tables, or database content that identify or otherwise indicate luminaire locations within the area, where a map, in some embodiments, is a virtual representation of an area for determining a location of a vehicle based on a luminaire identifier, such as a number, symbol, or text; in paragraph [0104], Siessegger discloses that sensors, such as accelerometers and gyroscopes, are configured to measure luminaire movement, where these measurements can be transmitted via wired or wireless networks to a remote computing system and/or vehicle, and using these measurements, the system can update the luminaire location information and/or modify relative position calculations; the vehicle, in some embodiments, is configured to update and/or modify information without additional instructions and/or commands from the network, or the updating of luminaire location information occurs at a central processor, such as a server or a remote computing system, while modifications to relative position calculations may be performed locally by the vehicles);
decoding the visual encoding into a first indication of location corresponding to the first object (In paragraph [0092], Siessegger discloses determining 508 a coordinate position of the luminaire based on the received image of the asymmetric fiducial pattern displayed by the luminaire, wherein the fiducial pattern can be decoded to retrieve luminaire position data (e.g., a luminaire identifier) to determine a vehicle location within the area; for example, the asymmetric light-based fiducial pattern is decoded using a fiducial pattern sequence to determine a luminaire identifier associated with a particular location within the area, such as a luminaire coordinate location; in paragraphs [0062-0063], Siessegger discloses that the receiver 208 includes a processor 216 and a memory 220 accessible by the processor 216, wherein memory 220 can be of any suitable type (e.g., RAM and/or ROM, or other suitable memory) and size, and in some cases may be implemented with volatile memory, non-volatile memory, or a combination thereof; see also paragraphs [0082-0087] where Siessegger discloses an embodiment wherein a system 400 may allow for communicative coupling with a network 404 and one or more servers or computer systems 408 including a processor 416 and a memory 420 accessible by the processor 416);
generating, during movement of the vehicle along the roadway and based on the first indication of location, a location metric corresponding to the vehicle (In paragraphs [0094-0095], Siessegger discloses determining 512 an orientation of the vehicle relative to the area based on an orientation of the asymmetric fiducial pattern for the received image and determining 516 a position of the vehicle relative to the luminaire based at least in part on the determined coordinate position of the luminaire);
updating, using the location metric, the location of the vehicle in the digital map (In paragraph [0093], Siessegger discloses that luminaire layout information may include, for example, maps, look-up tables, or database content that identify or otherwise indicate luminaire locations within the area, where a map, in some embodiments, is a virtual representation of an area for determining a location of a vehicle based on a luminaire identifier, such as a number, symbol, or text; in paragraph [0104], Siessegger discloses that sensors, such as accelerometers and gyroscopes, are configured to measure luminaire movement, where these measurements can be transmitted via wired or wireless networks to a remote computing system and/or vehicle, and using these measurements, the system can update the luminaire location information and/or modify relative position calculations; the vehicle, in some embodiments, is configured to update and/or modify information without additional instructions and/or commands from the network, or the updating of luminaire location information occurs at a central processor, such as a server or a remote computing system, while modifications to relative position calculations may be performed locally by the vehicles); and
controlling operation of the vehicle to navigate the vehicle through the outdoor physical environment (In paragraphs [0052-0053], Siessegger discloses that the vehicle 90 may be an autonomous vehicle capable of sensing its environment and navigating without human input wherein the vehicle 90 navigates the area 10 using positioning information received from one or more luminaires 100).
Although in paragraph [0139] Siessegger discloses that the luminaire positions may be communicated as a relative position (e.g., relative to another luminaire 1100, or some other object having a known position), and/or as an absolute position (e.g., x-y coordinates of a grid-based map), Siessegger does not explicitly disclose wherein the vehicle is travelling according to a predetermined travel route;
the digital map depicting the plurality of features and a location of the vehicle relative to the predetermined travel route;
wherein the first target having a visual encoding is located on a surface of the first object,
the first indication of location including at least one of a latitude, a longitude, or an elevation;
detecting a deviation of the updated location of the vehicle from an expected location along the predetermined travel route of the vehicle;
modifying the predetermined travel route to remediate the deviation;
controlling operation of the vehicle to navigate the vehicle along the modified travel route.
However, Tsukada teaches wherein the vehicle is travelling according to a predetermined travel route (In paragraph [0047], Tsukada teaches that information indicating a traveling route from a departure point to a destination point in autonomous traveling is stored in the mobile robot 1 in advance);
the digital map depicting the plurality of features and a location of the vehicle relative to the predetermined travel route (In paragraph [0047], Tsukada teaches that information indicating a traveling route from a departure point to a destination point in autonomous traveling is stored in the mobile robot 1 in advance; in paragraph [0051], Tsukada teaches an environment map stored in advance that includes information on the surrounding environment such as the arrangement of objects and walls);
wherein the first target having a visual encoding is located on a surface of the first object (In paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area; for example, a marker 2a is a filled circular marker, a marker 2b is an unfilled circular (annular) marker, and a marker 2c is a circular (concentric) marker made up of two circles, and in this way, the appearances of the markers 2 are different from each other, and therefore the mobile robot 1 can identify which marker 2 appears in a captured image through image analysis, and the mobile robot 1 can specify the self-position in the absolute coordinate system by referencing information that indicates the position of the specified marker 2),
detecting a deviation of the updated location of the vehicle from an expected location along the predetermined travel route of the vehicle (In paragraphs [0108-0109], Tsukada teaches that by disposing the markers 2 at appropriate intervals along the traveling route, the mobile robot 1 can perform autonomous traveling using the marker 2 as clues, and in this case, the mobile robot 1 travels while successively recognizing the disposed markers 2 and measures deviation between the mobile robot 1 and the traveling route in the vicinity of the markers 2, then, if the deviation exceeds a set range, the mobile robot 1 may correct the self-position or correct the route by finely adjusting the traveling direction, for example);
modifying the predetermined travel route to remediate the deviation (In paragraphs [0108-0109], Tsukada teaches that by disposing the markers 2 at appropriate intervals along the traveling route, the mobile robot 1 can perform autonomous traveling using the marker 2 as clues, and in this case, the mobile robot 1 travels while successively recognizing the disposed markers 2 and measures deviation between the mobile robot 1 and the traveling route in the vicinity of the markers 2, then, if the deviation exceeds a set range, the mobile robot 1 may correct the self-position or correct the route by finely adjusting the traveling direction, for example); and
controlling operation of the vehicle to navigate the vehicle along the modified travel route (In paragraphs [0108-0109], Tsukada teaches that by disposing the markers 2 at appropriate intervals along the traveling route, the mobile robot 1 can perform autonomous traveling using the marker 2 as clues, and in this case, the mobile robot 1 travels while successively recognizing the disposed markers 2 and measures deviation between the mobile robot 1 and the traveling route in the vicinity of the markers 2, then, if the deviation exceeds a set range, the mobile robot 1 may correct the self-position or correct the route by finely adjusting the traveling direction, for example).
Tsukada is considered to be analogous to the claimed invention in that both pertain to localization of a robot by detecting a visual encoding disposed on a surface of objects in the environment, and to controlling the vehicle to move along a predetermined route and perform route deviation correction. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the markers as taught by Tsukada in place of or in addition to the vehicle as disclosed by Siessegger, where the visual markers of Tsukada, for example, do not need to be powered and can act as a passive presentation of information, advantageously increasing the contextual utility of the visual encoding. Furthermore, allowing the vehicle to correct route deviations advantageously improves the operational accuracy of the vehicle, for example.
Although in paragraph [0051] Tsukada teaches that the mobile robot 1 specifies the positions (absolute coordinates) of the markers 2 by comparing the positional relationship between the detected markers 2 with an environment map stored in advance, for example, and accordingly, the mobile robot 1 can specify the self-position in the absolute coordinate system, wherein the environment map is a map that includes information on the surrounding environment such as the arrangement of objects and walls, the combination of Siessegger and Tsukada does not explicitly disclose the first indication of location including at least one of a latitude, a longitude, or an elevation.
However, Baalke teaches generating, based at least in part on the location information obtained from the remote system, a digital map of the outdoor physical environment, the digital map depicting the plurality of features and a location of the vehicle relative to the plurality of features (In column 2 lines 31-61, Baalke teaches a navigation map that comprises a plurality of geolocations in two-dimensional space, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, corresponding to the composition and surface features within the area or environment; see also column 15 lines 24-42 where Baalke teaches a position sensor such as a GPS receiver for determining geolocations, e.g., a geospatially-referenced point that precisely defines an exact location in space with one or more geocodes, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, that may be ascertained from signals (e.g., trilateration data or information) or geographic information system (or “GIS”) data, of the autonomous vehicle 250-i); and
the first indication of location including at least one of a latitude, a longitude, or an elevation (In column 2 lines 31-61, Baalke teaches a navigation map that comprises a plurality of geolocations in two-dimensional space, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, corresponding to the composition and surface features within the area or environment).
Baalke is considered to be analogous to the claimed invention in that both pertain to utilization of a map for navigation of a vehicle that includes positioning via a latitude, a longitude, and an elevation. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Baalke with the vehicle as disclosed by the combination of Siessegger and Tsukada, where the Examiner understands that the use of a latitude, a longitude, and an elevation for positioning is well understood in the art, and may be implemented without undue experimentation, with a reasonable expectation of success, and with predictable results. Doing so may be advantageous in that the relative position of the environment as well as the vehicle may be understood in a global context by use of latitude, longitude, and elevation, improving the amount of positional information provided and the contextual accuracy of navigation using the information, for example, as well as the ease of utilizing the positional information with other data sets in the same coordinate system.
Regarding claim 10, the combination of Siessegger, Tsukada, and Baalke further discloses wherein the sensor is further configured to detect a second object in the outdoor physical environment, the first object and the second object at least partially surrounding the vehicle in the outdoor physical environment (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described; in paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area).
Regarding claim 11, the combination of Siessegger, Tsukada, and Baalke further discloses wherein the sensor is further configured to detect a second target having the visual encoding and located on a surface of the second object (In paragraph [0089], Siessegger discloses receiving 504 an image of an asymmetric fiducial pattern displayed by a luminaire within the area, whereby the vehicle includes a sensor configured to process an image of a light-based asymmetric fiducial pattern displayed by a luminaire, and as the luminaire enters the FOV of the sensor disposed on the vehicle, the sensor receives an image of the asymmetric fiducial pattern and transmits the image to a processor, operatively coupled thereto, and the processor, in turn, is configured to analyze the asymmetric fiducial pattern to determine a relative position of the vehicle from the luminaire; in paragraph [0157], Siessegger discloses that the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described); and
wherein the processor is further programmed to decode the visual encoding into a second indication of location corresponding to the second object (In paragraph [0092], Siessegger discloses determining 508 a coordinate position of the luminaire based on the received image of the asymmetric fiducial pattern displayed by the luminaire, wherein the fiducial pattern can be decoded to retrieve luminaire position data (e.g., a luminaire identifier) to determine a vehicle location within the area; for example, the asymmetric light-based fiducial pattern is decoded using a fiducial pattern sequence to determine a luminaire identifier associated with a particular location within the area, such as a luminaire coordinate location; in paragraph [0157], Siessegger discloses that the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described),
the second indication of location including at least one of a latitude, a longitude, or an elevation (In column 2 lines 31-61, Baalke teaches a navigation map that comprises a plurality of geolocations in two-dimensional space, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, corresponding to the composition and surface features within the area or environment).
Regarding claim 12, the combination of Siessegger, Tsukada, and Baalke further discloses wherein the processor is further programmed to generate, based on the first indication of location and the second indication of location, the location metric during movement of the vehicle through the outdoor physical environment while at least partially surrounded by the first object and the second object (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described; in paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area).
Regarding claim 13, the combination of Siessegger, Tsukada, and Baalke further discloses wherein the processor is further programmed to generate, based on the first indication of location and the second indication of location, an orientation metric corresponding to a geometric feature including the first object and the second object in the outdoor physical environment (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described; in paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area).
Regarding claim 14, Siessegger further discloses wherein the processor is further programmed to modify, based on the orientation metric, operation of the vehicle to orient the vehicle in the outdoor physical environment according to the orientation metric (In paragraphs [0052-0053], Siessegger discloses that the vehicle 90 may be an autonomous vehicle capable of sensing its environment and navigating without human input wherein the vehicle 90 navigates the area 10 using positioning information received from one or more luminaires 100).
Regarding claim 17, Siessegger discloses a non-transitory computer readable medium including one or more instructions stored thereon and executable by a processor to:
receive a digital representation of a visual encoding from a first target detected based on frames of sensor data acquired by a sensor of a vehicle travelling along a roadway within an outdoor physical environment, the vehicle being a roadway vehicle, the first target being located in the outdoor physical environment adjacent to and along the roadway (In paragraph [0089], Siessegger discloses receiving 504 an image of an asymmetric fiducial pattern displayed by a luminaire within the area, whereby the vehicle includes a sensor configured to process an image of a light-based asymmetric fiducial pattern displayed by a luminaire, and as the luminaire enters the FOV of the sensor disposed on the vehicle, the sensor receives an image of the asymmetric fiducial pattern and transmits the image to a processor, operatively coupled thereto, and the processor, in turn, is configured to analyze the asymmetric fiducial pattern to determine a relative position of the vehicle from the luminaire; see also paragraph [0052] where Siessegger discloses that area 10 is a hallway or parking garage or roadway (whether indoor or outdoor); see also paragraph [0059] in which Siessegger discloses that the sensor may be a global shutter camera);
generate, based at least in part on location information obtained from a remote system, a digital map of the outdoor physical environment, the digital map depicting a plurality of features including the first object and a location of the vehicle relative to the plurality of features (In paragraph [0093], Siessegger discloses that luminaire layout information may include, for example, maps, look-up tables, or database content that identify or otherwise indicate luminaire locations within the area, where a map, in some embodiments, is a virtual representation of an area for determining a location of a vehicle based on a luminaire identifier, such as a number, symbol, or text; in paragraph [0104], Siessegger discloses that sensors, such as accelerometers and gyroscopes, are configured to measure luminaire movement, where these measurements can be transmitted via wired or wireless networks to a remote computing system and/or vehicle, and using these measurements, the system can update the luminaire location information and/or modify relative position calculations; the vehicle, in some embodiments, is configured to update and/or modify information without additional instructions and/or commands from the network, or the updating of luminaire location information occurs at a central processor, such as a server or a remote computing system, while modifications to relative position calculations may be performed locally by the vehicles);
decode, by the processor, the visual encoding into a first indication of location corresponding to the first object (In paragraph [0089], Siessegger discloses receiving 504 an image of an asymmetric fiducial pattern displayed by a luminaire within the area, whereby the vehicle includes a sensor configured to process an image of a light-based asymmetric fiducial pattern displayed by a luminaire, and as the luminaire enters the FOV of the sensor disposed on the vehicle, the sensor receives an image of the asymmetric fiducial pattern and transmits the image to a processor, operatively coupled thereto, and the processor, in turn, is configured to analyze the asymmetric fiducial pattern to determine a relative position of the vehicle from the luminaire; in paragraph [0092], Siessegger discloses determining 508 a coordinate position of the luminaire based on the received image of the asymmetric fiducial pattern displayed by the luminaire, wherein the fiducial pattern can be decoded to retrieve luminaire position data (e.g., a luminaire identifier) to determine a vehicle location within the area, for example, the asymmetric light-based fiducial pattern is decoded using a fiducial pattern sequence to determine a luminaire identifier associated with a particular location within the area, for example a luminaire coordinate location; in paragraphs [0062-0063], Siessegger discloses that the receiver 208 includes a processor 216 and a memory 220 accessible by the processor 216, wherein memory 220 can be of any suitable type (e.g., RAM and/or ROM, or other suitable memory) and size, and in some cases may be implemented with volatile memory, non-volatile memory, or a combination thereof; see also paragraphs [0082-0087] where Siessegger discloses an embodiment wherein a system 400 may allow for communicative coupling with a network 404 and one or more servers or computer systems 408 including a processor 416 and a memory 420 accessible by the processor 416);
generate, by the processor during movement of the vehicle along the roadway and based on the first indication of location, a location metric corresponding to the vehicle (In paragraphs [0094-0095], Siessegger discloses determining 512 an orientation of the vehicle relative to the area based on an orientation of the asymmetric fiducial pattern for the received image and determining 516 a position of the vehicle relative to the luminaire based at least in part on the determined coordinate position of the luminaire);
update, by the processor and using the location metric, the location of the vehicle in the digital map (In paragraph [0093], Siessegger discloses that luminaire layout information may include, for example, maps, look-up tables, or database content that identify or otherwise indicate luminaire locations within the area, where a map, in some embodiments, is a virtual representation of an area for determining a location of a vehicle based on a luminaire identifier, such as a number, symbol, or text; in paragraph [0104], Siessegger discloses that sensors, such as accelerometers and gyroscopes, are configured to measure luminaire movement, where these measurements can be transmitted via wired or wireless networks to a remote computing system and/or vehicle, and using these measurements, the system can update the luminaire location information and/or modify relative position calculations; the vehicle, in some embodiments, is configured to update and/or modify information without additional instructions and/or commands from the network, or the updating of luminaire location information may occur at a central processor, such as a server or a remote computing system, while modifications to relative position calculations may be performed locally by the vehicles); and
control operation of the vehicle to navigate the vehicle (In paragraphs [0052-0053], Siessegger discloses that the vehicle 90 may be an autonomous vehicle capable of sensing its environment and navigating without human input wherein the vehicle 90 navigates the area 10 using positioning information received from one or more luminaires 100).
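By way of illustration only, and not as a characterization of the claims or of any cited reference, the decode-then-look-up flow recited in the limitations above (receive a visual encoding, decode it into an indication of location, and resolve that indication against stored layout information) could be sketched as follows in Python; the bit-string decoding scheme, function name, and layout table are hypothetical assumptions of the sketch.

def decode_indication_of_location(pattern_bits, layout_map):
    """Decode a visual encoding into an identifier, then resolve the identifier
    to a stored coordinate position via layout information (a look-up table)."""
    identifier = int("".join(str(b) for b in pattern_bits), 2)  # toy bit-string decode
    return layout_map.get(identifier)  # e.g., {42: (x, y)} -> (x, y), or None if unknown

# Hypothetical usage: pattern 101010 decodes to identifier 42, which the
# layout look-up table maps to a coordinate position within the area.
layout_map = {42: (12.5, 3.0)}
location = decode_indication_of_location([1, 0, 1, 0, 1, 0], layout_map)  # (12.5, 3.0)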
Although in paragraph [0139] Siessegger discloses that the luminaire positions may be communicated as a relative position (e.g., relative to another luminaire 1100, or some other object having a known position), and/or as an absolute position (e.g., x-y coordinates of a grid-based map), Siessegger does not explicitly disclose wherein the vehicle is travelling according to a predetermined travel route;
the digital map depicting the plurality of features and a location of the vehicle relative to the predetermined travel route;
wherein the first target is located on a surface of the first object,
the first indication of location including at least one of a latitude, a longitude, or an elevation;
detecting, by the processor, a deviation of the updated location of the vehicle from an expected location along the predetermined travel route of the vehicle;
modifying, by the processor, the predetermined travel route to remediate the deviation; and
controlling operation of the vehicle to navigate the vehicle along the modified travel route.
However, Tsukada teaches wherein the vehicle is travelling according to a predetermined travel route (In paragraph [0047], Tsukada teaches that information indicating a traveling route from a departure point to a destination point in autonomous traveling is stored in the mobile robot 1 in advance);
the digital map depicting the plurality of features and a location of the vehicle relative to the predetermined travel route (In paragraph [0047], Tsukada teaches that information indicating a traveling route from a departure point to a destination point in autonomous traveling is stored in the mobile robot 1 in advance; in paragraph [0051], Tsukada teaches an environment map stored in advance that includes information on the surrounding environment such as the arrangement of objects and walls);
wherein the first target is located on a surface of the first object (In paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area; for example, a marker 2a is a filled circular marker, a marker 2b is an unfilled circular (annular) marker, and a marker 2c is a circular (concentric) marker made up of two circles, and in this way, the appearances of the markers 2 are different from each other, and therefore the mobile robot 1 can identify which marker 2 appears in a captured image through image analysis and the mobile robot 1 can specify the self-position in the absolute coordinate system by referencing information that indicates the position of the specified marker 2),
detecting, by the processor, a deviation of the updated location of the vehicle from an expected location along the predetermined travel route of the vehicle (In paragraphs [0108-0109], Tsukada teaches that by disposing the markers 2 at appropriate intervals along the traveling route, the mobile robot 1 can perform autonomous traveling using the marker 2 as clues, and in this case, the mobile robot 1 travels while successively recognizing the disposed markers 2 and measures deviation between the mobile robot 1 and the traveling route in the vicinity of the markers 2, then, if the deviation exceeds a set range, the mobile robot 1 may correct the self-position or correct the route by finely adjusting the traveling direction, for example);
modifying, by the processor, the predetermined travel route to remediate the deviation (In paragraphs [0108-0109], Tsukada teaches that by disposing the markers 2 at appropriate intervals along the traveling route, the mobile robot 1 can perform autonomous traveling using the marker 2 as clues, and in this case, the mobile robot 1 travels while successively recognizing the disposed markers 2 and measures deviation between the mobile robot 1 and the traveling route in the vicinity of the markers 2, then, if the deviation exceeds a set range, the mobile robot 1 may correct the self-position or correct the route by finely adjusting the traveling direction, for example); and
controlling operation of the vehicle to navigate the vehicle along the modified travel route (In paragraphs [0108-0109], Tsukada teaches that by disposing the markers 2 at appropriate intervals along the traveling route, the mobile robot 1 can perform autonomous traveling using the marker 2 as clues, and in this case, the mobile robot 1 travels while successively recognizing the disposed markers 2 and measures deviation between the mobile robot 1 and the traveling route in the vicinity of the markers 2, then, if the deviation exceeds a set range, the mobile robot 1 may correct the self-position or correct the route by finely adjusting the traveling direction, for example).
Tsukada is considered to be analogous to the claimed invention in that both pertain to localization of a robot by detecting a visual encoding disposed on a surface of objects in the environment, and to controlling the vehicle to move along a predetermined route and perform route-deviation correction. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the markers as taught by Tsukada in place of, or in addition to, the luminaires as disclosed by Siessegger, where the visual markers of Tsukada, for example, do not need to be powered and can act as a passive presentation of information, advantageously increasing the contextual utility of the visual encoding. Furthermore, allowing the vehicle to correct route deviations advantageously improves the operational accuracy of the vehicle, for example.
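Purely as an illustration of the deviation-correction behavior described with respect to paragraphs [0108-0109] of Tsukada (measure the deviation from the traveling route near a marker; if the deviation exceeds a set range, finely adjust the traveling direction), a minimal Python sketch follows; the signed-offset computation, tolerance, and gain are assumptions of the sketch, not teachings of the reference.

import math

def lateral_deviation(pos, seg_start, seg_end):
    """Signed perpendicular offset of the measured position from a route segment."""
    (px, py), (ax, ay), (bx, by) = pos, seg_start, seg_end
    dx, dy = bx - ax, by - ay
    # z-component of the cross product of (pos - start) with the segment direction,
    # normalized by segment length, gives the signed lateral offset.
    return ((px - ax) * dy - (py - ay) * dx) / math.hypot(dx, dy)

def corrected_heading(pos, heading, seg_start, seg_end, tolerance=0.5, gain=0.8):
    """If the deviation exceeds a set range, finely adjust the traveling direction."""
    dev = lateral_deviation(pos, seg_start, seg_end)
    if abs(dev) <= tolerance:
        return heading                             # within the set range: no correction
    return heading + gain * math.atan2(-dev, 1.0)  # steer back toward the route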
Although in paragraph [0051] Tsukada teaches that the mobile robot 1 specifies the positions (absolute coordinates) of the markers 2 by comparing the positional relationship between the detected markers 2 with an environment map stored in advance, for example, such that the mobile robot 1 can accordingly specify the self-position in the absolute coordinate system, wherein the environment map is a map that includes information on the surrounding environment such as the arrangement of objects and walls, the combination of Siessegger and Tsukada does not explicitly disclose the first indication of location including at least one of a latitude, a longitude, or an elevation.
However, Baalke teaches generating, based at least in part on the location information obtained from a remote system, a digital map of the outdoor physical environment, the digital map depicting a plurality of features and a location of the vehicle relative to the plurality of features (In column 2 lines 31-61, Baalke teaches a navigation map that comprises a plurality of geolocations in two-dimensional space, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, corresponding to the composition and surface features within the area or environment; see also column 15 lines 24-42 where Baalke teaches a position sensor such as a GPS receiver for determining geolocations e.g., geospatially-referenced point that precisely defines an exact location in space with one or more geocodes, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation that may be ascertained from signals (e.g., trilateration data or information) or geographic information system (or “GIS”) data, of the autonomous vehicle 250-i); and
the first indication of location including at least one of a latitude, a longitude, or an elevation (In column 2 lines 31-61, Baalke teaches a navigation map that comprises a plurality of geolocations in two-dimensional space, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, corresponding to the composition and surface features within the area or environment).
Baalke is considered to be analogous to the claimed invention in that both pertain to the utilization of a map for navigation of a vehicle that includes positioning via a latitude, a longitude, and an elevation. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Baalke with the non-transitory computer readable medium as disclosed by the combination of Siessegger and Tsukada, where the Examiner understands that the use of a latitude, a longitude, and an elevation for positioning is well understood in the art and may be implemented without undue experimentation, with a reasonable expectation of success and predictable results. Doing so may be advantageous in that the relative position of the environment, as well as of the vehicle, may be understood in a global context through the use of latitude, longitude, and elevation, improving the amount of positional information provided and the contextual accuracy of navigation using that information, for example, as well as the ease of utilizing the positional information with other data sets in the same coordinate system.
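As an illustrative aside on why geographic coordinates ease the use of positional information across data sets in the same coordinate system, the sketch below converts two (latitude, longitude, elevation) geolocations into a local metric offset; the equirectangular approximation, constant, and function name are assumptions of the sketch, not content of Baalke.

import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius; adequate for a local approximation

def geodetic_offset_m(origin, point):
    """Approximate east/north/up offset (metres) of `point` from `origin`, each
    given as (latitude_deg, longitude_deg, elevation_m); equirectangular
    approximation, reasonable over the short ranges involved in local navigation."""
    lat0, lon0, elev0 = origin
    lat1, lon1, elev1 = point
    east = math.radians(lon1 - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    north = math.radians(lat1 - lat0) * EARTH_RADIUS_M
    up = elev1 - elev0
    return east, north, up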
Regarding claim 18, the combination of Siessegger, Tsukada, and Baalke further discloses wherein the computer readable medium further includes one or more instructions executable by the processor to:
receive a digital representation of a visual encoding from a second target detected based on frames of sensor data acquired by the sensor, the second target being located on a surface of a second object located in the outdoor physical environment (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image, and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described; in paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area);
decode, by the processor, the visual encoding into a second indication of location corresponding to the second object (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image, and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described; in paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area),
the second indication of location including at least one of a latitude, a longitude, or an elevation (In column 2 lines 31-61, Baalke teaches a navigation map that comprises a plurality of geolocations in two-dimensional space, such as a set of geographic coordinates, e.g., a latitude and a longitude, and, optionally, an elevation, corresponding to the composition and surface features within the area or environment),
the first object and the second object at least partially surrounding the vehicle in the physical environment (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image, and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described; in paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area).
Regarding claim 19, the combination of Siessegger, Tsukada, and Baalke further discloses wherein the computer readable medium further includes one or more instructions executable by the processor to:
generate, by the processor and based on the first indication of location and the second indication of location, the location metric during movement of the vehicle through the outdoor physical environment while at least partially surrounded by the first object and the second object (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image, and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described; in paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area).
Regarding claim 20, the combination of Siessegger, Tsukada, and Baalke further discloses wherein the computer readable medium further includes one or more instructions executable by the processor to:
generate, by the processor and based on the first indication of location and the second indication of location, an orientation metric corresponding to a geometric feature including the first object and the second object in the outdoor physical environment (In paragraph [0114], Siessegger discloses that the system is configured to process multiple images of luminaires within the area to determine multiple luminaire locations and relative vehicle distances therefrom; in paragraph [0157], Siessegger discloses that in some embodiments of the present disclosure, the system is configured to identify multiple luminaires from a single received image, wherein multiple luminaires are within the FOV of the sensor, such that the received image includes multiple luminaires, and with multiple luminaires identified within a single image, the system is configured to determine luminaire locations for each luminaire shown in the image, and the determined luminaire locations are used to determine a vehicle position and orientation relative to the area, as previously described; in paragraphs [0048-0049], Tsukada teaches that the mobile robot 1 is provided with an optical sensor such as a camera, where the mobile robot 1 can acquire images of the surroundings at its current position, attempt to detect the area where the markers 2 appear in the acquired images, and if the mobile robot 1 detects an area where a marker 2 appears, it is possible to specify which marker 2 the detected marker 2 is by performing image analysis on the detected area).
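For illustration of an orientation metric derived from two indications of location, the following Python sketch computes the bearing of the geometric feature through the first and second objects; the (latitude, longitude) inputs, function name, and bearing convention are hypothetical assumptions of the sketch rather than disclosure of any cited reference.

import math

def orientation_metric(first_loc, second_loc):
    """Bearing, in degrees clockwise from north, of the line through the first
    and second objects, from their decoded (latitude, longitude) indications."""
    (lat1, lon1), (lat2, lon2) = first_loc, second_loc
    d_east = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
    d_north = math.radians(lat2 - lat1)
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

# Hypothetical usage: the second object lies due east of the first
bearing = orientation_metric((40.0000, -75.0000), (40.0000, -74.9990))  # ~90 degrees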
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Siessegger (US 2018/0350098 A1), Tsukada (US 2022/0215669 A1), and Baalke (US 11,222,299 B1), in view of Hari (US 2023/0112004 A1).
Regarding claim 7, although in paragraph [0109] Siessegger discloses that the image capturing device may have a frame rate of 240 frames-per-second, the combination of Siessegger, Tsukada, and Baalke does not explicitly disclose wherein the detecting the first target comprises recognizing the first target within the frames of sensor data at a frequency greater than 1 Hz.
However, Hari teaches wherein the detecting the first target comprises recognizing the first target within the frames of sensor data at a frequency greater than 1 Hz (In paragraphs [0075-0076], Hari teaches determining if the current frame processing rate of the autonomous vehicle hardware is less than the tolerable latency 430 at each timestep, where an FPS perception 450 of the autonomous vehicle system is the FPS at which the hardware of the system can process the camera images, where for example the minimum sensor processing rate may be one FPS for the side cameras and six FPS for the front camera while there are excess hardware resources; see also paragraphs [0059-0064] where Hari teaches that the image processing includes, for example, object recognition).
Hari is considered to be analogous to the claimed invention in that both pertain to performing image recognition at a frequency greater than 1 Hz. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Hari with the method as disclosed by the combination of Siessegger, Tsukada, and Baalke, where “if there are more detected objects in the front of the autonomous vehicle, and there are no detected objects on the side, and the minimum sensor processing rate we predict is one FPS for the side cameras and six FPS for the front camera while there are excess hardware resources, it may be preferable to increase the resources for the front cameras at a normalize frame processing rate while keeping the side camera processing hardware resources low until some future time” as suggested by Hari in paragraph [0076]. Doing so may be advantageous in that relevant calculations may be prioritized for navigating the vehicle, while maintaining a rate of processing that allows operation of the vehicle to appropriately react to the environment as it is detected, for example.
Regarding claim 15, although in paragraph [0109] Siessegger discloses that the image capturing device may have a frame rate of 240 frames-per-second, the combination of Siessegger, Tsukada, and Baalke does not explicitly disclose wherein the sensor is further configured to detect the first target by recognizing the first target within the frames of sensor data at a frequency greater than 1 Hz.
However, Hari teaches wherein the sensor is further configured to detect the first target by recognizing the first target within the frames of sensor data at a frequency greater than 1 Hz (In paragraphs [0075-0076], Hari teaches determining if the current frame processing rate of the autonomous vehicle hardware is less than the tolerable latency 430 at each timestep, where an FPS perception 450 of the autonomous vehicle system is the FPS at which the hardware of the system can process the camera images, where for example the minimum sensor processing rate may be one FPS for the side cameras and six FPS for the front camera while there are excess hardware resources; see also paragraphs [0059-0064] where Hari teaches that the image processing includes, for example, object recognition).
Hari is considered to be analogous to the claimed invention in that both pertain to performing image recognition at a frequency greater than 1 Hz. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Hari with the vehicle as disclosed by the combination of Siessegger, Tsukada, and Baalke, where “if there are more detected objects in the front of the autonomous vehicle, and there are no detected objects on the side, and the minimum sensor processing rate we predict is one FPS for the side cameras and six FPS for the front camera while there are excess hardware resources, it may be preferable to increase the resources for the front cameras at a normalize frame processing rate while keeping the side camera processing hardware resources low until some future time” as suggested by Hari in paragraph [0076]. Doing so may be advantageous in that relevant calculations may be prioritized for navigating the vehicle, while maintaining a rate of processing that allows operation of the vehicle to appropriately react to the environment as it is detected, for example.
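To illustrate the greater-than-1-Hz recognition rate at issue in claims 7 and 15, a minimal Python sketch follows; the timestamp bookkeeping and function name are assumptions of the sketch, not teachings of Hari.

def recognition_rate_hz(detection_times):
    """Frequency (detections per second) at which the first target is recognized
    within the frames of sensor data, from the timestamps of frames containing it."""
    if len(detection_times) < 2:
        return 0.0
    span = detection_times[-1] - detection_times[0]
    return (len(detection_times) - 1) / span if span > 0 else 0.0

# Hypothetical usage: a target recognized in every 10th frame of a 240 fps stream
times = [i * (10 / 240) for i in range(25)]  # one detection every ~41.7 ms
assert recognition_rate_hz(times) > 1.0       # 24 Hz, well above 1 Hz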
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Siessegger (US 2018/0350098 A1), Tsukada (US 2022/0215669 A1), and Baalke (US 11,222,299 B1), in view of Hayashi (US 2020/0250411 A1).
Regarding claim 8, the combination of Siessegger, Tsukada, and Baalke does not explicitly disclose modifying a portion of image data or video data acquired by the sensor including the detected first target based on a magnitude of velocity of the automated vehicle.
However, Hayashi teaches modifying a portion of image data or video data acquired by the sensor including the detected first target based on a magnitude of velocity of the automated vehicle (In paragraph [0091], Hayashi teaches an image correction unit 126 that executes correction processing for reducing blurring of the subject in the image acquired by the image acquisition unit 110 based on the relative velocity between the vehicle and the subject).
Hayashi is considered to be analogous to the claimed invention in that both pertain to modifying image data based on the velocity of the vehicle. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Hayashi with the method as disclosed by the combination of Siessegger, Tsukada, and Baalke, where doing so reduces blurring caused by the depth of field of the imaging apparatus 30 and so-called motion blur, as suggested by Hayashi in paragraph [0091], improving the accuracy of the information portrayed by the image data and thereby improving the accuracy of determinations that utilize the image data, such as object recognition, for example.
Regarding claim 16, the combination of Siessegger, Tsukada, and Baalke does not explicitly disclose wherein the processor is further configured to modify a portion of image data or video data including the detected first target based on a magnitude of velocity of the automated vehicle.
However, Hayashi teaches wherein the processor is further configured to modify a portion of image data or video data including the detected first target based on a magnitude of velocity of the automated vehicle (In paragraph [0091], Hayashi teaches an image correction unit 126 that executes correction processing for reducing blurring of the subject in the image acquired by the image acquisition unit 110 based on the relative velocity between the vehicle and the subject).
Hayashi is considered to be analogous to the claimed invention in that both pertain to modifying image data based on the velocity of the vehicle. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Hayashi with the vehicle as disclosed by the combination of Siessegger, Tsukada, and Baalke, where doing so reduces blurring caused by the depth of field of the imaging apparatus 30 and so-called motion blur, as suggested by Hayashi in paragraph [0091], improving the accuracy of the information portrayed by the image data and thereby improving the accuracy of determinations that utilize the image data, such as object recognition, for example.
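As a purely illustrative sketch of modifying only the portion of image data containing the detected target, scaled by the vehicle's speed, the Python below applies a speed-weighted unsharp mask to a region of interest; the blur kernel, gain, and all names are assumptions of the sketch and not the correction processing of Hayashi.

import numpy as np

def deblur_target_region(frame, bbox, speed_mps, k=0.2):
    """Sharpen only the region containing the detected target, with strength
    scaled by vehicle speed (a stand-in for motion-blur severity).
    frame: 2-D grayscale array; bbox: (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = bbox
    roi = frame[r0:r1, c0:c1].astype(float)
    # 3x3 box blur via shifted averages (note np.roll wraps at edges; fine for a sketch)
    blurred = sum(np.roll(np.roll(roi, dr, 0), dc, 1)
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)) / 9.0
    amount = k * speed_mps                       # stronger correction at higher speed
    sharpened = roi + amount * (roi - blurred)   # unsharp mask
    out = frame.astype(float)
    out[r0:r1, c0:c1] = np.clip(sharpened, 0, 255)
    return out.astype(frame.dtype)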
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Siessegger (US 2018/0350098 A1), Tsukada (US 2022/0215669 A1), and Baalke (US 11,222,299 B1), in view of Holz (US 2018/0180421 A1).
Regarding claim 21, the combination of Siessegger, Tsukada, and Baalke does not explicitly disclose determining, by the processor of the vehicle, that a deviation between the second indication of location and the first indication of location exceeds an outlier threshold; and
deselecting the second indication of location from further determinations of the location of the vehicle.
However, Holz teaches determining, by the processor of the vehicle, that a deviation between the second indication of location and the first indication of location exceeds an outlier threshold (In paragraph [0030], Holz teaches that the distances between the transformed candidate landmarks and neighboring mapped landmarks (e.g., closest mapped landmarks) may be compared to an inlier threshold distance, where transformed candidate landmarks with distances to neighboring mapped landmarks that are less than or equal to the inlier threshold distance may be referred to as “inliers” and transformed candidate landmarks with distances to neighboring mapped landmarks that are greater than the threshold value may be referred to as “outliers,” where an inlier may indicate that the transformed subset accurately aligned the associated candidate landmark with a neighboring mapped landmark, while an outlier may indicate the opposite); and
deselecting the second indication of location from further determinations of the location of the vehicle (In paragraph [0028], Holz teaches simultaneously estimating the pose of the robot while filtering out false detections, and to this end, detected landmarks may be treated as candidate landmarks, and the candidate landmarks may be vetted to determine which of them correspond to mapped landmarks, and which correspond to false detections).
Holz is considered to be analogous to the claimed invention in that both pertain to localizing the position of a vehicle and filtering out outlier detections. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Holz with the method as disclosed by the combination of Siessegger, Tsukada, and Baalke, where false detections may make it difficult to align the candidate landmarks with corresponding mapped landmarks, and thus may hinder accurate pose estimates of the robot, and where vetting the candidate landmarks may advantageously achieve accurate pose estimates, as suggested by Holz in paragraph [0091], for example.
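For illustration of the threshold-and-deselect step recited in claim 21, a minimal Python sketch follows; the planar-distance comparison, threshold value, and function name are assumptions of the sketch, not the inlier/outlier vetting of Holz.

import math

def deselect_outlier(first_loc, second_loc, outlier_threshold_m=2.0):
    """Compare the second indication of location against the first; if their
    deviation exceeds the outlier threshold, drop the second indication from
    further determinations of the vehicle's location (returns None)."""
    deviation = math.hypot(second_loc[0] - first_loc[0],
                           second_loc[1] - first_loc[1])
    return None if deviation > outlier_threshold_m else second_loc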
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Harrison Heflin whose telephone number is (571)272-5629. The examiner can normally be reached Monday - Friday, 1:00PM - 10:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry can be reached at 571-272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HARRISON HEFLIN/ Examiner, Art Unit 3665
/HUNTER B LONSBERRY/ Supervisory Patent Examiner, Art Unit 3665