DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/24/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 16, 26-27, and 29 are objected to because of the following informalities:
In each of claims 16, 26, 27, and 29, “(ii) the determine target quantity of point groups” should read “(ii) the determined target quantity of point groups”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 16-29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 16, the claim recites several instances of the phrase “and/or”. However, the use of “and/or” in the claim renders the claim indefinite, as it is unclear whether the presented limitations are optional (i.e., reliant upon the “or” operator) or required (i.e., reliant upon the “and” operator). For the purposes of this examination, “and/or” is being interpreted under broadest reasonable interpretation as “or”. Further, the use of “and/or” allows for the selection of options which render the claim indefinite. For example, if only option (a) is selected (i.e., “determining a target quantity of point groups visible from the specified viewing point in the surrounding area…”) but option (b) (i.e., “determining a target quantity of point groups which are not visible from the specified viewing point in the surrounding area…”) is excluded as being optional, limitation (ii) (i.e., “providing… the determine target quantity of point groups not visible from the specified viewing point in the surrounding area.”) is rendered indefinite because the target quantity of point groups not visible from the specified viewing point is never determined. The opposite is also true; if only option (b) is selected, option (i) (i.e., “providing… the determined target quantity of point groups visible from the specified viewing point in the surrounding area”) is rendered indefinite because the target quantity of point groups visible from the specified viewing point is never determined. Further, claim 16 recites “and providing, for use for navigation and/or movement of the at least partially automated moving vehicle…” However, claim 16 already recites “determining a target quantity of point groups in a surrounding area for navigation for an at least partially automated moving vehicle”. Here, it is unclear whether the second invocation of “for navigation” refers to the same navigation as the first invocation, or to a different navigation.
Claims 17-25 are dependent upon claim 16 and therefore inherit the above-described deficiencies. Accordingly, claims 17-25 are rejected under similar reasoning as claim 16 above.
Regarding claim 17, the claim recites “the specified first and/or the second threshold value depends on a respective minimum or maximum spacing, corresponding to the respective minimum or maximum spacing multiplied by a respective factor.” However, use of the phrase “and/or” renders the claim indefinite, as it is unclear whether the presented limitations are optional (i.e., reliant upon the “or” operator) or required (i.e., reliant upon the “and” operator). For the purposes of this examination, “and/or” is being interpreted under broadest reasonable interpretation as “or”. Further, the phrasing of the claim makes it unclear whether the limitation “corresponding to the respective minimum or maximum spacing multiplied by a respective factor” is meant to apply to “a respective minimum or maximum spacing” or to “the specified first and/or the second threshold value”. The former would be particularly unclear, as it would make the definition of “a respective minimum or maximum spacing” self-referential (i.e., the respective minimum or maximum spacing corresponding to the respective minimum or maximum spacing).
Regarding claim 19, the claim recites “wherein a dimension of the cells or spacings of the grid points in a polar direction and/or a dimension of the cells or spacings of the grid points in an azimuth direction...” However, use of the phrase “and/or” renders the claim indefinite, as it is unclear whether the presented limitations are optional (i.e., reliant upon the “or” operator) or required (i.e., reliant upon the “and” operator). For the purposes of this examination, “and/or” is being interpreted under broadest reasonable interpretation as “or”.
Regarding claim 23, the claim recites “the surrounding area information includes image information and/or semantic information determined using an image sensor.” However, use of the phrase “and/or” renders the claim indefinite, as it is unclear whether the presented limitations are optional (i.e., reliant upon the “or” operator) or required (i.e., reliant upon the “and” operator). For the purposes of this examination, “and/or” is being interpreted under broadest reasonable interpretation as “or”.
Claim 24 is dependent upon claim 23 and therefore inherits the above-described deficiencies. Accordingly, claim 24 is rejected under similar reasoning as claim 23 above.
Regarding claim 26, the claim recites several instances of the phrase “and/or”. However, the use of “and/or” in the claim renders the claim indefinite, as it is unclear whether the presented limitations are optional (i.e., reliant upon the “or” operator) or required (i.e., reliant upon the “and” operator). For the purposes of this examination, “and/or” is being interpreted under broadest reasonable interpretation as “or”. Further, the use of “and/or” allows for the selection of options which render the claim indefinite. For example, if only option (a) is selected (i.e., “determining a target quantity of point groups visible from the specified viewing point in the surrounding area…”) but option (b) (i.e., “determining a target quantity of point groups which are not visible from the specified viewing point in the surrounding area…”) is excluded as being optional, limitation (ii) (i.e., “providing… the determine target quantity of point groups not visible from the specified viewing point in the surrounding area.”) is rendered indefinite because the target quantity of point groups not visible from the specified viewing point is never determined. The opposite is also true; if only option (b) is selected, option (i) (i.e., “providing… the determined target quantity of point groups visible from the specified viewing point in the surrounding area”) is rendered indefinite because the target quantity of point groups visible from the specified viewing point is never determined. Further, claim 26 recites “and provide, for use for navigation and/or movement of the at least partially automated moving vehicle…” However, claim 26 already recites “determine a target quantity of point groups in a surrounding area for navigation for an at least partially automated moving vehicle”. Here, it is unclear whether the second invocation of “for navigation” refers to the same navigation as the first invocation, or to a different navigation.
Regarding claim 27, the claim recites several instances of the phrase “and/or”. However, the use of “and/or” in the claim renders the claim indefinite, as it is unclear whether the presented limitations are optional (i.e., reliant upon the “or” operator) or required (i.e., reliant upon the “and” operator). For the purposes of this examination, “and/or” is being interpreted under broadest reasonable interpretation as “or”. Further, the use of “and/or” allows for the selection of options which render the claim indefinite. For example, if only option (a) is selected (i.e., “determining a target quantity of point groups visible from the specified viewing point in the surrounding area…”) but option (b) (i.e., “determining a target quantity of point groups which are not visible from the specified viewing point in the surrounding area…”) is excluded as being optional, limitation (ii) (i.e., “providing… the determine target quantity of point groups not visible from the specified viewing point in the surrounding area.”) is rendered indefinite because the target quantity of point groups not visible from the specified viewing point is never determined. The opposite is also true; if only option (b) is selected, option (i) (i.e., “providing… the determined target quantity of point groups visible from the specified viewing point in the surrounding area”) is rendered indefinite because the target quantity of point groups visible from the specified viewing point is never determined.
Claim 28 is dependent upon claim 27 and therefore inherits the above-described deficiencies. Accordingly, claim 28 is rejected under similar reasoning as claim 27 above.
Regarding claim 29, the claim recites several instances of the phrase “and/or”. However, the use of “and/or” in the claim renders the claim indefinite, as it is unclear whether the presented limitations are optional (i.e., reliant upon the “or” operator) or required (i.e., reliant upon the “and” operator). For the purposes of this examination, “and/or” is being interpreted under broadest reasonable interpretation as “or”. Further, the use of “and/or” allows for the selection of options which render the claim indefinite. For example, if only option (a) is selected (i.e., “determining a target quantity of point groups visible from the specified viewing point in the surrounding area…”) but option (b) (i.e., “determining a target quantity of point groups which are not visible from the specified viewing point in the surrounding area…”) is excluded as being optional, limitation (ii) (i.e., “providing… the determine target quantity of point groups not visible from the specified viewing point in the surrounding area.”) is rendered indefinite because the target quantity of point groups not visible from the specified viewing point is never determined. The opposite is also true; if only option (b) is selected, option (i) (i.e., “providing… the determined target quantity of point groups visible from the specified viewing point in the surrounding area”) is rendered indefinite because the target quantity of point groups visible from the specified viewing point is never determined. Further, claim 29 recites “and providing, for use for navigation and/or movement of the at least partially automated moving vehicle…” However, claim 29 already recites “determining a target quantity of point groups in a surrounding area for navigation for an at least partially automated moving vehicle”. Here, it is unclear whether the second invocation of “for navigation” refers to the same navigation as the first invocation, or to a different navigation.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 16-24 and 26-29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 16-24 and 26-29 are directed to determining a target quantity of point groups in a surrounding area, transforming the coordinates of the quantity of point groups into spherical coordinates in a target coordinate system, and determining a target quantity of point groups visible/not visible from a specified viewing point. Decision-making processes fall within a subject matter grouping of abstract ideas which the Courts have considered ineligible (mental processes, i.e., concepts performed in the human mind such as an observation, evaluation, judgment, or opinion). The claims do not integrate the abstract idea into a practical application, and do not include additional elements that provide an inventive concept (i.e., that are sufficient to amount to significantly more than the abstract idea).
Under step 1 of the Alice/Mayo framework, it must be considered whether the claims are directed to one of the four statutory classes of invention. In the instant case, claims 16-24 recite a method with at least one step. Claim 26 recites an apparatus for data processing. Claims 27-28 recite an apparatus. Claim 29 recites a computer-readable storage medium that is defined by the claim as non-transitory. Therefore, the claims are each directed to one of the four statutory categories of invention (process, machine, machine, manufacture).
Under step 2A of the Alice/Mayo framework, it must be considered whether the claims are “directed to” an abstract idea, that is, whether the claims recite an abstract idea and fail to integrate the abstract idea into a practical application.
Regarding independent claim 16, the claim sets forth the abstract idea of determining a target quantity of point groups in a surrounding area, transforming the coordinates of the quantity of point groups into spherical coordinates in a target coordinate system, and determining a target quantity of point groups visible/not visible from a specified viewing point in the following limitations:
determining a target quantity of point groups in a surrounding area for navigation
the method comprising the following steps: providing the quantity of point groups in the surrounding area with coordinates in an origin coordinate system;
transforming the coordinates of the quantity of point groups into spherical coordinates in a target coordinate system,
determining a spherically curved grid in spherical coordinates in the target coordinate system,
assigning each point group of at least some of the quantity to a cell in the grid or a grid point in the grid;
for each of at least some of the cells of the grid or for each of at least some of the grid points of the grid assigned to more than one point group: determining a minimum and/or maximum spacing based on spacings of the point groups from the origin of the target coordinate system;
performing: a) determining a target quantity of point groups visible from the specified viewing point in the surrounding area, including those point groups whose spacing from the origin of the target coordinate system per cell or grid point exceeds the minimum spacing by at most a specified first threshold value, and/or whose spacing from the origin of the target coordinate system per cell or grid point is less than the maximum spacing minus a specified second threshold value, and including those point groups which are assigned to cells or grid points to which only one point group is assigned, and/or b) determining a target quantity of point groups which are not visible from the specified viewing point in the surrounding area, comprising those point groups whose spacing from the origin of the target coordinate system per cell or grid point exceeds the minimum spacing by more than the specified first threshold value, and/or whose spacing from the origin of the target coordinate system per cell or grid point exceeds the maximum spacing minus the specified second threshold value;
The above-recited limitations establish an abstract decision-making process which utilizes mathematical concepts. A human being is capable of, mentally or with the assistance of pen and paper, determining a target quantity of point groups in a surrounding area for navigation (e.g., via visual observation and measurement/estimation), determining a target quantity of point groups visible from the specified viewing point in the surrounding area (e.g., via visual observation), assigning each point group of at least some of the quantity to a cell in the grid or a grid point in the grid (i.e., an abstract decision-making process), and determining a minimum and/or maximum spacing based on spacings of the point groups from the origin of the target coordinate system (i.e., an abstract decision-making process). The acts of providing the quantity of point groups in the surrounding area with coordinates in an origin coordinate system, transforming the coordinates of the quantity of point groups into spherical coordinates in a target coordinate system, and determining a spherically curved grid in spherical coordinates in the target coordinate system are mathematical concepts (i.e., converting between coordinate systems, transforming coordinates, determining a spherically curved grid in spherical coordinates). Such mathematical concepts have been identified as concepts falling within the grouping of abstract ideas (see MPEP 2106.04(a)(2)).
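By way of illustration only, the following sketch shows how the recited coordinate transformation, grid assignment, and visibility determination could be carried out; all identifiers and the threshold handling are the Examiner's illustrative assumptions and are not claim language:

import numpy as np

def cartesian_to_spherical(points):
    # Convert Nx3 Cartesian coordinates, already centered on the viewing
    # point (the origin of the target coordinate system), to spherical
    # coordinates (radius, polar angle, azimuth angle).
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    radius = np.sqrt(x * x + y * y + z * z)
    polar = np.arccos(np.clip(z / np.maximum(radius, 1e-12), -1.0, 1.0))
    azimuth = np.arctan2(y, x)
    return radius, polar, azimuth

def visible_point_groups(points, cell_polar, cell_azimuth, threshold1):
    # Option (a) of the claim: assign each point group to a cell of a
    # spherically curved grid and keep those whose spacing from the origin
    # exceeds the per-cell minimum spacing by at most threshold1.
    radius, polar, azimuth = cartesian_to_spherical(points)
    cells = np.stack([np.floor(polar / cell_polar),
                      np.floor(azimuth / cell_azimuth)], axis=1)
    visible = np.zeros(len(points), dtype=bool)
    for cell in np.unique(cells, axis=0):
        members = np.flatnonzero((cells == cell).all(axis=1))
        if len(members) == 1:
            visible[members] = True  # a sole occupant of a cell is visible
        else:
            r_min = radius[members].min()  # minimum spacing for this cell
            visible[members] = radius[members] <= r_min + threshold1
    return visible

The complementary determination of option (b) (point groups not visible from the specified viewing point) would follow as the negation of the returned mask.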
Claim 16 does recite additional elements:
…for an at least partially automated moving vehicle,
the point groups being visible or not visible from a specified viewing point in the surrounding area, from a quantity of point groups,
wherein each point group includes one or more points,
wherein the viewing point in the surrounding area lies in an origin of the target coordinate system;
wherein the grid includes grid points and/or cells;
and providing, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area.
These additional elements merely serve to embellish upon the particular technological environment of the claimed invention and provide further detail regarding the mathematical concepts discussed above (e.g., by providing further specificity with respect to the number of points, where the viewing point lies, and what the grid includes). The claimed features directed towards providing, for use for navigation and/or movement of the at least partially automated moving vehicle, likewise embellish upon the particular technological environment but fail to positively recite navigating or controlling movement of the at least partially automated moving vehicle; information is merely being provided to the vehicle.
Accordingly, the Examiner concludes that the claim fails to integrate the abstract idea into a practical application, and is therefore “directed to” the abstract idea.
Under step 2B of the Alice/Mayo framework, it must finally be considered whether the claim includes any additional element or combination of elements that provide an inventive concept (i.e., whether the additional element or elements amount to significantly more than the abstract idea). In the instant case, the additional elements, considered both individually and as an ordered combination, merely generally link the use of the judicial exception to a particular technological environment and append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(f)). The act of providing the determined quantity/quantities of point groups to the at least partially automated moving vehicle for use for navigation and/or movement amounts to reciting insignificant post-solution activity, as the information is merely claimed as being provided to the vehicle and neither controlled navigation nor controlled movement of the vehicle is positively recited (see MPEP 2106.05(g)). Accordingly, the Examiner asserts that the limitations do not provide an inventive concept, and the claim is ineligible for patent.
Independent claims 26, 27, and 29 are parallel in scope to claim 16 and are ineligible for similar reasons.
Regarding claim 17, which sets forth:
the specified first and/or the second threshold value depends on a respective minimum or maximum spacing, corresponding to the respective minimum or maximum spacing multiplied by a respective factor.
Such a recitation merely embellishes upon the abstract idea of determining a target quantity of point groups visible/not visible from the specified viewing point. The act of multiplying the respective minimum or maximum spacing by a respective factor is a mathematical concept falling within the grouping of abstract ideas (see MPEP 2106.04(a)(2)). As such, it does not integrate the abstract idea into a practical application, and does not provide an inventive concept. Accordingly, the claim does not confer eligibility on the claimed invention and is ineligible for similar reasons to claim 16.
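For illustration only, under the “or” interpretation applied above, the relationship recited in claim 17 reduces to scaling the respective spacing by a factor, as in the following sketch; the factor values are the Examiner's illustrative assumptions, not claim text:

def thresholds(r_min, r_max, factor_min=0.05, factor_max=0.05):
    # Illustrative reading of claim 17: each threshold value depends on the
    # respective spacing, corresponding to that spacing multiplied by a
    # respective factor (the 0.05 factors are assumptions for illustration).
    threshold1 = factor_min * r_min  # first threshold tracks the minimum spacing
    threshold2 = factor_max * r_max  # second threshold tracks the maximum spacing
    return threshold1, threshold2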
Regarding claim 18, which sets forth:
the determining of the grid in spherical coordinates in the target coordinate system includes: determining or using the coordinates of the quantity of point groups in the origin coordinate system in spherical coordinates;
determining a frequency distribution of the point groups in a polar direction;
determining a dimension of the cells or of spacings of the grid points in the polar direction based on a number of local maxima of the frequency distribution;
and determining a dimension of the cells or the spacings of the grid points in an azimuth direction based on a number of point groups per local maximum.
Such a recitation merely embellishes upon the abstract idea of determining the grid in spherical coordinates in the target coordinate system. The acts of determining a frequency distribution, determining a dimension of the cells or spacing of the grid points based on a number of local maxima of the frequency distribution, and determining a dimension of the cells or the spacings of the grid points are mathematical concepts and abstract decision-making processes falling within the grouping of abstract ideas (see MPEP 2106.04(a)(2)). As such, it does not integrate the abstract idea into a practical application, and does not provide an inventive concept. Accordingly, the claim does not confer eligibility on the claimed invention and is ineligible for similar reasons to claim 16.
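For illustration only, the grid-dimensioning steps recited in claim 18 could be performed as in the following sketch; the binning scheme and the dimension formulas are the Examiner's assumptions, not claim language:

import numpy as np

def grid_dimensions(polar, n_bins=180):
    # Frequency distribution of the point groups in the polar direction.
    hist, _ = np.histogram(polar, bins=n_bins, range=(0.0, np.pi))
    # Interior local maxima of the frequency distribution (strict rise on the
    # left, non-strict on the right, to avoid double-counting plateaus).
    peaks = np.flatnonzero((hist[1:-1] > hist[:-2]) & (hist[1:-1] >= hist[2:])) + 1
    # Cell dimension in the polar direction based on the number of local maxima.
    cell_polar = np.pi / max(len(peaks), 1)
    # Cell dimension in the azimuth direction based on the number of point
    # groups per local maximum (mean occupancy of the peak bins).
    per_peak = hist[peaks].mean() if len(peaks) else float(len(polar))
    cell_azimuth = 2.0 * np.pi / max(per_peak, 1.0)
    return cell_polar, cell_azimuth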
Regarding claim 19, which sets forth:
wherein a dimension of the cells or spacings of the grid points in a polar direction and/or a dimension of the cells or spacings of the grid points in an azimuth direction is determined based on a distance of the specified viewing point from an origin of the origin coordinate system.
Such a recitation merely embellishes upon the abstract idea of determining the grid in spherical coordinates in the target coordinate system. The act of determining a dimension of the cells or spacing of the grid points in an azimuth or polar direction is a mathematical concept falling within the grouping of abstract ideas (see MPEP 2106.04(a)(2)). As such, it does not integrate the abstract idea into a practical application, and does not provide an inventive concept. Accordingly, the claim does not confer eligibility on the claimed invention and is ineligible for similar reasons to claim 16.
Regarding claim 20, which sets forth:
the quantity of point groups in the surrounding area is determined using distance measurement via a lidar sensor.
Such a recitation merely embellishes upon the abstract idea of determining the quantity of point groups in the surrounding area by further specifying that the quantity of point groups is determined using a lidar sensor. Such a recitation merely generally links the use of the judicial exception to a particular technological environment and appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(f)). A human being is capable of determining a quantity of point groups in the surrounding area using distance measurements obtained by a lidar sensor. As such, it does not integrate the abstract idea into a practical application, and does not provide an inventive concept. Accordingly, the claim does not confer eligibility on the claimed invention and is ineligible for similar reasons to claim 16.
Regarding claim 21, which sets forth:
a position of the lidar sensor lies in an origin of the origin coordinate system.
Such a recitation merely embellishes upon the technological environment of the claimed invention by specifying that a position of the lidar sensor lies in an origin of the origin coordinate system. Such a recitation merely generally links the use of the judicial exception to a particular technological environment and appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(f)). As such, it does not integrate the abstract idea into a practical application, and does not provide an inventive concept. Accordingly, the claim does not confer eligibility on the claimed invention and is ineligible for similar reasons to claim 16.
Regarding claim 22, which sets forth:
providing surrounding area information with coordinates in the target coordinate system;
and associating at least some of the point groups of the target quantity of point groups which are visible from a specified viewing point in the surrounding area with the surrounding area information.
Such a recitation introduces the additional abstract ideas of providing surrounding area information with coordinates in the target coordinate system, and associating at least some of the point groups with the surrounding area information. Such an arrangement amounts to an abstract decision-making process capable of being performed mentally or with the assistance of pen and paper. Providing coordinates in a target coordinate system is a mathematical concept falling within the grouping of abstract ideas (see MPEP 2106.04(a)(2)). As such, it does not integrate the abstract idea into a practical application, and does not provide an inventive concept. Accordingly, the claim does not confer eligibility on the claimed invention and is ineligible for similar reasons to claim 16.
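By way of illustration, the recited association could amount to attaching, to each visible point group, the surrounding area information (e.g., a label) at the matching spherical coordinate; the lookup-table mapping below is an illustrative assumption, not claim language:

import math

def associate(visible_points, labels_by_cell, cell_deg=0.5):
    # Associate each visible point group (given as spherical coordinates in
    # the target coordinate system) with surrounding area information keyed
    # by a pan/tilt cell; labels_by_cell is an assumed, pre-built lookup table.
    associated = []
    for radius, polar, azimuth in visible_points:
        key = (int(math.degrees(polar) // cell_deg),
               int(math.degrees(azimuth) // cell_deg))
        associated.append(((radius, polar, azimuth), labels_by_cell.get(key)))
    return associated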
Regarding claim 23, which sets forth:
the surrounding area information includes image information and/or semantic information determined using an image sensor.
Such a recitation merely embellishes upon the technological environment of the claimed invention by specifying that the surrounding area information includes image information and/or semantic information determined using an image sensor. Such a recitation merely generally links the use of the judicial exception to a particular technological environment and appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(f)). As such, it does not integrate the abstract idea into a practical application, and does not provide an inventive concept. Accordingly, the claim does not confer eligibility on the claimed invention and is ineligible for similar reasons to claim 16.
Regarding claim 24, which sets forth:
the surrounding area information is represented as seen from the specified viewing point,
and wherein a position of the image sensor lies in the origin of the target coordinate system.
Such a recitation merely embellishes upon the technological environment of the claimed invention by specifying that the surrounding area information is represented as seen from the specified viewing point, wherein a position of the image sensor lies in the origin of the target coordinate system. Such a recitation merely generally links the use of the judicial exception to a particular technological environment and appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(f)). As such, it does not integrate the abstract idea into a practical application, and does not provide an inventive concept. Accordingly, the claim does not confer eligibility on the claimed invention and is ineligible for similar reasons to claim 16.
Regarding claim 25, the claim recites “determining control information for moving a device based on the surrounding area information and the point groups of the target quantity associated with the surrounding area information; and providing the control information and controlling the device based on the control information.” Here, the device is controlled based on control information that is itself determined based on the surrounding area information and the point groups of the target quantity associated with the surrounding area information. In other words, the claim integrates the abstract idea(s) into the practical application of controlling the device based on the determined control information. Therefore, claim 25 is not rejected under 35 U.S.C. 101.
Regarding claim 28, which sets forth:
the device is: (i) an at least partially automated moving vehicle including a passenger transportation vehicle or a goods transportation vehicle, and/or (ii) a robot, and/or (iii) a drone.
Such a recitation merely embellishes upon the technological environment of the claimed invention by specifying that the device is a particular kind of vehicle (e.g., a passenger transportation vehicle, a goods transportation vehicle, a robot, or a drone). Such a recitation merely generally links the use of the judicial exception to a particular technological environment and appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(f)). As such, it does not integrate the abstract idea into a practical application, and does not provide an inventive concept. Accordingly, the claim does not confer eligibility on the claimed invention and is ineligible for similar reasons to claim 16.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 16 and 18-29 are rejected under 35 U.S.C. 103 as being unpatentable over Macrae (US 2019/0197711 A1) in view of Eade et al. (US 2019/0303457 A1), hereinafter Eade.
Regarding claim 16, Macrae teaches a method, comprising:
determining a target quantity of point groups in a surrounding area…
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
the point groups being visible or not visible from a specified viewing point in the surrounding area, from a quantity of point groups,
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
wherein each point group includes one or more points,
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
the method comprising the following steps: providing the quantity of point groups in the surrounding area with coordinates in an origin coordinate system;
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position."
transforming the coordinates of the quantity of point groups into spherical coordinates in a target coordinate system,
Macrae teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
wherein the viewing point in the surrounding area lies in an origin of the target coordinate system;
Macrae teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
determining a spherically curved grid in spherical coordinates in the target coordinate system,
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
wherein the grid includes grid points and/or cells;
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
assigning each point group of at least some of the quantity to a cell in the grid or a grid point in the grid;
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations." Macrae further teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae even further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
for each of at least some of the cells of the grid or for each of at least some of the grid points of the grid assigned to more than one point group: determining a minimum and/or maximum spacing based on spacings of the point groups from the origin of the target coordinate system;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded."
performing: a) determining a target quantity of point groups visible from the specified viewing point in the surrounding area, including those point groups whose spacing from the origin of the target coordinate system per cell or grid point exceeds the minimum spacing by at most a specified first threshold value, and/or whose spacing from the origin of the target coordinate system per cell or grid point is less than the maximum spacing minus a specified second threshold value, and including those point groups which are assigned to cells or grid points to which only one point group is assigned, and/or b) determining a target quantity of point groups which are not visible from the specified viewing point in the surrounding area, comprising those point groups whose spacing from the origin of the target coordinate system per cell or grid point exceeds the minimum spacing by more than the specified first threshold value, and/or whose spacing from the origin of the target coordinate system per cell or grid point exceeds the maximum spacing minus the specified second threshold value;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded."
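For clarity of record, the bucket-based occlusion culling that Macrae describes in [0057]-[0062] may be summarized in the following sketch; the 0.5-degree bucket size follows Macrae's own example, while all identifiers are the Examiner's illustrative assumptions:

import math

def occlusion_cull(vertices, camera, bucket_deg=0.5):
    # Sketch of Macrae's occlusion culling: bucket each vertex by its pan/tilt
    # angles relative to the camera position and, when a bucket already
    # contains a vertex, retain only the vertex closest to the camera.
    buckets = {}
    for v in vertices:
        dx, dy, dz = v[0] - camera[0], v[1] - camera[1], v[2] - camera[2]
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        pan = math.degrees(math.atan2(dy, dx))
        tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
        key = (int(pan // bucket_deg), int(tilt // bucket_deg))
        if key not in buckets or dist < buckets[key][0]:
            buckets[key] = (dist, v)  # keep the closest vertex per bucket
    return [v for _, v in buckets.values()]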
However, Macrae does not outright teach determining point groups in a surrounding area for navigation for an at least partially automated moving vehicle, and providing, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area. Eade teaches localization of an autonomous vehicle, comprising:
determining… point groups in a surrounding area for navigation for an at least partially automated moving vehicle,
Eade teaches ([0180]): "Sequence 680 of FIG. 23 may be executed in the primary control loop for the vehicle, e.g., in block 622 of FIG. 20, and may correspond generally to the map pose functionality 200 of the localization subsystem 152 of FIG. 2." Eade further teaches ([0181]): "Sequence 680 of FIG. 23 therefore begins in block 682 by assembling sensor data (e.g., LIDAR data) into a point cloud to represent the surfaces that are currently visible from the vehicle." Eade even further teaches ([0062]): "In the illustrated implementation, autonomous control over vehicle 100 (which may include various degrees of autonomy as well as selectively autonomous functionality) is primarily implemented in a primary vehicle control system 120, which may include one or more processors 122 and one or more memories 124, with each processor 122 configured to execute program code instructions 126 stored in a memory 124." FIG. 20, included below, demonstrates that trajectory planning is provided as part of the primary control loop for the vehicle, which is described above as utilizing the determined target quantity of point groups visible from the specified viewing point in the surrounding area.
[FIG. 20 of Eade (media_image1.png, greyscale)]
and providing, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area.
Eade teaches ([0180]): "Sequence 680 of FIG. 23 may be executed in the primary control loop for the vehicle, e.g., in block 622 of FIG. 20, and may correspond generally to the map pose functionality 200 of the localization subsystem 152 of FIG. 2." Eade further teaches ([0181]): "Sequence 680 of FIG. 23 therefore begins in block 682 by assembling sensor data (e.g., LIDAR data) into a point cloud to represent the surfaces that are currently visible from the vehicle." FIG. 20, included above, demonstrates that trajectory planning is provided as part of the primary control loop for the vehicle, which is described above as utilizing the determined target quantity of point groups visible from the specified viewing point in the surrounding area.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Macrae to incorporate the teachings of Eade to provide determining point groups in a surrounding area for navigation for an at least partially automated moving vehicle, and providing, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area. Macrae and Eade are each directed towards similar pursuits in the field of point cloud processing. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Eade, as doing so beneficially improves vehicle localization by providing a more precise and accurate representation of the environment surrounding the autonomous vehicle and the vehicle's location and orientation within that environment, as recognized by Eade (see at least [0085]). The implementation of Eade provides the further benefit of allowing for perceiving dynamic objects such as pedestrians and other vehicles within the environment, as recognized by Eade (see at least [0088]).
Regarding claim 18, Macrae and Eade teach the aforementioned limitations of claim 16. Macrae further teaches:
the determining of the grid in spherical coordinates in the target coordinate system includes: determining or using the coordinates of the quantity of point groups in the origin coordinate system in spherical coordinates;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
determining a frequency distribution of the point groups in a polar direction;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate." The Examiner has interpreted the determination of whether an assigned bucket already contains a vertex as the determination of a frequency distribution of the point groups in a polar direction, particularly since the vertex closest to the camera (i.e., closest in the polar direction) is retained.
determining a dimension of the cells or of spacings of the grid points in the polar direction based on a number of local maxima of the frequency distribution;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate." The Examiner has interpreted the determination of which vertex is nearest to the camera position as determining spacings of the grid points in the polar direction, wherein this determination is made in response to the determination that the assigned bucket already contains a vertex (i.e., based on a number of local maxima of the frequency distribution).
and determining a dimension of the cells or the spacings of the grid points in an azimuth direction based on a number of point groups per local maximum.
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
Regarding claim 19, Macrae and Eade teach the aforementioned limitations of claim 16. Macrae further teaches:
wherein a dimension of the cells or spacings of the grid points in a polar direction and/or a dimension of the cells or spacings of the grid points in an azimuth direction is determined based on a distance of the specified viewing point from an origin of the origin coordinate system.
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
Regarding claim 20, Macrae and Eade teach the aforementioned limitations of claim 16. However, Macrae does not outright teach that the quantity of point groups in the surrounding area is determined using distance measurement via a lidar sensor. Eade further teaches:
the quantity of point groups in the surrounding area is determined using distance measurement via a lidar sensor.
Eade teaches ([0087]): "Localization subsystem 152 is generally responsible for providing localization data suitable for localizing a vehicle within its environment. Localization subsystem 152, for example, may determine a map pose 200 (generally a position, and in some instances orientation and/or speed) of the autonomous vehicle within its surrounding environment. As will become more apparent below, map pose 200 may be determined in part through the use of LIDAR 136, as well as using mapping data provided by relative atlas system 160" Eade further teaches ([0180]): "Sequence 680 of FIG. 23 may be executed in the primary control loop for the vehicle, e.g., in block 622 of FIG. 20, and may correspond generally to the map pose functionality 200 of the localization subsystem 152 of FIG. 2." Eade even further teaches ([0181]): " Sequence 680 of FIG. 23 therefore begins in block 682 by assembling sensor data (e.g., LIDAR data) into a point cloud to represent the surfaces that are currently visible from the vehicle." Eade still further teaches ([0062]): "In the illustrated implementation, autonomous control over vehicle 100 (which may include various degrees of autonomy as well as selectively autonomous functionality) is primarily implemented in a primary vehicle control system 120, which may include one or more processors 122 and one or more memories 124, with each processor 122 configured to execute program code instructions 126 stored in a memory 124."
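Eade's block 682 (assembling LIDAR data into a point cloud) might be sketched as follows; the (range, pan, tilt) return format and the function name are assumptions of this illustration, not Eade's disclosed implementation:

```python
import numpy as np

def assemble_point_cloud(returns_rpt):
    # Hypothetical sketch of Eade's block 682 ([0181]): convert LIDAR
    # returns given as (range, pan, tilt) rows into Cartesian points in
    # the sensor frame, representing the currently visible surfaces.
    r, pan, tilt = returns_rpt[:, 0], returns_rpt[:, 1], returns_rpt[:, 2]
    x = r * np.cos(tilt) * np.cos(pan)
    y = r * np.cos(tilt) * np.sin(pan)
    z = r * np.sin(tilt)
    return np.stack([x, y, z], axis=1)  # (N, 3) point cloud
```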
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Macrae and Eade to further incorporate the teachings of Eade to provide that the quantity of point groups in the surrounding area is determined using distance measurement via a lidar sensor. Macrae and Eade are each directed towards similar pursuits in the field of point cloud processing. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Eade, as doing so beneficially improves vehicle localization by providing a more precise and accurate representation of the environment surrounding the autonomous vehicle and the vehicle's location and orientation within that environment, as recognized by Eade (see at least [0085]). The implementation of Eade provides the further benefit of allowing for perceiving dynamic objects such as pedestrians and other vehicles within the environment, as recognized by Eade (see at least [0088]).
Regarding claim 21, Macrae and Eade teach the aforementioned limitations of claim 20. Macrae further teaches:
a position of the [sensor] lies in an origin of the origin coordinate system.
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position."
However, Macrae does not outright teach that the sensor is a lidar sensor. Eade further teaches:
…the lidar sensor…
Eade teaches ([0087]): "Localization subsystem 152 is generally responsible for providing localization data suitable for localizing a vehicle within its environment. Localization subsystem 152, for example, may determine a map pose 200 (generally a position, and in some instances orientation and/or speed) of the autonomous vehicle within its surrounding environment. As will become more apparent below, map pose 200 may be determined in part through the use of LIDAR 136, as well as using mapping data provided by relative atlas system 160"
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Macrae and Eade to further incorporate the teachings of Eade to provide that the sensor is a lidar sensor. Macrae and Eade are each directed towards similar pursuits in the field of point cloud processing. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Eade, as doing so beneficially improves vehicle localization by providing a more precise and accurate representation of the environment surrounding the autonomous vehicle and the vehicle's location and orientation within that environment, as recognized by Eade (see at least [0085]). The implementation of Eade provides the further benefit of allowing for perceiving dynamic objects such as pedestrians and other vehicles within the environment, as recognized by Eade (see at least [0088]).
Regarding claim 22, Macrae and Eade teach the aforementioned limitations of claim 16. Macrae further teaches:
providing surrounding area information with coordinates in the target coordinate system;
Macrae teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
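The projection described in [0064] can be illustrated with a short conversion from Cartesian coordinates, relative to the viewing point, into (radius, pan, tilt); the variable names are illustrative assumptions:

```python
import math

def to_spherical(vertex, viewpoint=(0.0, 0.0, 0.0)):
    # Illustration of Macrae [0064]: describe a vertex by radius, pan and
    # tilt relative to the viewpoint; the radius doubles as the 3D distance
    # stored against each spherical coordinate.
    dx, dy, dz = (vertex[i] - viewpoint[i] for i in range(3))
    radius = math.sqrt(dx * dx + dy * dy + dz * dz)
    pan = math.atan2(dy, dx)
    tilt = math.atan2(dz, math.hypot(dx, dy))
    return radius, pan, tilt
```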
and associating at least some of the point groups of the target quantity of point groups which are visible from a specified viewing point in the surrounding area with the surrounding area information.
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
Regarding claim 23, Macrae and Eade teach the aforementioned limitations of claim 22. Macrae further teaches:
the surrounding area information includes image information and/or semantic information determined using an image sensor.
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
Regarding claim 24, Macrae and Eade teach the aforementioned limitations of claim 23. Macrae further teaches:
the surrounding area information is represented as seen from the specified viewing point,
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
and wherein a position of the image sensor lies in the origin of the target coordinate system.
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position."
Regarding claim 25, Macrae and Eade teach the aforementioned limitations of claim 22. However, Macrae does not outright teach determining control information for moving a device based on the surrounding area information and the point groups of the target quantity associated with the surrounding area information, and providing the control information and controlling the device based on the control information. Eade further teaches:
determining control information for moving a device based on the surrounding area information and the point groups of the target quantity associated with the surrounding area information;
Eade teaches ([0087]): "Localization subsystem 152 is generally responsible for providing localization data suitable for localizing a vehicle within its environment. Localization subsystem 152, for example, may determine a map pose 200 (generally a position, and in some instances orientation and/or speed) of the autonomous vehicle within its surrounding environment. As will become more apparent below, map pose 200 may be determined in part through the use of LIDAR 136, as well as using mapping data provided by relative atlas system 160" Eade further teaches ([0088]): "Localization data is provided by localization subsystem 152 to each of perception, planning and control subsystems 154, 156, 158. Perception subsystem 154, for example, is principally responsible for perceiving dynamic objects such as pedestrians and other vehicles within the environment, and may utilize LIDAR tracking functionality 208, RADAR tracking functionality 210 and camera tracking functionality 212 to identify and track dynamic objects using LIDAR 136, RADAR 134 and camera 138, respectively." Eade even further teaches ([0180]): "Sequence 680 of FIG. 23 may be executed in the primary control loop for the vehicle, e.g., in block 622 of FIG. 20, and may correspond generally to the map pose functionality 200 of the localization subsystem 152 of FIG. 2." Eade still further teaches ([0181]): " Sequence 680 of FIG. 23 therefore begins in block 682 by assembling sensor data (e.g., LIDAR data) into a point cloud to represent the surfaces that are currently visible from the vehicle."
and providing the control information and controlling the device based on the control information.
Eade teaches ([0089]): "Route planning functionality 216 may be used to plan a high level route for the vehicle based upon the static and dynamic objects in the immediate environment and the desired destination, while mid-level optimization functionality 218 may be used to make decisions based on traffic controls and likely motion of other actors in the scene, and both may utilize mapping data from relative atlas system 160 to perform their functions. Trajectory planning functionality 220 may generate a trajectory for the vehicle over some time frame (e.g., several seconds), which is then passed on to control subsystem 158 to convert the desired trajectory into trajectory commands 222 suitable for controlling the various vehicle controls 112-116 in control system 110, with localization data also provided to control subsystem 158 to enable the control subsystem to issue appropriate commands to implement the desired trajectory as the location of the vehicle changes over the time frame."
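The planning-to-control flow quoted from [0089] can be summarized in a hypothetical skeleton; the subsystem interfaces below are assumptions of this sketch and do not appear in Eade:

```python
def primary_control_loop(localization, perception, planner, control):
    # Hypothetical skeleton of Eade's flow ([0089]): localize, perceive
    # dynamic objects, plan a short-horizon trajectory, then convert it
    # into commands for the vehicle controls.
    pose = localization.map_pose()                   # localization data
    actors = perception.track_dynamic_objects(pose)  # pedestrians, vehicles
    trajectory = planner.plan(pose, actors)          # several-second horizon
    for command in control.to_trajectory_commands(trajectory, pose):
        command.issue()                              # trajectory commands
```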
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Macrae and Eade to further incorporate the teachings of Eade to provide determining control information for moving a device based on the surrounding area information and the point groups of the target quantity associated with the surrounding area information, and providing the control information and controlling the device based on the control information. Macrae and Eade are each directed towards similar pursuits in the field of point cloud processing. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Eade, as doing so beneficially improves vehicle localization by providing a more precise and accurate representation of the environment surrounding the autonomous vehicle and the vehicle's location and orientation within that environment, as recognized by Eade (see at least [0085]). The implementation of Eade provides the further benefit of allowing for perceiving dynamic objects such as pedestrians and other vehicles within the environment, as recognized by Eade (see at least [0088]).
Regarding claim 26, Macrae teaches a system, comprising:
…determine a target quantity of point groups in a surrounding area…
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
the point groups being visible or not visible from a specified viewing point in the surrounding area, from a quantity of point groups,
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
wherein each point group includes one or more points,
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
the system configured to: provide the quantity of point groups in the surrounding area with coordinates in an origin coordinate system;
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position."
transform the coordinates of the quantity of point groups into spherical coordinates in a target coordinate system,
Macrae teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
wherein the viewing point in the surrounding area lies in an origin of the target coordinate system;
Macrae teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
determine a spherically curved grid in spherical coordinates in the target coordinate system,
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
wherein the grid includes grid points and/or cells;
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
assign each point group of at least some of the quantity to a cell in the grid or a grid point in the grid;
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations." Macrae further teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae even further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
for each of at least some of the cells of the grid or for each of at least some of the grid points of the grid assigned to more than one point group: determine a minimum and/or maximum spacing based on spacings of the point groups from the origin of the target coordinate system;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded."
perform: a) determining a target quantity of point groups visible from the specified viewing point in the surrounding area, including those point groups whose spacing from the origin of the target coordinate system per cell or grid point exceeds the minimum spacing by at most a specified first threshold value, and/or whose spacing from the origin of the target coordinate system per cell or grid point is less than the maximum spacing minus a specified second threshold value, and including those point groups which are assigned to cells or grid points to which only one point group is assigned, and/or b) determining a target quantity of point groups which are not visible from the specified viewing point in the surrounding area, comprising those point groups whose spacing from the origin of the target coordinate system per cell or grid point exceeds the minimum spacing by more than the specified first threshold value, and/or whose spacing from the origin of the target coordinate system per cell or grid point exceeds the maximum spacing minus the specified second threshold value;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded."
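For clarity, limitations a) and b) can be sketched under an "or" reading of the claim's "and/or" operators; the data layout, threshold names, and function name are assumptions of this sketch, not claim language:

```python
def partition_point_groups(cells, t1, t2):
    # cells: mapping from a cell or grid point to a list of
    # (point_group, spacing_from_origin) pairs; t1 and t2 stand in for
    # the specified first and second threshold values.
    visible, not_visible = [], []
    for groups in cells.values():
        if len(groups) == 1:
            visible.append(groups[0][0])  # singleton cells count as visible
            continue
        d_min = min(d for _, d in groups)
        d_max = max(d for _, d in groups)
        for group, d in groups:
            # a) exceeds the minimum spacing by at most t1, or lies below
            #    the maximum spacing minus t2 -> visible
            if d - d_min <= t1 or d < d_max - t2:
                visible.append(group)
            # b) otherwise -> not visible from the viewing point
            else:
                not_visible.append(group)
    return visible, not_visible
```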
However, Macrae does not outright teach data processing configured to determine a target quantity of point groups in a surrounding area for navigation for an at least partially automated moving vehicle, and provide, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area. Eade teaches localization of an autonomous vehicle, comprising:
data processing configured to determine a target quantity of point groups in a surrounding area for navigation for an at least partially automated moving vehicle,
Eade teaches ([0180]): "Sequence 680 of FIG. 23 may be executed in the primary control loop for the vehicle, e.g., in block 622 of FIG. 20, and may correspond generally to the map pose functionality 200 of the localization subsystem 152 of FIG. 2." Eade further teaches ([0181]): "Sequence 680 of FIG. 23 therefore begins in block 682 by assembling sensor data (e.g., LIDAR data) into a point cloud to represent the surfaces that are currently visible from the vehicle." Eade even further teaches ([0062]): "In the illustrated implementation, autonomous control over vehicle 100 (which may include various degrees of autonomy as well as selectively autonomous functionality) is primarily implemented in a primary vehicle control system 120, which may include one or more processors 122 and one or more memories 124, with each processor 122 configured to execute program code instructions 126 stored in a memory 124." FIG. 20, included above, demonstrates that trajectory planning is provided as part of the primary control loop for the vehicle, which is described above as utilizing the determined target quantity of point groups visible from the specified viewing point in the surrounding area.
and provide, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area.
Eade teaches ([0180]): "Sequence 680 of FIG. 23 may be executed in the primary control loop for the vehicle, e.g., in block 622 of FIG. 20, and may correspond generally to the map pose functionality 200 of the localization subsystem 152 of FIG. 2." Eade further teaches ([0181]): " Sequence 680 of FIG. 23 therefore begins in block 682 by assembling sensor data (e.g., LIDAR data) into a point cloud to represent the surfaces that are currently visible from the vehicle. " FIG. 20, included above, demonstrates that trajectory planning is provided as part of the primary control loop for the vehicle, which is described above as utilizing the determined target quantity of point groups visible from the specified viewing point in the surrounding area.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Macrae to incorporate the teachings of Eade to provide data processing configured to determine a target quantity of point groups in a surrounding area for navigation for an at least partially automated moving vehicle, and provide, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area. Macrae and Eade are each directed towards similar pursuits in the field of point cloud processing. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Eade, as doing so beneficially improves vehicle localization by providing a more precise and accurate representation of the environment surrounding the autonomous vehicle and the vehicle's location and orientation within that environment, as recognized by Eade (see at least [0085]). The implementation of Eade provides the further benefit of allowing for perceiving dynamic objects such as pedestrians and other vehicles within the environment, as recognized by Eade (see at least [0088]).
Regarding claim 27, Macrae teaches a device, comprising:
provide a quantity of point groups in a surrounding area,
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
[a quantity of point groups] with coordinates in an origin coordinate system;
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position."
obtain a target quantity of point groups which are visible or not visible from a specified viewing point in the surrounding area,
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
which has been determined from the quantity of points by: transforming the coordinates of the quantity of point groups into spherical coordinates in a target coordinate system,
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
wherein the viewing point in the surrounding area lies in an origin of the target coordinate system;
Macrae teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
determining a spherically curved grid in spherical coordinates in the target coordinate system,
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
wherein the grid includes grid points and/or cells;
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
assigning each point group of at least some of the quantity to a cell in the grid or a grid point in the grid;
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations." Macrae further teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae even further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
for each of at least some of the cells of the grid or for each of at least some of the grid points of the grid assigned to more than one point group: determining a minimum and/or maximum spacing based on spacings of the point groups from the origin of the target coordinate system;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded."
performing: a) determining a target quantity of point groups visible from the specified viewing point in the surrounding area, including those point groups whose spacing from the origin of the target coordinate system per cell or grid point exceeds the minimum spacing by at most a specified first threshold value, and/or whose spacing from the origin of the target coordinate system per cell or grid point is less than the maximum spacing minus a specified second threshold value, and including those point groups which are assigned to cells or grid points to which only one point group is assigned, and/or b) determining a target quantity of point groups which are not visible from the specified viewing point in the surrounding area, comprising those point groups whose spacing from the origin of the target coordinate system per cell or grid point exceeds the minimum spacing by more than the specified first threshold value, and/or whose spacing from the origin of the target coordinate system per cell or grid point exceeds the maximum spacing minus the specified second threshold value;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded."
However, Macrae does not outright teach providing, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area. Eade teaches localization of an autonomous vehicle, comprising:
and providing, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area.
Eade teaches ([0180]): "Sequence 680 of FIG. 23 may be executed in the primary control loop for the vehicle, e.g., in block 622 of FIG. 20, and may correspond generally to the map pose functionality 200 of the localization subsystem 152 of FIG. 2." Eade further teaches ([0181]): " Sequence 680 of FIG. 23 therefore begins in block 682 by assembling sensor data (e.g., LIDAR data) into a point cloud to represent the surfaces that are currently visible from the vehicle. " FIG. 20, included above, demonstrates that trajectory planning is provided as part of the primary control loop for the vehicle, which is described above as utilizing the determined target quantity of point groups visible from the specified viewing point in the surrounding area.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Macrae to incorporate the teachings of Eade to provide, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area. Macrae and Eade are each directed towards similar pursuits in the field of point cloud processing. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Eade, as doing so beneficially improves vehicle localization by providing a more precise and accurate representation of the environment surrounding the autonomous vehicle and the vehicle's location and orientation within that environment, as recognized by Eade (see at least [0085]). The implementation of Eade provides the further benefit of allowing for perceiving dynamic objects such as pedestrians and other vehicles within the environment, as recognized by Eade (see at least [0088]).
Regarding claim 28, Macrae and Eade teach the aforementioned limitations of claim 27. However, Macrae does not outright teach that the device is: (i) an at least partially automated moving vehicle including a passenger transportation vehicle or a goods transportation vehicle, and/or (ii) a robot, and/or (iii) a drone. Eade further teaches:
the device is: (i) an at least partially automated moving vehicle including a passenger transportation vehicle or a goods transportation vehicle, and/or (ii) a robot, and/or (iii) a drone.
Eade teaches ([0058]): "FIG. 1 illustrates an example autonomous vehicle 100 within which the various techniques disclosed herein may be implemented… Vehicle 100 may be implemented as any number of different types of vehicles, including vehicles capable of transporting people and/or cargo, and capable of traveling by land, by sea, by air, underground, undersea and/or in space, and it will be appreciated that the aforementioned components 102-116 can vary widely based upon the type of vehicle within which these components are utilized."
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Macrae and Eade to further incorporate the teachings of Eade to provide that the device is: (i) an at least partially automated moving vehicle including a passenger transportation vehicle or a goods transportation vehicle, and/or (ii) a robot, and/or (iii) a drone. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Eade, as doing so beneficially improves vehicle localization by providing a more precise and accurate representation of the environment surrounding the autonomous vehicle and the vehicle's location and orientation within that environment, as recognized by Eade (see at least [0085]). The implementation of Eade provides the further benefit of allowing for perceiving dynamic objects such as pedestrians and other vehicles within the environment, as recognized by Eade (see at least [0088]).
Regarding claim 29, Macrae teaches a non-transitory computer-readable storage medium on which is stored a computer program (“a data carrier provided with program information for causing a computer to carry out the foregoing method”, [0048]) for:
determining a target quantity of point groups…
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
the point groups being visible or not visible from a specified viewing point in the surrounding area, from a quantity of point groups,
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
wherein each point group includes one or more points,
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations."
the computer program, when executed by a computer, causing the computer to perform the following steps:
Macrae teaches ([0048]): "According to a fourth aspect of the present invention there is provided a data carrier provided with program information for causing a computer to carry out the foregoing method." Macrae further teaches ([0049]): "Embodiments the fourth aspect of the present invention may include one or more features of the first, or second aspects of the present invention or their embodiments. Similarly, embodiments of the first, second or third aspects of the present invention may include one or more features of the fourth aspect or its embodiment."
providing the quantity of point groups in the surrounding area with coordinates in an origin coordinate system;
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position."
transforming the coordinates of the quantity of point groups into spherical coordinates in a target coordinate system,
Macrae teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
wherein the viewing point in the surrounding area lies in an origin of the target coordinate system;
Macrae teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
determining a spherically curved grid in spherical coordinates in the target coordinate system,
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
wherein the grid includes grid points and/or cells;
Macrae teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
assigning each point group of at least some of the quantity to a cell in the grid or a grid point in the grid;
Macrae teaches ([0008]-[0013]): "According to first aspect of the present invention there is provided a method for recording spatial information, comprising: forming a point cloud representing objects within a given volume of space; obtaining at least one image from at least one given location within the given volume; determining those points in the point cloud which are visible from the given location and discarding points in the point cloud which are not visible from the given location; using the remaining point cloud data to determine the distance from the given location to locations on the surface of objects represented in the image; and determining three dimensional coordinates of said surface locations." Macrae further teaches ([0057]-[0061]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position." Macrae even further teaches ([0064]): "The vertices remaining after culling are projected to spherical space in a coordinate system which describes the location of a point in terms of radius, pan and tilt. The distance from the camera in three-dimensional (3D) space is also stored against each spherical coordinate."
for each of at least some of the cells of the grid or for each of at least some of the grid points of the grid assigned to more than one point group: determining a minimum and/or maximum spacing based on spacings of the point groups from the origin of the target coordinate system;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded."
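A hypothetical continuation of the sketch above, covering step c. of the quoted algorithm alongside the per-cell minimum and maximum spacings recited in the claim (again, all identifiers are assumed, not Macrae's):

    def reduce_buckets(buckets):
        # Step c. of the quoted algorithm: where a bucket holds more than one
        # vertex, retain the one closest to the camera. The per-bucket minimum
        # and maximum radii are also collected, mirroring the claimed
        # "minimum and/or maximum spacing" per cell or grid point.
        kept, spacing = {}, {}
        for key, entries in buckets.items():
            radii = [r for (r, _, _) in entries]
            spacing[key] = (min(radii), max(radii))
            kept[key] = min(entries, key=lambda e: e[0])  # closest survives
        return kept, spacing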
performing: a) determining a target quantity of point groups visible from the specified viewing point in the surrounding area, including those point groups whose spacing from the origin of the target coordinate system per cell or grid point exceeds the minimum spacing by at most a specified first threshold value, and/or whose spacing from the origin of the target coordinate system per cell or grid point is less than the maximum spacing minus a specified second threshold value, and including those point groups which are assigned to cells or grid points to which only one point group is assigned, and/or b) determining a target quantity of point groups which are not visible from the specified viewing point in the surrounding area, comprising those point groups whose spacing from the origin of the target coordinate system per cell or grid point exceeds the minimum spacing by more than the specified first threshold value, and/or whose spacing from the origin of the target coordinate system per cell or grid point exceeds the maximum spacing minus the specified second threshold value;
Macrae teaches ([0057]-[0062]): "For each camera position, and thus each spherical image, the point cloud data is reduced using a technique referred to herein as “occlusion culling”, which removes points which are not visible from the camera position. One form of an occlusion culling algorithm operates as follows: a. A number of “buckets” are created. For example, these may correspond to: 1. The number of pixels in the spherical image (for example, an image of dimensions 12880×6440 would result in 82,947,200 buckets) or a multiple or sub-multiple thereof 2. Portions of a degree of rotation in a sphere, e.g. every 0.5 degrees in both pan and tilt directions. b. Each vertex in the source point cloud is evaluated and assigned to a bucket based on the horizontal and vertical angles between the vertex and the camera position. c. If the assigned bucket already contains a vertex, the closest to the camera is retained and the further away is discarded."
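For context, the claimed determination of limitations a) and b) could be sketched as follows; only the minimum-spacing branch of the claim's "and/or" is shown, the claim recites no implementation, and every identifier here (including the threshold parameter) is an assumption rather than Macrae's or the applicant's code:

    def partition_groups(cells, threshold_t1):
        # cells: mapping of cell -> list of (spacing_from_origin, point_group)
        visible, not_visible = [], []
        for entries in cells.values():
            if len(entries) == 1:
                visible.append(entries[0][1])  # sole occupant of a cell is treated as visible
                continue
            min_spacing = min(s for s, _ in entries)
            for spacing, group in entries:
                if spacing - min_spacing <= threshold_t1:
                    visible.append(group)       # a) exceeds the minimum by at most t1
                else:
                    not_visible.append(group)   # b) exceeds the minimum by more than t1
        return visible, not_visible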
However, Macrae does not outright teach determining point groups in a surrounding area for navigation for an at least partially automated moving vehicle, and providing, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area. Eade teaches localization of an autonomous vehicle, comprising:
determining… point groups in a surrounding area for navigation for an at least partially automated moving vehicle,
Eade teaches ([0180]): "Sequence 680 of FIG. 23 may be executed in the primary control loop for the vehicle, e.g., in block 622 of FIG. 20, and may correspond generally to the map pose functionality 200 of the localization subsystem 152 of FIG. 2." Eade further teaches ([0181]): "Sequence 680 of FIG. 23 therefore begins in block 682 by assembling sensor data (e.g., LIDAR data) into a point cloud to represent the surfaces that are currently visible from the vehicle." Eade even further teaches ([0062]): "In the illustrated implementation, autonomous control over vehicle 100 (which may include various degrees of autonomy as well as selectively autonomous functionality) is primarily implemented in a primary vehicle control system 120, which may include one or more processors 122 and one or more memories 124, with each processor 122 configured to execute program code instructions 126 stored in a memory 124." FIG. 20, included above, demonstrates that trajectory planning is provided as part of the primary control loop for the vehicle, which is described above as utilizing the determined target quantity of point groups visible from the specified viewing point in the surrounding area.
and providing, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area.
Eade teaches ([0180]): "Sequence 680 of FIG. 23 may be executed in the primary control loop for the vehicle, e.g., in block 622 of FIG. 20, and may correspond generally to the map pose functionality 200 of the localization subsystem 152 of FIG. 2." Eade further teaches ([0181]): "Sequence 680 of FIG. 23 therefore begins in block 682 by assembling sensor data (e.g., LIDAR data) into a point cloud to represent the surfaces that are currently visible from the vehicle." FIG. 20, included above, demonstrates that trajectory planning is provided as part of the primary control loop for the vehicle, which is described above as utilizing the determined target quantity of point groups visible from the specified viewing point in the surrounding area.
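Purely as an illustration of the cited mapping, one cycle of a control loop of the kind quoted from Eade might look as follows in outline; the two-dimensional pose, the identifiers, and the planner callback are all assumptions and do not come from Eade:

    import math

    def localization_cycle(lidar_returns, vehicle_xy, vehicle_heading, plan_trajectory):
        # Assemble vehicle-frame LIDAR returns into a map-frame point cloud
        # (cf. Eade [0181]), then provide it for trajectory planning within
        # the primary control loop (cf. FIG. 20).
        cos_h, sin_h = math.cos(vehicle_heading), math.sin(vehicle_heading)
        point_cloud = []
        for (rx, ry) in lidar_returns:
            mx = vehicle_xy[0] + cos_h * rx - sin_h * ry
            my = vehicle_xy[1] + sin_h * rx + cos_h * ry
            point_cloud.append((mx, my))
        return plan_trajectory(point_cloud)

The plan_trajectory callback stands in for the trajectory-planning block of FIG. 20.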
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Macrae to incorporate the teachings of Eade to provide determining point groups in a surrounding area for navigation for an at least partially automated moving vehicle, and providing, for use for navigation and/or movement of the at least partially automated moving vehicle: (i) the determined target quantity of point groups visible from the specified viewing point in the surrounding area, and/or (ii) the determine target quantity of point groups not visible from the specified viewing point in the surrounding area. Macrae and Eade are each directed towards similar pursuits in the field of point cloud processing. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Eade, as doing so beneficially improves vehicle localization by providing a more precise and accurate representation of the environment surrounding the autonomous vehicle and the vehicle's location and orientation within that environment, as recognized by Eade (see at least [0085]). The implementation of Eade provides the further benefit of allowing for perceiving dynamic objects such as pedestrians and other vehicles within the environment, as recognized by Eade (see at least [0088]).
Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Macrae and Eade in view of Kanzawa (US 2020/0386894 A1).
Regarding claim 17, Macrae and Eade teach the aforementioned limitations of claim 16. However, Macrae does not outright teach that the specified first and/or the second threshold value depends on a respective minimum or maximum spacing, corresponding to the respective minimum or maximum spacing multiplied by a respective factor. Kanzawa teaches systems and methods for reducing LiDAR points, comprising:
the specified first and/or the second threshold value depends on a respective minimum or maximum spacing, corresponding to the respective minimum or maximum spacing multiplied by a respective factor.
Kanzawa teaches ([0049]): "At 720, the reduction module 220 determines a minimum distance for the points 285 associated with the cell. The reduction module 220 may inspect the distances calculated at 710, and may select the minimum distance from among the calculated distances." Kanzawa further teaches ([0050]): "At 730, the reduction module 220 calculates the threshold 293 for the cell based on the minimum distance. The reduction module 220 may calculate the threshold 293 for the cell such that a cell with a low minimum distance receives a lower threshold 293 than a cell with a high minimum distance. The threshold 293 may be proportional to the calculated minimum distance. Any method for calculating a threshold 293 may be used."
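As a minimal sketch of the quoted proportional threshold (the factor of 0.1 is an assumed value, since Kanzawa states that any method for calculating a threshold 293 may be used):

    def cell_threshold(cell_distances, factor=0.1):
        # Kanzawa [0049]: select the minimum of the distances calculated for
        # the cell; [0050]: the threshold is proportional to that minimum, so
        # a closer cell receives a lower threshold than a farther cell.
        return factor * min(cell_distances)

Under this sketch, a cell whose nearest point lies 5 m away receives a 0.5 m threshold, while a cell whose nearest point lies 50 m away receives 5 m, reproducing the quoted proportionality.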
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Macrae and Eade to incorporate the teachings of Kanzawa to provide that the specified first and/or the second threshold value depends on a respective minimum or maximum spacing, corresponding to the respective minimum or maximum spacing multiplied by a respective factor. Macrae, Eade, and Kanzawa are each directed towards similar pursuits in the field of point cloud processing. Accordingly, one of ordinary skill in the art would find it advantageous to incorporate the teachings of Kanzawa, as incorporating the threshold value of Kanzawa advantageously allows for removal of points from close cells, thereby reducing the total number of points without compromising the effectiveness of the remaining points with respect to one or more vehicle functions, as recognized by Kanzawa (see at least [0029]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Dickmann et al. (US 2006/0023917 A1) teaches an object detection method for vehicles which utilizes a grid having cells, wherein an incremental dimension of the grid cells increases, at least over one partial region of the grid, with increasing radial distance from the sensor (see at least [0037]). Qiu et al. (US 2020/0124725 A1) teaches navigable region recognition and topology matching, including identifying a subset of a plurality of scanning points (see at least [0005]), and utilizing grids based on a polar coordinate system (see at least [0036]-[0038]).
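For illustration, the two grid concepts cited above might be combined in a sketch such as the following; the 5-degree azimuth bin, the geometric growth factor, and all names are assumptions, not drawn from Dickmann or Qiu:

    import math

    def polar_cell(x, y, angle_deg=5.0, base_range=1.0, growth=1.2):
        # Azimuth bins follow a polar grid (cf. Qiu [0036]-[0038]); radial bin
        # widths form a geometric series (growth > 1), so cells widen with
        # radial distance from the sensor (cf. Dickmann [0037]).
        rng = math.hypot(x, y)
        azimuth_bin = int(math.degrees(math.atan2(y, x)) // angle_deg)
        radial_bin = int(math.log(1.0 + rng * (growth - 1.0) / base_range, growth))
        return (azimuth_bin, radial_bin)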
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK T GLENN III whose telephone number is (571)272-5078. The examiner can normally be reached M-F 7:30AM - 4:30PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jelani Smith, can be reached at 571-270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/F.T.G./Examiner, Art Unit 3662
/DALE W HILGENDORF/Primary Examiner, Art Unit 3662