Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of priority Application No. JP2022-104777, filed on 06/29/2022, has been received.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/04/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“first acquisition unit” (claims 1 and 10): The specification describes the first acquisition unit 110 as acquiring first sensor data (image data) from the first sensor 20 (camera), including data capable of detecting fixed and non-fixed objects in the periphery of the autonomous moving body. Accordingly, under the broadest reasonable interpretation, the first acquisition unit is interpreted as a processor-executed acquisition module (hybrid hardware/software) interfaced to the camera sensor for obtaining first sensor data.
“second acquisition unit” (claims 1 and 10): The specification describes the second acquisition unit 111 as acquiring acceleration and angular velocity (second sensor data) from the second sensor 21 (IMU). Accordingly, the second acquisition unit is interpreted as a processor-executed acquisition module (hybrid hardware/software) interfaced to the IMU for obtaining second sensor data.
“first estimation unit” (claims 1, 5, and 10): The specification describes the first estimation unit 113 as estimating a partial parameter (at least one of six degrees-of-freedom parameters) based on known information D2 recorded in storage and on the first sensor data, and further defines the known information and mark examples (e.g., paint/white line, marker/signboard). The specification also provides additional functional detail for embodiments in which the first estimation unit estimates horizontal-direction position and posture (rotation around a vertical axis) from landmarks/marks. See, e.g., [0177]. Accordingly, the first estimation unit is interpreted as a processor-executed estimation module (hybrid hardware/software, algorithmic) that derives a partial pose parameter from first sensor data and pre-recorded known information.
“second estimation unit” (claims 1–4 and 10): The specification describes the second estimation unit 114 as estimating the self-position represented by six degrees-of-freedom parameters based on the first sensor data, second sensor data, and the partial parameter estimated by the first estimation unit 113. Accordingly, the second estimation unit is interpreted as a processor-executed estimation/fusion module (hybrid hardware/software, algorithmic) that estimates full self-position (6-DoF pose) using first sensor data, second sensor data, and the partial parameter.
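For purposes of illustration only (the following sketch and every name in it are the examiner's hypothetical, drawn neither from applicant's disclosure nor from the cited art), the interpreted relationship — a first estimation unit supplying a subset of the six degrees-of-freedom parameters and a second estimation unit completing the full pose from that subset together with inertial data — may be expressed as:

```python
from dataclasses import dataclass

# Hypothetical illustration of a 6-DoF pose: three translations
# (x, y, z) and three rotations (roll, pitch, yaw).
DOF_NAMES = ("x", "y", "z", "roll", "pitch", "yaw")

@dataclass
class PartialParameter:
    """Subset of the six DoF estimated by a first estimation unit,
    e.g., horizontal position x and rotation about the vertical axis (yaw)."""
    values: dict  # e.g., {"x": 1.2, "yaw": 0.05}

def fuse_pose(partial: PartialParameter, imu_estimate: dict) -> dict:
    """Second-estimation-unit sketch: start from an IMU-propagated full
    6-DoF estimate and pin the DoF fixed by the partial parameter
    derived from first sensor data and known map information."""
    pose = dict(imu_estimate)
    for name, value in partial.values.items():
        assert name in DOF_NAMES
        pose[name] = value  # constraint from landmark/map matching
    return pose

# Usage: IMU dead reckoning supplies all six DoF; landmark matching pins x and yaw.
imu = {"x": 1.0, "y": 2.0, "z": 0.0, "roll": 0.0, "pitch": 0.0, "yaw": 0.10}
fused = fuse_pose(PartialParameter({"x": 1.2, "yaw": 0.05}), imu)
```

The sketch shows only the claimed data flow (partial parameter plus second sensor data into a full pose); it is not a representation of any corresponding structure in the specification.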
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1–5, 8, and 9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, namely mathematical concepts and mental processes, and the claims do not recite additional elements that integrate the exception into a practical application or amount to significantly more than the exception itself.
The claims are directed to estimating self-position of an autonomous moving body using sensor data, known information, partial-parameter estimation, and self-position estimation. Although drafted as a device, method, and program, the claims are focused on collecting information, analyzing that information, and producing a positional estimate.
Step 1
Claims 1–5 are directed to a statutory machine, Claim 8 is directed to a statutory process, and Claim 9 is directed to a statutory manufacture. Accordingly, the analysis proceeds to Step 2A.
Step 2A, Prong One
The claims recite an abstract idea. More specifically, the claims recite mathematical concepts because they require estimating a partial parameter and a self-position represented by six degrees-of-freedom parameters. The dependent claims further recite optimization calculation using constraints based on the partial parameter and motion of the autonomous moving body. These limitations recite mathematical relationships, calculations, and optimization techniques for determining position and posture.
The claims also recite mental processes because they involve acquiring sensor data, comparing that data with known information regarding marks or landmarks, evaluating shapes derived from extracted landmarks, and determining position-related parameters from those observations. These are observation, comparison, evaluation, and judgment steps that, in substance, amount to information analysis.
Accordingly, the claims recite abstract ideas in the form of mathematical localization calculations and information analysis.
Step 2A, Prong Two
The claims do not integrate the judicial exception into a practical application. Although the claims are presented in the context of an autonomous moving body, that field of use does not meaningfully limit the abstract idea. The claims do not recite controlling the autonomous moving body, changing its movement, steering, braking, route correction, collision avoidance, or otherwise applying the estimated self-position to improve machine operation. Instead, the claims end with the result of estimating a parameter or self-position.
The additional elements also do not integrate the exception into a practical application. The claims recite a first acquisition unit, second acquisition unit, first estimation unit, and second estimation unit. These elements merely gather data and perform the claimed analysis. They do not recite any particular improvement in computer functionality, sensor operation, or machine control. Rather, they serve as tools for carrying out the abstract calculation.
Likewise, the claim limitations directed to landmarks, known information, connecting landmarks into shapes, partial parameters, optimization constraints, and motion constraints do not impose a meaningful limit on the abstract idea. These limitations merely define the particular data inputs, rules, and constraints used in performing the estimation. They refine the abstract analysis itself, but they do not apply the result in any meaningful technological manner beyond the calculation.
Any alleged improvement in estimation accuracy, speed, or reduction of cumulative error is an improvement in the result of the mathematical analysis, not an improvement in the functioning of the computer, sensors, or another technology. The claims therefore are not directed to a technological improvement, but instead to using mathematical analysis to generate a more accurate estimated position.
Accordingly, the claims do not integrate the judicial exception into a practical application.
Step 2B
The claims do not recite significantly more than the judicial exception.
The additional elements, considered individually and as an ordered combination, amount only to generic data acquisition and data processing used to perform the abstract idea. The claims recite generic functional components for obtaining sensor data and performing estimation. There is no recitation of a particular machine configuration, a specialized sensor architecture, a specific control mechanism, or any non-conventional technical implementation that would transform the abstract idea into patent-eligible subject matter.
Further, the ordered combination of elements does not add an inventive concept. The claims simply recite collecting information, performing mathematical estimation using that information, optionally applying optimization constraints, and outputting an estimated self-position. That combination merely implements the abstract idea in a routine and conventional manner.
Limiting the claims to an autonomous moving body also does not supply significantly more. Restricting an abstract idea to a particular technological environment is not sufficient to confer eligibility.
Nor does the absence of complete preemption render the claims eligible. Even if the claims do not preempt every application of localization analysis, they remain directed to an abstract idea without significantly more.
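For illustration only (hypothetical code by the examiner; none of the names below appear in the claims or the cited references), the kind of generic weighted estimation the claims recite — fusing a motion-data prediction with a landmark-based partial measurement — can be performed by routine calculation on any general-purpose computer:

```python
# Hypothetical illustration: fuse a prediction from motion (second sensor)
# data with a landmark-based partial measurement by weighted least squares,
# i.e., minimize (x - predicted)^2/pred_var + (x - measured)^2/meas_var.

def fuse_1d(predicted: float, pred_var: float,
            measured: float, meas_var: float) -> float:
    """Closed-form minimizer of the two-term weighted cost above."""
    w_p, w_m = 1.0 / pred_var, 1.0 / meas_var
    return (w_p * predicted + w_m * measured) / (w_p + w_m)

# Equal variances yield the midpoint of prediction and measurement.
x = fuse_1d(predicted=10.0, pred_var=4.0, measured=12.0, meas_var=4.0)  # → 11.0
```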
Examiner 101 Conclusion
Claims 1–5, 8, and 9 are therefore rejected under 35 U.S.C. 101 as being directed to patent-ineligible subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Sano (US 2020/0019792 A1) in view of Roumeliotis (US 2014/0333741 A1), and further in view of Zhang (US 2019/0271549 A1).
Regarding Claim 1,
Disclosure by Sano
Sano discloses:
A self-position estimation device
See at least: “self-position estimation device” ([0001]); “SELF POSITION-ESTIMATION UNIT 36” (FIG. 1)
Rationale: The reference expressly discloses a device architecture for self-position estimation, including a dedicated self-position estimation unit.
that estimates a self-position
See at least: “thereby estimating a self-position which is a current position of the moving object.” (Abstract); “The self-position estimation unit 36 estimates a self-position which is a current position of the vehicle V …” ([0029])
Rationale: Sano expressly discloses estimating a self-position/current position.
in a driving area
See at least: “target present on a road or around the road” (Abstract / [0029])
Rationale: The “road or around the road” environment where targets are detected corresponds to the claimed driving area.
of an autonomous moving body,
See at least: “The moving object is not limited to the vehicle V … but includes … other moving objects.” ([0064]); “automatic driving control” ([0026])
Rationale: The disclosure of a vehicle or moving object utilizing automatic driving control maps to the claimed autonomous moving body.
the self-position estimation device comprising:
See at least: “The processing unit 3 includes: a target position detection unit 31, a moved amount estimation unit 32... a self-position estimation unit 36...” ([0026])
Rationale: This is an express multi-unit device composition disclosure for the self-position estimation device.
a first acquisition unit
See at least: “surrounding sensor group 1” / “TARGET POSITION DETECTION UNIT 31” (FIG. 1)
Rationale: The surrounding sensor group and target position detection unit function as a first acquisition unit.
that acquires first sensor data
See at least: “The target position detection unit 31 detects a relative position ... on the basis of a detection result of at least any one of the LRFs 101 and 102 and the cameras 201 and 202.” ([0027])
Rationale: The detection results from the external-facing cameras and LRFs constitute the first sensor data.
capable of detecting a fixed object
See at least: “curb 61 can be detected …” ([0032]); “detects white lines 62 and 63 … present at both sides of the vehicle V” ([0033]); “targets present on a road or around the road” (Abstract)
Rationale: Curbs and white lines are stationary road features, satisfying the fixed object limitation.
always present in a periphery of the autonomous moving body
See at least: “detects white lines 62 and 63 … present at both sides of the vehicle V” ([0033])
Rationale: Lane boundaries and curbs are permanent environmental features consistently located in the periphery of the vehicle.
and a non-fixed object
See at least: “target present in the surroundings of the vehicle V” ([0027]); “target attribute estimation unit 37 estimates an attribute of the target …” ([0029])
Rationale: Sano expressly detects general surrounding targets and performs attribute estimation/reliability filtering for localization. In a road-environment sensing stream, a PHOSITA would understand that the detected targets necessarily include both fixed landmarks and transient non-fixed objects (e.g., vehicles or pedestrians); Sano's disclosure of filtering "unreliable" targets for localization renders obvious the detection of non-fixed objects.
temporarily present in the periphery of the autonomous moving body;
See at least: “eliminate unreliable target” (FIG. 3, S11)
Rationale: Sano’s reliability-based elimination of targets in a road-scene localization pipeline predictably de-prioritizes detections that are not map-consistent. A PHOSITA would recognize that such detections include objects temporarily present in the periphery, rendering obvious the claimed limitation.
a second acquisition unit
See at least: “vehicle sensor group 5 includes …” ([0025])
Rationale: The vehicle sensor group and its outputs satisfy the second acquisition unit limitation.
that acquires second sensor data
See at least: “Each sensor 51 to 57 is connected to the processing unit 3 and is configured to sequentially output various detection results to the processing unit 3.” ([0025])
Rationale: The output of the vehicle sensors constitutes the second sensor data.
including an acceleration
See at least: “an acceleration sensor 56 …” ([0025])
Rationale: The system acquires data including an acceleration.
and an angular velocity
See at least: “other sensors, such as a yaw rate sensor” ([0025]); “inertia measurement method using a gyroscope” ([0025])
Rationale: Yaw rate/gyroscope measurements are measurements of angular velocity.
of the autonomous moving body;
See at least: “moved amount of the vehicle V” ([0025])
Rationale: Acceleration and angular velocity measurements are motion data of the autonomous moving body (vehicle).
a first estimation unit
See at least: “moved amount estimation unit 32…straight line extracting unit 34” ([0026])
Rationale: Sano’s moved amount estimation unit and straight line extracting unit collectively perform intermediate estimation/processing prior to the final self-position estimation unit, and thus correspond to the claimed first estimation unit.
that estimates a partial parameter
See at least: “The moved amount estimation unit 32 detects an odometry which is a moved amount of the vehicle V …” ([0028])
Rationale: Sano’s odometry/moved amount and line-based intermediate processing are used before the final self-position estimation and therefore constitute intermediate estimated quantities, rendering obvious the claimed partial parameter.
representing a position
See at least: “X coordinate” / “Y coordinate” ([0054])
Rationale: The intermediate quantities include coordinates representing a position.
and a posture
See at least: “azimuth angle (yaw angle θ)” ([0054])
Rationale: Yaw represents the posture (heading) of the vehicle.
of the autonomous moving body
See at least: “position and an attitude angle ... of the vehicle V” ([0054])
Rationale: These parameters describe the state of the autonomous moving body.
in a three-dimensional coordinate system,
See at least: “The relative position detected by the target position detection unit 31 is a position in a vehicle coordinate system.” ([0027]); “a target actually having a three-dimensional structure such as curbs …” ([0024])
Rationale: Sano operates in a 3D environment and recognizes 3D structures, supporting the claimed three-dimensional coordinate system context.
based on known information,
See at least: “map information 41 …” ([0024]); “previously set in the…unit.” ([0027])
Rationale: The pre-stored map information is the known information.
in which information
See at least: “the storage unit 4 … store map information 41” ([0024])
Rationale: The storage unit contains the information in which landmarks are recorded.
including a position
See at least: “map information 41 including position information on targets” ([0024])
Rationale: The map information records a position for each target object.
in the driving area
See at least: “target present on a road or around the road” (Abstract)
Rationale: The map data covers the driving area (road).
of a mark
See at least: “targets (landmark) recorded in the map information 41” ([0024])
Rationale: A landmark in Sano's map functions as the claimed mark.
that is one fixed object
See at least: “curbs and white lines” ([0024])
Rationale: A curb or white line is one fixed object.
selected in advance
See at least: “map information 41 …” ([0024]); “previously set in the…unit.” ([0027]); “The self-position estimation unit 36 estimates a self-position … by comparing the selected target position data with the map information …” ([0029])
Rationale: Sano’s localization operates by comparing runtime-detected target data against map information that is pre-stored before the localization operation. A PHOSITA would understand that the fixed objects used as map landmarks are selected in advance and recorded for later matching.
from a plurality of fixed objects
See at least: “map information 41 … curbs and white lines … median strips” ([0021], [0024])
Rationale: The map information includes a plurality of fixed objects (curbs, white lines, median strips) from which the mark may be selected.
is recorded,
See at least: "The targets…recorded in the map information..." ([0024])
Rationale: The landmark information is recorded in the storage unit.
and on the first sensor data;
See at least: “comparing the selected target position data with the map information …” ([0029])
Rationale: The estimation is based on target detection derived from first sensor data (cameras/LRFs) matched to the map.
and a second estimation unit
See at least: “self-position estimation unit 36” ([0026])
Rationale: Unit 36 functions as the second estimation unit.
that estimates a self-position of the autonomous moving body
See at least: “The self-position estimation unit 36 estimates a self-position which is a current position of the vehicle V …” ([0029])
Rationale: This unit estimates a self-position of the autonomous moving body.
based on the first sensor data,
See at least: “detects … on the basis of … LRFs … and … cameras” ([0027])
Rationale: The unit estimates the position based on the first sensor data (LRF/camera results).
the second sensor data,
See at least: “vehicle sensor group 5 … output various detection results” ([0025])
Rationale: The unit estimates the position based on the results from the vehicle sensors, which is the second sensor data.
and the partial parameter,
See at least: “target position storing unit 33 stores a position where the relative position of the target … is moved by the moved amount … as target position data” ([0028])
Rationale: Sano’s final estimate integrates first sensor data (targets), second sensor data (motion), and the partial parameter (moved amount).
Claim Limitations Not Explicitly Disclosed by Sano
that is a part of six degrees-of-freedom parameters
represented by the six degrees-of-freedom parameters
wherein the mark is a plurality of landmarks attached to a part of the fixed object
and the first estimation unit estimates a position in the first axis direction extending in the horizontal direction and the posture represented by the rotation around the third axis extending in the vertical direction, among the six degrees-of-freedom parameters, based on a shape obtained by connecting the plurality of landmarks extracted from the first sensor data and a shape obtained from a disposition of the landmarks recorded in the known information, as the partial parameter.
Disclosure by Roumeliotis
Roumeliotis discloses:
that is a part of six degrees-of-freedom parameters represented by the six degrees-of-freedom parameters
See at least: “track the six-degrees-of-freedom (d.o.f.) position and orientation (pose) of a sensing platform.” ([0003]); “Estimator 22 … process image data 14 and IMU data 18 to compute state estimates for the degrees of freedom of VINS 10 …” ([0031])
Rationale: Roumeliotis expressly supplies the six degrees-of-freedom parameters framework for position and orientation estimation.
Motivation to Combine Sano and Roumeliotis
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano and Roumeliotis before them, to implement Sano's map-matching pipeline within the explicit vision/IMU six-degrees-of-freedom state-estimation framework of Roumeliotis. Both references address autonomous localization using multimodal sensors (cameras/inertial sensors). A PHOSITA would recognize that Sano's map-based landmark selection provides external constraints that, when integrated into Roumeliotis's 6-DoF estimation model, would provide a more robust 3D pose representation while preserving the localization advantages of Sano’s map-matching.
Disclosure by Zhang
Zhang discloses:
wherein the mark is a plurality of landmarks attached to a part of the fixed object
See at least: “edgels within a threshold distance from each other may be automatically clustered on the map” ([0119]); “edgels at the endpoints of the lane line segment” ([0116]); “Landmark Map” (FIG. 2)
Rationale: Zhang teaches mapped edgel clusters associated with road features (lane lines/fixed objects). While not a verbatim disclosure, a PHOSITA would have recognized that representing multiple local feature points on a single structure (e.g., a lane line segment) is an obvious equivalent of a plurality of landmarks attached to a part of the fixed object.
and the first estimation unit estimates a position in the first axis direction extending in the horizontal direction
See at least: “edgels at the endpoints of the lane line segment... provide constraints on both dimensions (e.g. x and y directions)” ([0116])
Rationale: Zhang teaches determining localization constraints along specific dimensions (X and Y), which provides the claimed position in the first axis direction extending in the horizontal direction.
and the posture represented by the rotation around the third axis extending in the vertical direction, among the six degrees-of-freedom parameters,
See at least: “A pose of the vehicle is optimized…” (Abstract); “… rigid 3D transform T” ([0126]) (Zhang); “track the six-degrees-of-freedom (d.o.f.) position and orientation (pose)” ([0003]) (Roumeliotis)
Rationale: Zhang teaches optimization of vehicle pose, and Roumeliotis expressly provides the orientation framework. In vehicle localization, a PHOSITA would understand the claimed yaw/posture subset as rotation around the third axis extending in the vertical direction.
based on a shape
See at least: “line geometry is computed for certain groups of edgels” ([0119])
Rationale: Zhang expressly utilizes geometric structure (shape) for localization.
obtained by connecting the plurality of landmarks
See at least: “line segments in 3D space connecting groups of edgels” ([0119])
Rationale: Zhang teaches forming line segments obtained by connecting the plurality of landmarks (edgels).
extracted from the first sensor data
See at least: “and analyzing the image frame to identify a plurality of edge pixels within the image frame” ([0008]); “edgels corresponding to three-dimensional locations are loaded and mapped to corresponding edge pixels …” (Abstract)
Rationale: Zhang extracts edge pixels from an image frame (first sensor data), thus the landmarks/edgels are extracted from the first sensor data.
and a shape obtained from a disposition of the landmarks
See at least: “the localization system determines structure information [disposition] for an edgel cluster … The determined structure information may be used to help guide correspondence search.” ([0125])
Rationale: Zhang utilizes the disposition (structure) of edgel clusters to guide the correspondence matching.
recorded in the known information,
See at least: “Landmark Map” (FIG. 2 / FIG. 5); “edgels within a threshold distance from each other may be automatically clustered on the map” ([0119]); “line geometry is computed … and stored as part of the map” ([0119])
Rationale: Zhang expressly stores clustered edgels and line geometry in a map (known information), thus the landmark information is recorded in the known information.
as the partial parameter.
See at least: “minimize a distance between the edgels and their corresponding edge pixels.” ([0007])
Rationale: The transformation determined via Zhang's shape-based matching functions as the partial parameter (horizontal pose/yaw) within the combined fused estimation system.
Motivation to Combine Sano, Roumeliotis, and Zhang
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano, Roumeliotis, and Zhang before them, to augment the combined Sano/Roumeliotis system with Zhang’s edgel-clustering and structure-guided pose optimization. Zhang teaches an analogous autonomous-vehicle localization improvement using plural mapped feature points and their structural geometry. Incorporating these teachings into the Sano/Roumeliotis pipeline would predictably improve the robustness of the partial-parameter estimation by providing specific geometric constraints from roadway features extracted from image data, yielding more accurate final pose results in real driving environments.
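As an illustrative aid only (the examiner's hypothetical sketch of the combined teaching; all function and variable names below are assumptions, not drawn from Sano, Roumeliotis, or Zhang), comparing the shape obtained by connecting extracted landmarks with the shape obtained from the landmark disposition recorded in a map can yield a rotation about the vertical axis (yaw) and a horizontal offset as the partial parameter:

```python
import math

# Hypothetical sketch: estimate yaw and a first-axis (horizontal) offset by
# comparing the line segment connecting two extracted landmarks with the
# segment connecting the same landmarks as recorded in the map.

def segment_heading(p, q):
    """Direction of the segment obtained by connecting two landmarks."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def partial_pose_from_shape(observed, mapped):
    """observed/mapped: ((x1, y1), (x2, y2)) landmark pairs in a common frame.
    Returns (yaw_correction, x_offset) as the partial parameter."""
    yaw = segment_heading(*mapped) - segment_heading(*observed)
    # Horizontal (first-axis) offset from segment-midpoint comparison.
    obs_mid_x = (observed[0][0] + observed[1][0]) / 2.0
    map_mid_x = (mapped[0][0] + mapped[1][0]) / 2.0
    return yaw, map_mid_x - obs_mid_x

# Usage: identical headings give zero yaw correction; the map segment sits
# one unit further along the first axis than the observed segment.
yaw, dx = partial_pose_from_shape(((0, 0), (10, 0)), ((1, 0), (11, 0)))
```

The sketch is offered only to show that the combined geometric operation is a predictable calculation; it does not purport to reproduce any reference's actual algorithm.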
Regarding Claim 2,
The combination of Sano, Roumeliotis, and Zhang establishes the self-position estimation device of Claim 1, which is the basis for Claim 2.
Disclosure by Roumeliotis
Roumeliotis discloses:
wherein the second estimation unit estimates a parameter
See at least: “compute respective state estimates for a position and orientation of the VINS” ([0031])
Rationale: Roumeliotis’s estimator (which corresponds to the second estimation unit of the combined system established in Claim 1) computes state estimates, i.e., estimates a parameter of the self-position state.
other than the partial parameter
See at least: “Estimator 22 … process image data 14 and IMU data 18 to compute state estimates for the degrees of freedom of VINS 10 …” ([0031]); “processes the IMU data and the image data associated with the non-keyframes to compute one or more constraints …” ([0031]-[0033]); “The self-position estimation unit 36 estimates a self-position …” ([0029]) (Sano)
Rationale: In the combined system of Claim 1, the first estimation unit provides a partial parameter (as established by the Sano/Roumeliotis/Zhang combination), while Roumeliotis teaches a constrained six-degrees-of-freedom estimation framework for computing the full pose state. A PHOSITA would recognize that the second estimation unit, when solving for the six-degrees-of-freedom pose in that combined system, necessarily estimates at least one parameter other than the partial parameter (and in a full-pose implementation, the remaining pose parameters) to complete the self-position estimate.
among the six degrees-of-freedom parameters.
See at least: “track the six-degrees-of-freedom (d.o.f.) position and orientation (pose) of a sensing platform.” ([0003]); “compute state estimates for the degrees of freedom of VINS 10” ([0031])
Rationale: Roumeliotis expressly provides the six-degrees-of-freedom pose framework, and thus the parameter estimated by the second estimation unit in the combined system is among the six degrees-of-freedom parameters.
Motivation to Combine Sano, Roumeliotis, and Zhang
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano, Roumeliotis, and Zhang before them, to configure the second estimation unit of the combined system to estimate at least one parameter other than the partial parameter provided by the first estimation unit. The combined teachings of Sano and Zhang provide the landmark/map-based partial estimation constraints established for Claim 1, while Roumeliotis provides the framework for full six-degrees-of-freedom state estimation. A PHOSITA would have been motivated to have the second estimator calculate at least one additional pose parameter (and in a full 6-DoF implementation, the remaining pose parameters) to achieve a complete self-position representation in a three-dimensional environment, yielding the predictable result of a more complete and accurate pose estimate.
Regarding Claim 3,
The combination of Sano, Roumeliotis, and Zhang establishes the self-position estimation device of Claim 1, which is the basis for Claim 3.
Disclosure by Sano
Sano discloses:
wherein the second estimation unit estimates all the six degrees-of-freedom parameters
See at least: “The self-position estimation unit 36 estimates a self-position which is a current position of the vehicle V …” ([0029]); “The self-position estimation unit 36 estimates a position and an attitude angle of total three degrees of freedom … X coordinate … Y coordinate … yaw angle θ …” ([0054])
Rationale: Sano expressly discloses that the self-position estimation unit performs self-position estimation. While Sano explicitly focuses on three degrees of freedom (planar pose), it establishes the functional role of the second estimation unit in the pose-estimation pipeline, with the explicit framework for all the six degrees-of-freedom parameters supplied by the secondary reference below.
Claim Limitations Not Explicitly Disclosed by Sano
Sano does not explicitly disclose the following claim limitations:
all the six degrees-of-freedom parameters
by performing optimization calculation
to which a constraint based on the partial parameter is added.
Disclosure by Roumeliotis
Roumeliotis discloses:
all the six degrees-of-freedom parameters
See at least: “track the six-degrees-of-freedom (d.o.f.) position and orientation (pose) of a sensing platform.” ([0003]); “Estimator 22 … process image data 14 and IMU data 18 to compute state estimates for the degrees of freedom of VINS 10 …” ([0031])
Rationale: Roumeliotis expressly teaches an estimator (second estimation unit) computing state estimates within a six-degrees-of-freedom pose framework, thereby supplying the claimed “all the six degrees-of-freedom parameters” limitation.
by performing optimization calculation
See at least: “optimization problem” ([0059]); “Gauss-Newton” ([0044])
Rationale: Roumeliotis teaches solving the pose estimation through the use of an optimization problem and algorithms like Gauss-Newton, which constitutes performing optimization calculation.
to which a constraint based on the partial parameter is added.
See at least: “a processing unit comprising an estimator that processes the IMU data and the image data to compute state estimates of the VINS. The estimator computes the state estimates of the VINS for the keyframe by constraining the state estimates based on the IMU data and the image data for the one or more non-key frames of the VINS without computing state estimates of the VINS for the one or more non-keyframes.” (Abstract)
Rationale: Roumeliotis teaches performing optimization calculation to which constraints are added. In the combined Claim 1 system, the partial parameter (the intermediate geometric/horizontal pose information) functions as the constraint based on the partial parameter that is added to the second estimator's full-pose optimization.
Motivation to Combine Sano and Roumeliotis
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano and Roumeliotis before them, to implement Sano’s self-position estimation pipeline within Roumeliotis’s six-degrees-of-freedom constrained optimization framework. Sano provides a vehicle self-position estimation pipeline with intermediate pose-related quantities and map-based landmark matching, while Roumeliotis provides explicit full-pose estimation using optimization with constraints. A PHOSITA would have been motivated to combine them to achieve a mathematically consistent and robust 6-DoF self-position estimator by using map-derived constraints to stabilize the state estimation, resulting in predictable improvements in 3D navigation accuracy.
Claim Limitations Not Explicitly Disclosed by the Combination of Sano and Roumeliotis
After combining the teachings of Sano and Roumeliotis, the following claim limitations are not explicitly disclosed:
to which a constraint based on the partial parameter is added (specifically, the shape/disposition-based partial parameter established by the Zhang/Claim 1 combination).
Disclosure by Zhang
Zhang provides teachings for the following remaining elements:
to which a constraint based on the partial parameter is added.
See at least: “edgels at the endpoints of the lane line segment may be sufficient for performing localization, since they provide constraints on both dimensions (e.g. x and y directions)” ([0116]); “the localization system determines structure information for an edgel cluster … The determined structure information may be used to help guide correspondence search.” ([0125]); “With a set of edgels and their correspondences, the localization system attempts to optimize the pose of the vehicle …” ([0126])
Rationale: Zhang teaches that shape/feature-derived information (the partial parameter in the combined system) provides localization constraints and is used to guide/optimize the final pose. This strengthens the teaching that the optimization calculation includes a constraint based on the partial parameter.
Motivation to Combine Sano, Roumeliotis, and Zhang
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano, Roumeliotis, and Zhang before them, to incorporate Zhang’s feature-geometry/edgel-based localization constraints into the combined Sano/Roumeliotis optimization-based self-position estimator. Zhang teaches a directly analogous way of deriving geometric constraints from mapped roadway features for pose optimization. Incorporating these specific constraints as the partial parameter into Roumeliotis’s optimization framework would have predictably improved the stability and accuracy of the full six-degrees-of-freedom estimate within Sano’s map-matching localization pipeline.
Regarding Claim 4,
The combination of Sano, Roumeliotis, and Zhang establishes the self-position estimation device of Claim 3, which is the basis for Claim 4.
Disclosure by Roumeliotis
Roumeliotis discloses:
to which a motion constraint of the autonomous moving body is further added.
See at least: “Estimator 22 processes the IMU data and the image data associated with the non-keyframes to compute one or more constraints. Estimator 22 then constrains the state estimates … based on the constraints.” ([0033]); “Inertial Measurement Unit (IMU) … detect a current rate of acceleration … detect changes in rotational attributes like pitch, roll and yaw …” ([0030])
Rationale: Roumeliotis expressly teaches a constrained state-estimation/optimization framework and expressly uses IMU-derived motion information of the moving platform (acceleration and rotational changes). In the combined system established through Claim 3 (which already includes optimization calculation with a constraint based on the partial parameter), a PHOSITA would have recognized it as an obvious and predictable implementation to further add a motion constraint of the autonomous moving body (derived from IMU motion behavior/kinematics) to the optimization calculation. This addition ensures that the estimated self-position remains consistent with the platform motion detected by the second acquisition unit.
Motivation to Combine Sano, Roumeliotis, and Zhang
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano, Roumeliotis, and Zhang before them, to further add a motion constraint of the autonomous moving body to the optimization calculation of the second estimation unit in the Claim 3 combined system, because Claim 3’s established combination already provides optimization-based self-position estimation with a partial-parameter-based constraint, and Roumeliotis teaches using motion-derived inertial constraints in a constrained pose-estimation framework. A PHOSITA would have been motivated to include that additional motion constraint to improve physical consistency, trajectory realism, and robustness of the optimized self-position estimate in a three-dimensional driving environment, yielding a predictable improvement in localization accuracy.
Regarding Claim 5,
The combination of Sano, Roumeliotis, and Zhang establishes the self-position estimation device of Claim 1, which is the basis for Claim 5.
Disclosure by Sano
Sano discloses the following limitations:
wherein the mark is at least one of paint applied to the driving area
See at least: “The targets (landmark) recorded in the map information 41 includes, for example, the targets indicating the traveling lane boundaries, e.g. white lines and lane boundaries … road markings …” ([0024])
Rationale: Sano expressly discloses white lines and road markings as mapped roadway landmarks. A PHOSITA would understand such white lines/road markings as paint applied to the driving area, thereby disclosing or rendering obvious this limitation.
and a marker installed in the driving area, and
See at least: “The targets (landmark) recorded in the map information 41 includes … structures, e.g. curbs …” ([0024])
Rationale: Sano expressly discloses fixed roadway structures (e.g., curbs) used as localization landmarks. While not verbatim “marker installed,” a PHOSITA would have recognized such fixed roadway landmark structures as obvious equivalent markers in the driving-area localization context.
the first estimation unit estimates a position
See at least: “The straight line extracting unit 34 extracts linear information …” ([0029]); “The self-position estimation unit 36 estimates a position and an attitude angle …” ([0054])
Rationale: In the Claim 1 combined system, the first estimation unit’s role is already established as the preliminary estimator that outputs a partial parameter. Sano supports the claimed position-estimation content by expressly disclosing position/attitude estimation and intermediate line-information extraction from roadway landmarks in the localization pipeline.
in at least one of a first axis direction
See at least: “a position in the east-west direction (X coordinate)” ([0054])
Rationale: Sano expressly teaches position estimation in one horizontal axis direction (X), which maps to the recited first axis direction.
and a second axis direction
See at least: “a position in the north-south direction (Y coordinate)” ([0054])
Rationale: Sano expressly teaches position estimation in a second horizontal axis direction (Y), which maps to the recited second axis direction.
extending in a horizontal direction
See at least: “east-west direction (X coordinate)” ([0054]); “north-south direction (Y coordinate)” ([0054])
Rationale: East-west and north-south directions are horizontal directions, thereby disclosing or rendering obvious this limitation of extending in a horizontal direction.
and a posture
See at least: “attitude angle” ([0054])
Rationale: Sano expressly discloses an attitude angle, which is a posture parameter.
represented by rotation around a third axis
See at least: “azimuth angle (yaw angle θ)” ([0054])
Rationale: Sano expressly discloses yaw/azimuth angle. A PHOSITA would understand yaw as rotation around a third axis in a vehicle/world coordinate system, thereby disclosing or rendering obvious this limitation.
extending in a vertical direction,
See at least: “azimuth angle (yaw angle θ)” ([0054]); “vehicle coordinate system” ([0027])
Rationale: In vehicle localization conventions, yaw is rotation about a vertical axis. Thus, Sano’s yaw-angle disclosure renders obvious the recited third axis extending in a vertical direction.
based on at least one of the paint and the marker
See at least: “targets indicating the traveling lane boundaries, e.g. white lines and lane boundaries … road markings … structures, e.g. curbs …” ([0024]); “The self-position estimation unit 36 estimates a self-position … by comparing the selected target position data with the map information …” ([0029])
Rationale: Sano expressly teaches localization using mapped roadway markings/white lines and fixed roadway structures (e.g., curbs) as landmarks. These correspond to the claimed paint and marker categories and support estimation based on at least one such landmark type.
extracted from the first sensor data
See at least: “The target position detection unit 31 detects a relative position between a target present in the surroundings of the vehicle V and the vehicle V on the basis of a detection result of at least any one of the LRFs 101 and 102 and the cameras 201 and 202.” ([0027]); “The straight line extracting unit 34 extracts linear information from the target position data …” ([0029])
Rationale: Sano expressly teaches detecting targets from camera/LRF outputs (first sensor data) and extracting line-related information from that target data, thereby disclosing or rendering obvious extraction of paint/marker-based landmark information extracted from the first sensor data.
and on the known information,
See at least: “map information 41” ([0024]); “The self-position estimation unit 36 estimates a self-position … by comparing the selected target position data with the map information …” ([0029])
Rationale: Sano expressly teaches using pre-stored map information (known information) in localization estimation.
as the partial parameter.
See at least: “The moved amount estimation unit 32 detects an odometry which is a moved amount of the vehicle V …” ([0028]); “The self-position estimation unit 36 estimates a position and an attitude angle …” ([0054])
Rationale: In the Claim 1 combined system, the first estimation unit is established as outputting the partial parameter. Sano supports the content and pipeline context for that partial parameter by disclosing intermediate estimated quantities and position/attitude estimation derived from mapped roadway landmarks prior to/within the overall self-position estimation process.
Claim Limitations Not Explicitly Disclosed by Sano
Sano does not explicitly disclose the following claim limitations:
among the six degrees-of-freedom parameters,
Disclosure by Roumeliotis
Roumeliotis discloses:
among the six degrees-of-freedom parameters,
See at least: “VINS fuses data from a camera and an Inertial Measurement Unit (IMU) to track the six-degrees-of-freedom (d.o.f.) position and orientation (pose) of a sensing platform.” ([0003]); “Estimator 22 … process image data 14 and IMU data 18 to compute state estimates for the degrees of freedom of VINS 10 …” ([0031])
Rationale: Roumeliotis expressly provides the six-degrees-of-freedom pose framework. Thus, the position/posture subset estimated by the first estimation unit in the established Claim 1 combined system is among the six degrees-of-freedom parameters.
Motivation to Combine Sano, Roumeliotis, and Zhang
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano, Roumeliotis, and Zhang before them, with Zhang retained as part of the already-established Claim 1 combination, to implement Sano’s roadway-landmark-based position/posture estimation within Roumeliotis’s explicit six-degrees-of-freedom pose framework. Sano teaches estimation based on fixed roadway features including road markings/white lines and curbs, and Roumeliotis teaches an express 6-DoF state representation for localization. Combining them would have predictably clarified that the recited horizontal position and yaw/posture quantities are a subset among the six degrees of freedom in a three-dimensional self-position estimation system.
Regarding Claim 8,
Disclosure by Sano
Sano teaches:
A self-position estimation method
See at least: "The present embodiment relates to a self-position estimation method and a self-position estimation device" ([0019])
Rationale: Sano expressly teaches a self-position estimation method for a vehicle.
that estimates a self-position
See at least: "estimate a self-position which is a current position of the vehicle V" ([0019])
Rationale: The method is performed to estimate a self-position of the vehicle.
in a driving area
See at least: "target present on a road or around the road" (Abstract)
Rationale: The "road or around the road" environment corresponds to the claimed driving area.
of an autonomous moving body,
See at least: "The moving object is not limited to the vehicle V ... but includes ... other moving objects." ([0064]); "automatic driving control" ([0026])
Rationale: A vehicle or moving object utilizing automatic driving control teaches the claimed autonomous moving body.
the self-position estimation method comprising:
See at least: "FIG. 3 is a flowchart showing a process of the self-position estimation method according to the present embodiment." ([0047])
Rationale: Sano teaches a method comprising the steps shown in the process flowchart.
a step of acquiring first sensor data
See at least: "surrounding sensor group 1" ([0021]); "The target position detection unit 31 detects a relative position ... on the basis of a detection result of at least any one of the LRFs 101 and 102 and the cameras 201 and 202." ([0027])
Rationale: Detecting relative positions of targets using cameras and LRFs (the surrounding sensor group) teaches a step of acquiring first sensor data.
capable of detecting a fixed object
See at least: "curb 61 can be detected ..." ([0030]); "detects white lines 62 and 63 ... present at both sides of the vehicle V" ([0033])
Rationale: Sano teaches detecting a fixed object such as white lines and curbs.
always present in a periphery of the autonomous moving body
See at least: "detects white lines 62 and 63 … present at both sides of the vehicle V" ([0033])
Rationale: Permanent structures like lane lines are always present in a periphery (at both sides) of the moving body during travel.
and a non-fixed object
See at least: "target present in the surroundings of the vehicle V" ([0027]); "target attribute estimation unit 37 estimates an attribute of the target ..." ([0029])
Rationale: Sano teaches detecting surrounding targets and performing attribute/reliability estimation to select map landmarks. A PHOSITA would understand that road-environment sensor data necessarily include both fixed landmarks and transient non-fixed objects (e.g., other vehicles); Sano's teaching of filtering "unreliable" targets for map-matching renders obvious the acquisition of data representing non-fixed objects.
temporarily present in the periphery of the autonomous moving body;
See at least: "eliminate unreliable target" (FIG. 3, S11)
Rationale: Sano’s elimination of targets that are not map-consistent teaches the handling of objects temporarily present in the periphery, which are de-prioritized to ensure stable localization.
a step of acquiring second sensor data
See at least: "vehicle sensor group 5 includes ..." ([0025]); "Each sensor 51 to 57 is connected to the processing unit 3 and is configured to sequentially output various detection results to the processing unit 3." ([0025])
Rationale: Collecting outputs from internal vehicle sensors teaches a step of acquiring second sensor data.
including an acceleration
See at least: "an acceleration sensor 56 ..." ([0025])
Rationale: The second sensor data includes an acceleration measured by the acceleration sensor 56.
and an angular velocity
See at least: "other sensors, such as a yaw rate sensor" ([0025]); "inertia measurement method using a gyroscope" ([0025])
Rationale: Yaw rate and gyroscope measurements teach an angular velocity.
of the autonomous moving body;
See at least: "moved amount of the vehicle V" ([0025], [0028])
Rationale: These sensors detect the motion state of the autonomous moving body (vehicle).
a step of estimating a partial parameter
See at least: "The moved amount estimation unit 32 detects an odometry which is a moved amount of the vehicle V ..." ([0028])
Rationale: Sano expressly teaches intermediate estimation within the localization pipeline, including odometry/moved-amount estimation and line-information extraction prior to final self-position estimation. In the combined Claim 8 method established by Sano, Roumeliotis, and Zhang, these intermediate estimated quantities support the recited step of estimating a partial parameter, with the explicit six-degrees-of-freedom framework and shape/disposition-based partial-parameter content supplied by the combination.
representing a position
See at least: "X coordinate" / "Y coordinate" ([0054])
Rationale: The parameters include coordinates representing a position.
and a posture
See at least: "azimuth angle (yaw angle θ)" ([0054])
Rationale: Yaw represents the heading or posture of the moving body.
of the autonomous moving body
See at least: "position and an attitude angle ... of the vehicle V" ([0054])
Rationale: These represent the state of the autonomous moving body.
in a three-dimensional coordinate system,
See at least: "The relative position detected by the target position detection unit 31 is a position in a vehicle coordinate system." ([0027]); "target actually having a three-dimensional structure such as curbs ..." ([0024])
Rationale: Sano teaches operating in a three-dimensional coordinate system (vehicle coordinate system) and recognizing 3D structures.
based on known information,
See at least: “map information 41 …” ([0024]); “previously set in the…unit.” ([0027])
Rationale: The pre-stored map information is the known information.
in which information
See at least: "the storage unit 4 … store map information 41" ([0024])
including a position
See at least: "map information 41 including position information on targets" ([0024])
Rationale: The map stores information including a position for roadway targets.
in the driving area
See at least: "target present on a road or around the road" (Abstract)
Rationale: The map information covers the driving area (the road).
of a mark
See at least: "targets (landmark) recorded in the map information 41" ([0024])
Rationale: The claimed mark is taught as a landmark recorded in the map information.
that is one fixed object
See at least: "curbs and white lines" ([0024])
Rationale: A curb is one fixed object.
selected in advance
See at least: "map information 41 … previously stored" ([0024]); "estimates a self-position … by comparing the selected target position data with the map information …" ([0029])
Rationale: Sano teaches matching detected data against a pre-stored map. A PHOSITA would understand that the map landmarks are selected in advance and recorded for later localization matching.
from a plurality of fixed objects
See at least: "map information 41 … curbs and white lines … median strips" ([0021], [0024])
Rationale: The map records marks selected from a plurality of fixed objects.
is recorded,
See at least: "The targets…recorded in the map information..." ([0024])
Rationale: The landmark information is recorded in the storage unit.
and on the first sensor data;
See at least: "comparing the selected target position data with the map information …" ([0029])
Rationale: Estimating the state is performed based on the first sensor data (target detections) matched against the map.
and a step of estimating a self-position of the autonomous moving body
See at least: "The self-position estimation unit 36 estimates a self-position which is a current position of the vehicle V …" ([0029])
Rationale: This teaches a step of estimating a self-position of the autonomous moving body.
based on the first sensor data,
See at least: "detects … on the basis of … LRFs … and … cameras" ([0027])
Rationale: The final estimate is based on the first sensor data (LRF/camera).
the second sensor data,
See at least: "vehicle sensor group 5 … output various detection results" ([0025])
Rationale: The estimate is based on vehicle sensors, i.e., the second sensor data.
and the partial parameter,
See at least: "stores a position where the relative position of the target … is moved by the moved amount … as target position data … compare … with map information" ([0028]-[0029])
Rationale: Sano teaches downstream self-position estimation using intermediate estimated quantities (e.g., moved amount / processed target position data) together with sensor and map information. In the combined Claim 8 method, those intermediate quantities correspond to the recited partial parameter input used in the self-position estimation step, rather than a Sano-only explicit disclosure of the exact claimed partial parameter architecture.
Claim Limitations Not Explicitly Disclosed by Sano
Sano does not explicitly teach the following claim limitations:
that is a part of six degrees-of-freedom parameters
represented by the six degrees-of-freedom parameters
wherein the mark is a plurality of landmarks attached to a part of the fixed object,
and in the step of acquiring first sensor data, estimating a position in the first axis direction extending in the horizontal direction and the posture represented by the rotation around the third axis extending in the vertical direction, among the six degrees-of-freedom parameters, based on a shape obtained by connecting the plurality of landmarks extracted from the first sensor data and a shape obtained from a disposition of the landmarks recorded in the known information, as the partial parameter.
Disclosure by Roumeliotis
Roumeliotis teaches:
that is a part of six degrees-of-freedom parameters / represented by the six degrees-of-freedom parameters
See at least: "track the six-degrees-of-freedom (d.o.f.) position and orientation (pose) of a sensing platform." ([0003]); "Estimator 22 … process image data 14 and IMU data 18 to compute state estimates for the degrees of freedom of VINS 10 …" ([0031])
Rationale: Roumeliotis expressly teaches estimation within a six degrees-of-freedom parameters framework.
Motivation to Combine Sano and Roumeliotis
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano and Roumeliotis before them, to implement Sano’s roadway-landmark-based localization within Roumeliotis’s explicit six-degrees-of-freedom pose framework. Sano teaches map-based localization using vehicle and environmental sensors, while Roumeliotis teaches an express 6-DoF state representation. A PHOSITA would have been motivated to combine them to enable the autonomous moving body to navigate more complex 3D environments where pitch and roll are relevant, resulting in a more robust and accurate 3D self-position estimate.
Disclosure by Zhang
Zhang teaches:
wherein the mark is a plurality of landmarks attached to a part of the fixed object
See at least: "edgels within a threshold distance from each other may be automatically clustered on the map" ([0119]); "edgels at the endpoints of the lane line segment" ([0116]); "Landmark Map" (FIG. 2)
Rationale: Zhang teaches multiple mapped feature points (edgels) grouped on a common fixed roadway feature (e.g., a lane line segment), and Sano teaches fixed roadway landmarks used for localization. While Zhang does not verbatim recite “attached,” a PHOSITA would have recognized that representing multiple local feature points on a single fixed road feature is an obvious implementation equivalent of a plurality of landmarks attached to a part of the fixed object in a map-based vehicle localization method.
and in the step of acquiring first sensor data, estimating a position in the first axis direction extending in the horizontal direction and the posture represented by the rotation around the third axis extending in the vertical direction, among the six degrees-of-freedom parameters, based on a shape obtained by connecting the plurality of landmarks extracted from the first sensor data
See at least: "edgels at the endpoints of the lane line segment... provide constraints on both dimensions (e.g. x and y directions)" ([0116]); "analyze the image frame to identify a plurality of edge pixels" ([0113]); "line geometry is computed for certain groups of edgels, such as line segments in 3D space connecting groups of edgels" ([0119])
Rationale: Zhang teaches image-frame feature extraction (edge pixels/edgels), connected-feature geometry (line segments connecting edgels), mapped structure/disposition information, and pose optimization using those correspondences, including x/y localization constraints from roadway features. In combination with Roumeliotis's explicit six-degrees-of-freedom pose framework and Sano's map-based self-position estimation method, a PHOSITA would have recognized this as teaching or rendering obvious the recited method limitation as written: estimating, within the first-sensor-data acquisition/processing sequence, a position in the first axis direction extending in the horizontal direction and the posture represented by the rotation around the third axis extending in the vertical direction, among the six degrees-of-freedom parameters, based on a shape obtained by connecting the plurality of landmarks extracted from the first sensor data, as the partial parameter.
and a shape obtained from a disposition of the landmarks
See at least: "the localization system determines structure information [disposition] for an edgel cluster … The determined structure information may be used to help guide correspondence search." ([0125])
Rationale: Zhang teaches utilizing geometric structure (shape) and structural disposition (structure information) of landmarks to guide the matching.
recorded in the known information,
See at least: "Landmark Map" (FIG. 2); "line geometry is computed … and stored as part of the map" ([0119]); "map information 41" ([0024]) (Sano)
Rationale: Zhang expressly teaches storing clustered edgels and line geometry in a map, while Sano expressly teaches pre-stored map information used as known information in a vehicle localization pipeline. Thus, in the combined Claim 8 method, the landmark disposition/shape information used for matching is recorded in the known information, with Zhang supplying the mapped feature-geometry recordation and Sano supplying the known-information map framework.
as the partial parameter.
See at least: "minimize a distance between the edgels and their corresponding edge pixels." ([0007])
Rationale: Zhang teaches shape/disposition-based correspondence and pose optimization that yields a constrained pose component from roadway-feature geometry, and Roumeliotis teaches a six-degrees-of-freedom estimation framework. In the combined Claim 8 method with Sano’s map-based localization pipeline, a PHOSITA would have used this shape-guided horizontal/yaw pose component as the partial parameter input to the later self-position estimation step; thus this limitation is disclosed or rendered obvious by the combination rather than by Zhang alone.
Motivation to Combine Sano, Roumeliotis, and Zhang
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano, Roumeliotis, and Zhang before them, to incorporate Zhang’s edgel-clustering and line-geometry matching into the combined Sano/Roumeliotis 6-DoF localization system. A PHOSITA would further have implemented the feature extraction and shape-derivation operations from Zhang as part of the first-sensor-data acquisition/processing step in the combined method, consistent with the claimed procedural phrasing and with standard localization pipeline design. Zhang teaches an analogous way of deriving geometric constraints from roadway features to optimize vehicle pose. Integrating these specific shape-based constraints into the map-matching pipeline would predictably improve the robustness of the partial-parameter estimation, yielding more accurate and physically realistic self-position results for an autonomous moving body.
Regarding Claim 9,
The combination of Sano, Roumeliotis, and Zhang establishes the self-position estimation device of Claim 1, which is the basis for Claim 9.
Disclosure by Sano
Sano teaches:
A program that causes a self-position estimation device to execute:
See at least: "The processing unit 3 is a microcontroller... and is configured to execute a program stored in the storage unit 4." ([0026])
Rationale: Sano teaches a microcontroller configured to execute a program that causes the self-position estimation device (established in Claim 1) to perform the recited steps of the method.
a step of acquiring first sensor data
See at least: "The target position detection unit 31 detects a relative position ... on the basis of a detection result of at least any one of the LRFs 101 and 102 and the cameras 201 and 202." ([0027])
Rationale: The program executes the process of gathering data from external sensors, which constitutes a step of acquiring first sensor data.
capable of detecting a fixed object
See at least: "targets (landmark) recorded in the map information 41 includes... white lines... structures, e.g. curbs, and the like" ([0024])
Rationale: The sensors utilized by the program are capable of detecting a fixed object such as curbs or white lines recorded in a map.
always present in a periphery of the autonomous moving body
See at least: "detects white lines 62 and 63... present at both sides of the vehicle V" ([0033])
Rationale: Sano teaches fixed roadway features (e.g., lane lines/curbs) in the surroundings/periphery of the vehicle for localization, which discloses or at least renders obvious the recited fixed-object peripheral context corresponding to objects always present in a periphery for map-based self-position estimation.
and a non-fixed object
See at least: "target present in the surroundings of the vehicle V" ([0027]); "target attribute estimation unit 37 estimates an attribute of the target..." ([0029])
Rationale: Sano teaches detecting surrounding targets and performing attribute/reliability estimation. A PHOSITA would understand that a sensor stream in a road environment necessarily includes transient non-fixed objects (e.g., other vehicles), which the program identifies and filters via reliability checks.
temporarily present in the periphery of the autonomous moving body;
See at least: "eliminate unreliable target" (FIG. 3, S11)
Rationale: The program's elimination of targets not present in the map or deemed "unreliable" renders obvious the handling of objects temporarily present in the periphery, which a PHOSITA would recognize as transient, non-map-consistent detections in a road-environment localization pipeline.
a step of acquiring second sensor data
See at least: "vehicle sensor group 5... sequentially output various detection results to the processing unit 3." ([0025])
Rationale: Monitoring internal motion and state data from the vehicle sensor group teaches a step of acquiring second sensor data.
including an acceleration
See at least: "an acceleration sensor 56..." ([0025])
Rationale: The program acquires second sensor data including an acceleration from the onboard sensors.
and an angular velocity
See at least: "inertia measurement method using a gyroscope... yaw rate sensor" ([0025])
Rationale: Measuring yaw rate or using a gyroscope provides a measurement of an angular velocity.
of the autonomous moving body;
See at least: "moved amount of the vehicle V" ([0028])
Rationale: These sensors track the dynamic motion and orientation of the autonomous moving body.
a step of estimating a partial parameter
See at least: "The moved amount estimation unit 32 detects an odometry which is a moved amount of the vehicle V..." ([0028])
Rationale: Sano teaches intermediate estimation within the localization pipeline, such as odometry/moved-amount, which supports a step of estimating a partial parameter prior to final self-position estimation.
representing a position
See at least: "X coordinate... Y coordinate" ([0054])
Rationale: The estimated partial parameters include coordinate values representing a position.
and a posture
See at least: "azimuth angle (yaw angle θ)" ([0054])
Rationale: The azimuth or yaw angle represents the posture or heading of the vehicle.
of the autonomous moving body
See at least: "self-position of the vehicle V" ([0020])
Rationale: These parameters collectively represent the estimated state of the autonomous moving body.
in a three-dimensional coordinate system,
See at least: "vehicle coordinate system... z-axis... targets actually having a three-dimensional structure" ([0027], [0024])
Rationale: The device operates and maps landmarks within a three-dimensional coordinate system.
based on known information,
See at least: “map information 41 …” ([0024]); “previously set in the…unit.” ([0027])
Rationale: The pre-stored map information is the known information.
in which information
See at least: "storage unit 4... store map information 41" ([0024])
Rationale: The storage unit holds the information used during the map-matching process.
including a position
See at least: "position information on targets" ([0024])
Rationale: The known information is recorded including a position for each landmark.
in the driving area
See at least: "targets present on a road or around the road" ([0024])
Rationale: The recorded landmarks are located in the driving area where the moving body operates.
of a mark
See at least: "targets (landmark) recorded in the map information" ([0024])
Rationale: Each landmark recorded in the map serves as the claimed mark.
that is one fixed object
See at least: "curbs and white lines" ([0024])
Rationale: Sano's landmarks are taught as one fixed object such as a curb.
selected in advance
See at least: "map information 41... previously stored" ([0024])
Rationale: Landmarks are selected in advance and stored in the map information 41 prior to operation.
from a plurality of fixed objects
See at least: "aggregate of linear information... curbs... white lines... median strips" ([0024], [0021])
Rationale: The map marks are chosen from a plurality of fixed objects present in the road environment.
is recorded,
See at least: "The targets…recorded in the map information..." ([0024])
Rationale: The landmark information is recorded in the storage unit.
and on the first sensor data;
See at least: "comparing the selected target position data with the map information" ([0029])
Rationale: The estimation is performed based on the first sensor data (detected targets) compared with the map.
and a step of estimating a self-position of the autonomous moving body
See at least: "estimates a self-position which is a current position of the vehicle V" ([0029])
Rationale: The unit 36 performs a step of estimating a self-position of the autonomous moving body.
based on the first sensor data,
See at least: "detects... relative position... on the basis of... LRFs... cameras" ([0027])
Rationale: The self-position estimation is derived based on the first sensor data acquired from surrounding sensors.
the second sensor data,
See at least: "vehicle sensor group 5... output various detection results" ([0025])
Rationale: The final estimation incorporates the outputs of the vehicle sensors, which is the second sensor data.
and the partial parameter,
See at least: “The moved amount estimation unit 32 detects an odometry which is a moved amount of the vehicle V …” ([0028]); “The target position storing unit 33 stores a position where the relative position of the target … is moved by the moved amount … as target position data …” ([0028]); “The self-position estimation unit 36 estimates a self-position … by comparing the selected target position data with the map information …” ([0029])
Rationale: Sano teaches downstream self-position estimation using intermediate estimated quantities (e.g., moved amount / processed target position data) together with sensor and map information. In the combined Claim 9 method, those intermediate quantities correspond to the recited partial parameter input used in the self-position estimation step.
Claim limitations Not Explicitly Disclosed by Sano
Sano does not explicitly disclose:
that is a part of six degrees-of-freedom parameters
represented by the six degrees-of-freedom parameters
in the step of acquiring first sensor data, estimating a position in the first axis direction extending in the horizontal direction and the posture represented by the rotation around the third axis extending in the vertical direction, among the six degrees-of-freedom parameters, based on a shape obtained by connecting the plurality of landmarks extracted from the first sensor data and a shape obtained from a disposition of the landmarks recorded in the known information, as the partial parameter.
Disclosure by Roumeliotis
Roumeliotis discloses:
that is a part of six degrees-of-freedom parameters / represented by the six degrees-of-freedom parameters
See at least: "track the six-degrees-of-freedom (d.o.f.) position and orientation (pose) of a sensing platform." ([0003]); "Estimator 22... compute state estimates for the degrees of freedom of VINS 10" ([0031])
Rationale: Roumeliotis expressly teaches estimation within a six degrees-of-freedom parameters framework.
Motivation to Combine Sano and Roumeliotis
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano and Roumeliotis before them, to implement Sano’s roadway-landmark-based localization within Roumeliotis’s explicit six-degrees-of-freedom pose framework. Sano teaches map-based localization using vehicle and environmental sensors, while Roumeliotis teaches an express 6-DoF state representation. A PHOSITA would combine them to enable the autonomous moving body to navigate more complex 3D environments where pitch and roll are relevant, resulting in a more robust and accurate 3D self-position estimation program.
Disclosure by Zhang
Zhang discloses:
in the step of acquiring first sensor data,
See at least: "analyze the image frame to identify a plurality of edge pixels" ([0113])
Rationale: Zhang teaches analyzing image data (which corresponds to the first sensor data) to identify landmarks as an initial processing step, thereby performing this action in the step of acquiring first sensor data.
estimating a position in the first axis direction extending in the horizontal direction
See at least: "edgels at the endpoints of the lane line segment... provide constraints on both dimensions (e.g. x and y directions)" ([0116])
Rationale: Zhang teaches determining localization constraints along specific dimensions (X and Y), which enables estimating a position in the first axis direction extending in the horizontal direction.
and the posture represented by the rotation around the third axis extending in the vertical direction,
See at least: "A pose of the vehicle is optimized … rigid 3D transform T" ([0126])
Rationale: Zhang optimizes the vehicle pose within a 3D transform structure. In combination with Roumeliotis's orientation framework, this teaches estimating ... the posture represented by the rotation around the third axis extending in the vertical direction (yaw).
among the six degrees-of-freedom parameters,
See at least: "track the six-degrees-of-freedom (d.o.f.) position and orientation (pose)" ([0003]) (Roumeliotis)
Rationale: The estimation occurs within the 6-DoF framework established by the combination, ensuring the parameters are among the six degrees-of-freedom parameters.
based on a shape
See at least: "line geometry is computed for certain groups of edgels" ([0119]); "clusters may be aligned along a line" ([0125])
Rationale: Zhang teaches utilizing geometric structure (shape) for matching.
obtained by connecting the plurality of landmarks
See at least: "line segments in 3D space connecting groups of edgels" ([0119])
Rationale: Zhang teaches forming line segments obtained by connecting the plurality of landmarks (edgels).
extracted from the first sensor data
See at least: "identify a plurality of edge pixels" ([0113])
Rationale: The landmarks are identified directly from image data (first sensor data), thus they are extracted from the first sensor data.
and a shape obtained from a disposition of the landmarks
See at least: "the localization system determines structure information [disposition] for an edgel cluster... used to help guide correspondence search." ([0125])
Rationale: Zhang utilizes the pre-recorded structural disposition (structure information) of landmarks to guide the matching process.
recorded in the known information,
See at least: "Landmark Map" (FIG. 2); "line geometry is computed … and stored as part of the map" ([0119])
Rationale: Zhang stores clustered edgels and line geometry in a map (known information), thus the landmark disposition is recorded in the known information.
as the partial parameter.
See at least: "minimize a distance between the edgels and their corresponding edge pixels." ([0007])
Rationale: Zhang teaches a shape/disposition-based pose component from roadway-feature matching, and in the combined Sano/Roumeliotis/Zhang program-executed localization pipeline, a PHOSITA would have used that horizontal/yaw pose component as the partial parameter input to the later self-position estimation step.
Motivation to Combine Sano, Roumeliotis, and Zhang
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano, Roumeliotis, and Zhang before them, to incorporate Zhang’s line-geometry matching into the combined Sano/Roumeliotis 6-DoF localization system. A PHOSITA would further have implemented the feature extraction and shape-derivation operations from Zhang as part of the first-sensor-data acquisition/processing step in the combined method, consistent with the claimed procedural phrasing and with standard localization pipeline design. This integration would predictably improve the robustness of the partial-parameter estimation by providing high-precision geometric constraints, yielding more accurate results for the program that causes the self-position estimation device to execute the claimed steps.
Regarding Claim 10,
Disclosure by Sano
Sano discloses:
A self-position estimation device
See at least: “self-position estimation device” (Abstract); “SELF POSITION-ESTIMATION UNIT 36” (FIG. 1 architecture)
Rationale: The reference expressly discloses a device architecture for self-position estimation, including a dedicated self-position estimation unit.
that estimates a self-position
See at least: “thereby estimating a self-position which is a current position of the moving object.” (Abstract); “The self-position estimation unit 36 estimates a self-position which is a current position of the vehicle V …” ([0029])
Rationale: Sano expressly discloses estimating a self-position/current position.
in a driving area
See at least: “target present on a road or around the road” (Abstract / [0029])
Rationale: The “road or around the road” environment where targets are detected corresponds to the claimed driving area.
of an autonomous moving body,
See at least: “The moving object is not limited to the vehicle V … but includes … other moving objects.” ([0064]); “automatic driving control” ([0026])
Rationale: The disclosure of a vehicle or moving object utilizing automatic driving control maps to the claimed autonomous moving body.
the self-position estimation device comprising:
See at least: “The processing unit 3 includes: a target position detection unit 31, a moved amount estimation unit 32... a self-position estimation unit 36...” ([0026])
Rationale: This is an express multi-unit device composition disclosure for the self-position estimation device.
a first acquisition unit
See at least: “surrounding sensor group 1” / “TARGET POSITION DETECTION UNIT 31” (FIG. 1 / [0021])
Rationale: The surrounding sensor group and target position detection unit function as a first acquisition unit.
that acquires first sensor data
See at least: “The target position detection unit 31 detects a relative position ... on the basis of a detection result of at least any one of the LRFs 101 and 102 and the cameras 201 and 202.” ([0027])
Rationale: The detection results from the external-facing cameras and LRFs constitute the first sensor data.
capable of detecting a fixed object
See at least: “curb 61 can be detected …” ([0030]); “detects white lines 62 and 63 … present at both sides of the vehicle V” ([0033]); “targets present on a road or around the road” (Abstract / [0029])
Rationale: Curbs and white lines are stationary road features, satisfying the fixed object limitation.
always present in a periphery of the autonomous moving body
See at least: “detects white lines 62 and 63 … present at both sides of the vehicle V” ([0033])
Rationale: Sano teaches fixed roadway features (e.g., lane lines/curbs) in the surroundings/periphery of the vehicle for localization, which discloses or at least renders obvious the recited fixed-object peripheral context corresponding to objects always present in a periphery for map-based self-position estimation.
and a non-fixed object
See at least: “target present in the surroundings of the vehicle V” ([0027]); “target attribute estimation unit 37 estimates an attribute of the target …” ([0029])
Rationale: Sano expressly detects general surrounding targets and performs attribute estimation/reliability filtering for localization. In a road-environment LRF/camera sensing stream, a PHOSITA would recognize that the detected targets necessarily include both fixed landmarks and transient non-fixed objects (e.g., vehicles or pedestrians); Sano's disclosure of filtering "unreliable" targets renders obvious the detection of non-fixed objects.
temporarily present in the periphery of the autonomous moving body;
See at least: “eliminate unreliable target” (FIG. 3, S11)
Rationale: Sano’s reliability-based elimination of targets in a road-scene localization pipeline predictably de-prioritizes detections that are not map-consistent. A PHOSITA would recognize that such detections include objects temporarily present in the periphery, rendering obvious the claimed limitation.
a second acquisition unit
See at least: “vehicle sensor group 5 includes …” ([0025])
Rationale: The vehicle sensor group and its outputs satisfy the second acquisition unit limitation.
that acquires second sensor data
See at least: “Each sensor 51 to 57 is connected to the processing unit 3 and is configured to sequentially output various detection results to the processing unit 3.” ([0025])
Rationale: The output of the vehicle sensors constitutes the second sensor data.
including an acceleration
See at least: “an acceleration sensor 56 …” ([0025])
Rationale: The system acquires data including an acceleration.
and an angular velocity
See at least: “other sensors, such as a yaw rate sensor” ([0025]); “inertia measurement method using a gyroscope” ([0025])
Rationale: Yaw rate/gyroscope measurements are measurements of angular velocity.
of the autonomous moving body;
See at least: “moved amount of the vehicle V” ([0025], [0028])
Rationale: Acceleration and angular velocity measurements are motion data of the autonomous moving body (vehicle).
a first estimation unit
See at least: “moved amount estimation unit 32” / “straight line extracting unit 34” ([0026])
Rationale: Sano’s moved amount estimation unit and straight line extracting unit collectively perform intermediate estimation/processing prior to the final self-position estimation unit, and thus correspond to the claimed first estimation unit in the combined architecture.
that estimates a partial parameter
See at least: “The moved amount estimation unit 32 detects an odometry which is a moved amount of the vehicle V …” ([0028])
Rationale: Sano’s odometry/moved amount and line-based intermediate processing are used before the final self-position estimation and therefore constitute intermediate estimated quantities, rendering obvious the claimed partial parameter. Sano supports the content and pipeline role of this intermediate estimate, while the explicit claimed first-estimation-unit partial-parameter architecture (including the six-degrees-of-freedom framing and landmark-based partial-parameter formulation) is supplied by the combination with Roumeliotis and Zhang.
representing a position
See at least: “X coordinate” / “Y coordinate” ([0054])
Rationale: The intermediate quantities include coordinates representing a position.
and a posture
See at least: “azimuth angle (yaw angle θ)” ([0054])
Rationale: Yaw represents the posture (heading) of the vehicle.
of the autonomous moving body
See at least: “position and an attitude angle ... of the vehicle V” ([0054])
Rationale: These parameters describe the state of the autonomous moving body.
in a three-dimensional coordinate system,
See at least: “The relative position detected by the target position detection unit 31 is a position in a vehicle coordinate system.” ([0027]); “a target actually having a three-dimensional structure such as curbs …” ([0024])
Rationale: Sano operates in a 3D environment and recognizes 3D structures, supporting the claimed three-dimensional coordinate system context.
based on known information,
See at least: “map information 41 …” ([0024]); “previously set in the…unit.” ([0027])
Rationale: The pre-stored map information is the known information.
in which information
See at least: “the storage unit 4 … store map information 41” ([0024])
Rationale: The storage unit holds the information in which landmarks are recorded.
including a position
See at least: “map information 41 including position information on targets” ([0024])
Rationale: The map records including a position for target objects.
in the driving area
See at least: “target present on a road or around the road” (Abstract)
Rationale: The map data covers the driving area (road).
of a mark
See at least: “targets (landmark) recorded in the map information 41” ([0024])
Rationale: A landmark in Sano's map functions as the claimed mark.
that is one fixed object
See at least: “curbs and white lines” ([0024])
Rationale: A curb or white line is one fixed object.
selected in advance
See at least: “map information 41 … previously stored” ([0024]); “The self-position estimation unit 36 estimates a self-position … by comparing the selected target position data with the map information …” ([0029])
Rationale: Sano’s localization operates by comparing runtime-detected target data against map information pre-stored before localization. A PHOSITA would understand that fixed map landmarks are selected in advance.
from a plurality of fixed objects
See at least: “map information 41 … curbs and white lines … median strips” ([0021], [0024])
Rationale: The map includes information selected from a plurality of fixed objects.
is recorded,
See at least: "The targets…recorded in the map information..." ([0024])
Rationale: The landmark information is recorded in the storage unit.
and on the first sensor data;
See at least: “comparing the selected target position data with the map information …” ([0029])
Rationale: The estimation is based on target detection derived from first sensor data (cameras/LRFs) matched to the map.
and a second estimation unit
See at least: “self-position estimation unit 36” ([0026])
Rationale: Unit 36 functions as the second estimation unit.
that estimates a self-position of the autonomous moving body
See at least: “The self-position estimation unit 36 estimates a self-position which is a current position of the vehicle V …” ([0029])
Rationale: This unit estimates a self-position of the autonomous moving body.
represented by the six degrees-of-freedom parameters
See at least: “estimates a position and an attitude angle of total three degrees of freedom composed of a position in the east-west direction … (X coordinate), a position in the north-south direction … (Y coordinate), and an azimuth angle (yaw angle θ)” ([0054])
Rationale: Sano expressly discloses position and posture for the vehicle, and the disclosed 3-DoF planar pose is a subset (part) of a 6-DoF pose representation as understood by a PHOSITA. The explicit framework for the state being represented by the six degrees-of-freedom parameters is supplied by Roumeliotis below.
based on the first sensor data,
See at least: “detects … on the basis of … LRFs … and … cameras” ([0027])
Rationale: The unit estimates the position based on the first sensor data.
the second sensor data,
See at least: “vehicle sensor group 5 … output various detection results” ([0025])
Rationale: The unit estimates the position based on the second sensor data.
and the partial parameter,
See at least: “target position storing unit 33 stores a position where the relative position of the target … is moved by the moved amount … as target position data” ([0028])
Rationale: Sano’s final estimate integrates first sensor data, second sensor data, and the partial parameter (moved amount).
Claim limitations Not Explicitly Disclosed by Sano
Sano does not explicitly disclose the following claim limitations:
that is a part of six degrees-of-freedom parameters
wherein the mark is a plurality of landmarks attached to a part of the fixed object
among the six degrees-of-freedom parameters
based on the plurality of landmarks extracted from the first sensor data and on the known information
as the partial parameter
Disclosure by Roumeliotis
Roumeliotis discloses:
that is a part of six degrees-of-freedom parameters represented by the six degrees-of-freedom parameters
See at least: "track the six-degrees-of-freedom (d.o.f.) position and orientation (pose) of a sensing platform." ([0003]); "Estimator 22... compute state estimates for the degrees of freedom of VINS 10" ([0031])
Rationale: Roumeliotis expressly teaches estimation within a six degrees-of-freedom parameters framework.
Motivation to Combine Sano and Roumeliotis
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano and Roumeliotis before them, to implement Sano’s roadway-landmark-based localization within Roumeliotis’s explicit six-degrees-of-freedom pose framework. Sano teaches map-based localization using environmental sensors, while Roumeliotis teaches an express 6-DoF state representation. A PHOSITA would combine them to enable the autonomous moving body to navigate environments where pitch and roll are relevant, resulting in a more robust and accurate self-position estimation in the combined device.
Disclosure by Zhang
Zhang discloses:
wherein the mark is a plurality of landmarks attached to a part of the fixed object
See at least: “edgels within a threshold distance from each other may be automatically clustered on the map” ([0119]); “edgels at the endpoints of the lane line segment” ([0116])
Rationale: Zhang teaches multiple mapped feature points (edgels) grouped on a common fixed roadway feature (e.g., a lane line segment). While not verbatim recitations, a PHOSITA would recognize that representing multiple local feature points on a single structure is an obvious implementation equivalent of a plurality of landmarks attached to a part of the fixed object.
the first estimation unit estimates a position in each of a first axis direction and a second axis direction extending in a horizontal direction
See at least: "edgels at the endpoints of the lane line segment... provide constraints on both dimensions (e.g. x and y directions)" ([0116])
Rationale: Zhang teaches determining localization constraints along specific dimensions (X and Y), which enables estimating a position in each of a first axis direction and a second axis direction extending in a horizontal direction.
and a posture represented by rotation around a third axis extending in a vertical direction,
See at least: "A pose of the vehicle is optimized … rigid 3D transform T" ([0126])
Rationale: Zhang optimizes the vehicle pose within a 3D transform structure. In combination with Roumeliotis's orientation framework, this teaches estimating ... the posture represented by rotation around a third axis extending in a vertical direction (yaw).
among the six degrees-of-freedom parameters,
See at least: "track the six-degrees-of-freedom (d.o.f.) position and orientation (pose)" ([0003]) (Roumeliotis)
Rationale: The estimation occurs within the 6-DoF framework established by the combination, ensuring the parameters are among the six degrees-of-freedom parameters.
based on the plurality of landmarks extracted from the first sensor data
See at least: “identify a plurality of edge pixels” ([0113]); “analyze the image frame to identify a plurality of edge pixels” ([0113])
Rationale: Zhang teaches identifying landmarks (edge pixels/edgels) from image data (first sensor data), providing the basis for the estimate based on the plurality of landmarks extracted from the first sensor data.
and on the known information,
See at least: "Landmark Map" (FIG. 2); "line geometry is computed … and stored as part of the map" ([0119])
Rationale: Zhang stores clustered edgels and line geometry in a map (known information), thus the estimation is performed on the known information.
as the partial parameter.
See at least: "minimize a distance between the edgels and their corresponding edge pixels." ([0007])
Rationale: Zhang teaches deriving a pose component from roadway-feature matching (minimizing the distance between mapped edgels and observed edge pixels). In the combined Sano/Roumeliotis/Zhang localization pipeline, a PHOSITA would have used that horizontal-position/yaw pose component as the partial parameter input to the later self-position estimation step.
Motivation to Combine Sano, Roumeliotis, and Zhang
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Sano, Roumeliotis, and Zhang before them, to incorporate Zhang’s line-geometry matching and edgel clustering into the combined Sano/Roumeliotis 6-DoF localization system. Zhang teaches an analogous way of deriving geometric constraints from roadway features to optimize vehicle pose. Integrating these specific shape-based constraints into the map-matching pipeline would predictably improve the robustness of the partial-parameter estimation by providing high-precision geometric constraints, yielding more accurate results for an autonomous moving body.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWABUSAYO ADEBANJO AWORUNSE whose telephone number is (571)272-4311. The examiner can normally be reached M - F (8:30AM - 5PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jelani Smith can be reached at (571) 270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUWABUSAYO ADEBANJO AWORUNSE/Examiner, Art Unit 3662
/JELANI A SMITH/Supervisory Patent Examiner, Art Unit 3662