DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the application filed on 07/19/2024.
Claims 1-20 are currently pending and have been examined.
Claims 1-20 are currently rejected.
This action is made NON-FINAL.
Specification
The abstract of the disclosure is objected to because of the use of an abbreviation (RMSE) without first establishing what the abbreviation stands for. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1-20 are directed to a system, a method, or a product, each of which is a statutory category of invention. (Step 1: YES)
The examiner has identified independent claim 1 as representative of the claimed invention for this analysis; claim 1 is similar to independent claims 10 and 18. Claim 1 recites the limitations of:
A system, comprising:
one or more processors in communication with one or more memories, one or more memories including instructions executable by the one or more processors to:
access, at a connected device of a plurality of connected devices, sensor measurements from a plurality of sensors onboard the connected device;
generate, at the connected device, an observation covariance matrix for the sensor measurements based a measured distance between each sensor and an observed location of an object;
determine, for each connected device of the plurality of connected devices, a local fused sensor measurement using the observation covariance matrix;
generate a localization covariance matrix using local fused sensor measurements associated with the plurality of connected devices and measured velocities of the plurality of connected devices with respect to a global coordinate system; and
determine, at an external computing platform in communication with the plurality of connected devices, a global fused sensor measurement indicating a location of the object using the localization covariance matrix.
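For context, the covariance-based fusion recited in these limitations corresponds in substance to the well-known inverse-covariance (information-form) weighting of independent estimates. The following sketch is illustrative only; it is not drawn from the application or any cited reference, and the function name and numeric values are hypothetical.

```python
import numpy as np

def fuse_estimates(measurements, covariances):
    """Fuse independent position estimates by inverse-covariance weighting.

    measurements: list of (d,) position vectors
    covariances:  list of (d, d) observation covariance matrices
    Returns the fused position and its fused covariance.
    """
    # Information form: sum the inverse covariance (information) matrices
    infos = [np.linalg.inv(P) for P in covariances]
    fused_cov = np.linalg.inv(sum(infos))
    # Each measurement is weighted by its information matrix
    fused = fused_cov @ sum(I @ z for I, z in zip(infos, measurements))
    return fused, fused_cov

# Two sensors observing the same object; the lower-variance sensor dominates
z1, P1 = np.array([10.0, 0.0]), np.diag([1.0, 1.0])
z2, P2 = np.array([12.0, 0.0]), np.diag([4.0, 4.0])
z_fused, P_fused = fuse_estimates([z1, z2], [P1, P2])
```

With these hypothetical inputs the weights are 1 and 1/4, giving a fused x-coordinate of 10.4 and a fused variance of 0.8, smaller than either input variance.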
These limitations, under their broadest reasonable interpretation, cover performance of the limitations as mental processes. Determining a global fused sensor value using matrix multiplication recites a concept that can be performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a concept performed in the human mind, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. (Step 2A-Prong 1: YES. The claims recite an abstract idea.)
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of a processor and a memory in claim 1, which amount to merely applying generic computer components to the recited abstract limitations. The computer hardware/software is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than instructions to apply the exception using a generic computer component. The additional elements of gathering data from sensors are insignificant extra-solution activity. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea and are recited at a high level of generality. Therefore, claims 1, 10, and 18 are directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application.)
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to apply the exception using a generic computer component, and mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are presented at a level of generality that does not preclude a person from performing the claimed steps mentally, or with pen and paper, using the same sensor data. Additionally, no practical application is performed upon completion of the mental process, such as controlling a vehicle according to the global fused sensor measurement. Accordingly, these additional elements do not change the outcome of the analysis, whether considered separately or as an ordered combination. Thus, claims 1, 10, and 18 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)
The dependent claims further define the abstract idea present in their respective independent claims 1, 10, and 18; they thus also correspond to Mental Processes and are abstract for the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or that are sufficient to amount to significantly more than the judicial exception, whether considered individually or as an ordered combination. Therefore, the dependent claims are also directed to an abstract idea. Thus, claims 1-20 are not patent eligible.
The examiner suggests amending the claims to incorporate subject matter expressing a practical application or demonstrating that the claims cannot practically be performed in the human mind.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 3, 6, 12, 15, and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claims 2, 3, 15, and 16 recite the limitation “a plurality of sensors”. Claim 1, upon which claims 2 and 3 depend, and claim 10, upon which claims 15 and 16 depend, also recite “a plurality of sensors”. It is unclear whether the inventor is referring to the same “plurality of sensors” or to a different plurality of sensors. To overcome this rejection, the examiner suggests changing the second recitation of “a plurality of sensors” to “the plurality of sensors”. For purposes of examination, the examiner interprets the recitations as referring to the same “plurality of sensors”.
Claims 6 and 12 recite the limitation “a local Joint Probability Data Association Filter” in both lines 4 and 9. It is unclear whether the inventor is referring to the same “local Joint Probability Data Association Filter” or to two different filters. To overcome this rejection, the examiner suggests changing the second recitation of “a local Joint Probability Data Association Filter” to “the local Joint Probability Data Association Filter”. For purposes of examination, the examiner interprets the recitations as referring to the same “local Joint Probability Data Association Filter”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4-5, 7, 9-11, 13, 15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kulkarni et al. (US 2023/0324543), hereinafter Kulkarni, in view of Takabayashi et al. (US 2018/0182245), hereinafter Takabayashi.
Regarding claim 1:
Kulkarni teaches:
A system (fig. 12, computer system 1200), comprising:
one or more processors (fig. 12, processor 1210) in communication with one or more memories (fig. 12, memory 1260), one or more memories including instructions executable by the one or more processors (The memory 1260 of the mobile computing system 1200 also can comprise software elements (not shown in FIG. 12), including an operating system, device drivers, executable libraries, and/or other code, such as one or more application programs, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein [0119]) to:
access, at a connected device (fig. 2, position estimation system 200) of a plurality of connected devices (many vehicles may each perform the functionality at block 305 simultaneously and/or at different times while in the geographical region [0042]), sensor measurements from a plurality of sensors onboard the connected device (The position estimation system 200 comprises sensors 205 including one or more cameras 210, an inertial measurement unit (IMU) 220, a GNSS unit 230, and radar 235 [0031]);
generate, at the connected device, an observation covariance matrix (This may be determined, for example, based on the covariance matrix of the pose [0061]) for the sensor measurements based a measured distance between each sensor and an observed location of an object (the determination that the confidence metric of the 6-DOF position estimate of the vehicle exceeds the confidence metric threshold level may be based on a covariance matrix of the 6-DOF position estimate (e.g., each 6-DOF in ego pose state has standard deviation which is less than a particular threshold) [0095]);
determine, for each connected device of the plurality of connected devices, a local fused sensor measurement (The sensor positioning unit 270 may comprise a module (implemented in software and/or hardware) that is configured to fuse data from the sensors 205 to determine a position of the vehicle [0037]) using the observation covariance matrix (the determination that the confidence metric of the 6-DOF position estimate of the vehicle exceeds the confidence metric threshold level may be based on a covariance matrix of the 6-DOF position estimate (e.g., each 6-DOF in ego pose state has standard deviation which is less than a particular threshold) [0095]);
generate a [localization covariance matrix] using local fused sensor measurements (the server can then process the aggregated data to determine off-line trajectory and map optimization, as indicated at block 380 [0050]) associated with the plurality of connected devices (These operations may include performing the functionality illustrated at block 370, where the server unifies information received from multiple vehicles. As noted, the server may be communicatively coupled with many vehicles in a particular geographical region. Using the functionality illustrated in block 305, each of these vehicles may publish information to the server [0048]) and measured velocities of the plurality of connected devices with respect to a global coordinate system (Doppler check 520 and ego speed check 530 may be performed on a per-frame basis, which can filter out unwanted or noisy points. As a person of ordinary skill in the art may appreciate, Doppler check 520 may perform better filtering of moving objects at higher speeds, whereas ego speed check 530 may perform better filtering of moving objects at lower speeds. The optional metadata check 540 may comprise a check of certain metrics (e.g., signal to noise ratio (SNR), radar cross-section (RCS), specific indicators on multipath targets, and the like) to determine a reliability of the radar data and filter out data failing to meet a minimum threshold [0065]); and
determine, at an external computing platform in communication with the plurality of connected devices (the functionality illustrated in block 310 may be performed by a server (e.g., a cloud/edge server) [0042]), a global fused sensor measurement indicating a location of the object (Location determination and/or other determinations based on wireless communication may be provided in the processor(s) 1210 and/or wireless communication interface 1230 (discussed below) [0112])
Kulkarni does not explicitly teach the following limitations; however, Takabayashi teaches:
generate a localization covariance matrix (estimated error covariance matrixes of the positions and speeds (S103) [0032]) using local fused sensor measurements associated with the plurality of connected devices (a communication device that receives GPS positions of the surrounding vehicles and the pedestrians [0026]) and measured velocities of the plurality of connected devices with respect to a global coordinate system (the route prediction system 100 includes the observation unit 1 to observe positions and speeds of a host vehicle and vehicles surrounding the host vehicle [0063]); and
using the localization covariance matrix (likelihood calculation unit 4 calculates a hypothesis likelihood on the basis of the estimated value and the estimated error covariance matrix outputted from the tracking processing unit 6 [0099]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kulkarni to include the teachings of Takabayashi, with a reasonable expectation of success. Kulkarni teaches using covariance matrices to fuse sensor data on a vehicle to obtain the most accurate data, and then passing that data along to a server to be aggregated with data from other vehicles to determine the most accurate global positioning data with which to update the map. Kulkarni does not explicitly teach using a second covariance matrix for aggregating the data from multiple vehicles to achieve the global data. However, since Kulkarni already benefits from the use of a covariance matrix, it would have been obvious to apply a second covariance matrix as claimed in light of the teachings of Takabayashi, which uses a covariance matrix for aggregating the data from multiple vehicles. Takabayashi also teaches the benefits of “an observation unit to observe a position of a host vehicle and positions and speeds of vehicles surrounding the host vehicle, a vehicle detection unit to detect the host vehicle and at least two of the surrounding vehicles having collision possibilities on the basis of observation results observed by the observation unit, a hypothesis generation unit to generate plural hypotheses for the at least two of the surrounding vehicles detected by the vehicle detection unit to avoid collision, a likelihood calculation unit to calculate a likelihood indicating probability of occurrence of each of the plural hypotheses generated by the hypothesis generation unit, and a predicted route analysis unit to analyze, on the basis of the likelihood calculated by the likelihood calculation unit, predicted routes of the at least two of the surrounding vehicles, and output the analysis result [Takabayashi, 0010]”.
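The two-stage use of covariance matrices discussed in this rationale (a per-vehicle observation covariance for local fusion, then a second covariance for server-side aggregation across vehicles) can be sketched as follows. This is a minimal illustration under the assumption that both stages use inverse-covariance weighting; it is not taken from Kulkarni or Takabayashi, and all names and values are hypothetical.

```python
import numpy as np

def inv_cov_fuse(estimates):
    """Fuse a list of (estimate, covariance) pairs by inverse-covariance weighting."""
    pairs = [(np.linalg.inv(P), z) for z, P in estimates]
    fused_cov = np.linalg.inv(sum(I for I, _ in pairs))
    fused = fused_cov @ sum(I @ z for I, z in pairs)
    return fused, fused_cov

# Stage 1 (on-vehicle): fuse two onboard sensor measurements of the same
# object using their observation covariances.
vehicle_a = inv_cov_fuse([(np.array([5.0]), np.array([[2.0]])),
                          (np.array([5.4]), np.array([[2.0]]))])

# Stage 2 (server): fuse the per-vehicle local results, each carried with a
# second, vehicle-level covariance, into one global estimate.
vehicle_b = (np.array([5.6]), np.array([[1.0]]))
global_est, global_cov = inv_cov_fuse([vehicle_a, vehicle_b])
```

With these hypothetical numbers, vehicle A's local fusion yields 5.2 with variance 1.0, and the server-side fusion with vehicle B yields a global estimate of 5.4 with variance 0.5.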
Regarding claim 2:
Kulkarni in view of Takabayashi teaches all the limitations of claim 1, upon which this claim depends.
Kulkarni further teaches:
wherein the connected device is a connected autonomous vehicle (it may be provided to autonomous driving, ADAS, and/or other systems of the vehicle 110 [0039]) having a plurality of sensors operable for generating the sensor measurements (sensors 205 may include one or more additional or alternative sensors (e.g., lidar, sonar, etc.) [0031]).
Regarding Claim 4:
Kulkarni in view of Takabayashi teaches all the limitations of claim 1, upon which this claim depends.
Kulkarni further teaches:
the one or more memories further including instructions executable by the one or more processors (The memory 1260 of the mobile computing system 1200 also can comprise software elements (not shown in FIG. 12), including an operating system, device drivers, executable libraries, and/or other code, such as one or more application programs, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein [0119]) to:
transmit, by the connected device and to the external computing platform, a data packet having an instance of locally-fused observation data (The switch 365 may comprise a logical block that publishes the metadata, processed camera data, and processed radar data to the server (as shown at arrow 335) if the publish flag 330 is ON (i.e., activated [0046]), the locally-fused observation data including:
a local fused object location of the object relative to the global coordinate system (At block 373, the server may then aggregate the data, storing data published by multiple vehicles over window of time. The server may continue to aggregate this data until receiving an update trigger, as shown by arrow 375. Depending on desired functionality, the trigger may cause the server to aggregate data across a periodic window of time and/or until a certain event occurs. The map update trigger may therefore comprise a periodic trigger (e.g., once every six hours, once every 12 hours, once every day, etc.) and/or an event-based trigger (e.g., based on a determination that the quality of an HD map layer is below a threshold and needs to be updated, based on input received from one or more vehicles, etc.) [0049]);
Takabayashi further teaches:
a corrected localization covariance matrix (Smoothed error covariance matrix of target tgti at sampling time k [0058] P.sub.p,k+n.sup.(tgti): Predicted error covariance matrix of target tgti after n steps at sampling time k [0058]) for the object as observed by the connected device which incorporates the observation covariance matrix and the localization covariance matrix (estimated error covariance matrixes of the positions and speeds (S103) [0032]);
a sensor platform position associated with the connected device relative to the global coordinate system (The observation unit 1 measures a position of a host vehicle and positions and speeds of surrounding vehicles and pedestrians using, for example, sensors, such as a millimeter wave radar, a laser radar, an optical camera, and an infrared camera, and a communication device that receives GPS positions of the surrounding vehicles and the pedestrians [0026]); and
the localization covariance matrix for the connected device (in the calculation of the hypothesis likelihood, it is also allowed, with regard to the predicted error covariance matrix after n steps, to set an overlap of the predicted error distributions of the vehicles as an index, and use the reciprocal of the magnitude of the overlap. For example, the probability distributions of the vehicles i and j are expressed as in FIG. 7 with the horizontal axis as a position and the vertical axis as a probability density function based on error distributions [0061]).
Regarding Claim 5:
Kulkarni in view of Takabayashi teaches all the limitations of claim 1, upon which this claim depends.
Kulkarni further teaches:
the memory further including instructions executable by the processor to:
associate, at the connected device, two or more instances of the sensor measurements for the object with one another (The sensor positioning unit 270 may comprise a module (implemented in software and/or hardware) that is configured to fuse data from the sensors 205 to determine a position of the vehicle [0037]); and
fuse, at the connected device, the two or more instances of the sensor measurements for the object into the local fused sensor measurement using the observation covariance matrix (the determination that the confidence metric of the 6-DOF position estimate of the vehicle exceeds the confidence metric threshold level may be based on a covariance matrix of the 6-DOF position estimate (e.g., each 6-DOF in ego pose state has standard deviation which is less than a particular threshold) [0095]).
Regarding Claim 7:
Kulkarni in view of Takabayashi teaches all the limitations of claim 1, upon which this claim depends.
Kulkarni further teaches:
the memory further including instructions executable by the processor to:
associate, at the external computing platform, two or more instances of locally-fused observation data for the object with one another (performing the functionality illustrated at block 370, where the server unifies information received from multiple vehicles. As noted, the server may be communicatively coupled with many vehicles in a particular geographical region. Using the functionality illustrated in block 305, each of these vehicles may publish information to the server. However, the sensor types and specifications may differ between vehicles, resulting in sensor data having different coordinate information, scale, format, etc. Thus, the unification performed at block 370 may comprise unifying this data to a common coordinates system, scale, format, etc. [0048]); and
fuse, at the external computing platform, the two or more instances of the locally-fused observation data for the object into the local fused sensor measurement (At block 373, the server may then aggregate the data, storing data published by multiple vehicles over window of time. The server may continue to aggregate this data until receiving an update trigger, as shown by arrow 375. Depending on desired functionality, the trigger may cause the server to aggregate data across a periodic window of time and/or until a certain event occurs. The map update trigger may therefore comprise a periodic trigger (e.g., once every six hours, once every 12 hours, once every day, etc.) and/or an event-based trigger (e.g., based on a determination that the quality of an HD map layer is below a threshold and needs to be updated, based on input received from one or more vehicles, etc.) [0049])
Takabayashi further teaches:
using the localization covariance matrix (estimated error covariance matrixes of the positions and speeds (S103) [0032]).
Regarding claim 9:
Kulkarni in view of Takabayashi teaches all the limitations of claim 1, upon which this claim depends.
Takabayashi further teaches:
the localization covariance matrix (likelihood calculation unit 4 calculates a hypothesis likelihood on the basis of the estimated value and the estimated error covariance matrix outputted from the tracking processing unit 6 [0099])
Kulkarni further teaches:
incorporating the observation covariance matrix (the determination that the confidence metric of the 6-DOF position estimate of the vehicle exceeds the confidence metric threshold level may be based on a covariance matrix of the 6-DOF position estimate (e.g., each 6-DOF in ego pose state has standard deviation which is less than a particular threshold) [0095]).
Regarding claim 10:
Kulkarni teaches:
A method (FIG. 3 is a high-level block diagram of a method of obtaining radar and camera data for radar and camera map layers of an HD map for a geographical region [0042]), comprising:
accessing at a connected device (fig. 2, position estimation system 200) of a plurality of connected devices (many vehicles may each perform the functionality at block 305 simultaneously and/or at different times while in the geographical region [0042]), sensor measurements from a plurality of sensors onboard the connected device (The position estimation system 200 comprises sensors 205 including one or more cameras 210, an inertial measurement unit (IMU) 220, a GNSS unit 230, and radar 235 [0031]);
generating an observation covariance matrix (This may be determined, for example, based on the covariance matrix of the pose [0061]) for the sensor measurements based a measured distance between each sensor and an observed location of an object (the determination that the confidence metric of the 6-DOF position estimate of the vehicle exceeds the confidence metric threshold level may be based on a covariance matrix of the 6-DOF position estimate (e.g., each 6-DOF in ego pose state has standard deviation which is less than a particular threshold) [0095]);
determining, for each connected device of the plurality of connected devices, a local fused sensor measurement (The sensor positioning unit 270 may comprise a module (implemented in software and/or hardware) that is configured to fuse data from the sensors 205 to determine a position of the vehicle [0037]) using the observation covariance matrix (the determination that the confidence metric of the 6-DOF position estimate of the vehicle exceeds the confidence metric threshold level may be based on a covariance matrix of the 6-DOF position estimate (e.g., each 6-DOF in ego pose state has standard deviation which is less than a particular threshold) [0095]);
generating a [localization covariance matrix] using local fused sensor measurements (the server can then process the aggregated data to determine off-line trajectory and map optimization, as indicated at block 380 [0050]) associated with the plurality of connected devices (These operations may include performing the functionality illustrated at block 370, where the server unifies information received from multiple vehicles. As noted, the server may be communicatively coupled with many vehicles in a particular geographical region. Using the functionality illustrated in block 305, each of these vehicles may publish information to the server [0048]) and measured velocities of the plurality of connected devices with respect to a global coordinate system (Doppler check 520 and ego speed check 530 may be performed on a per-frame basis, which can filter out unwanted or noisy points. As a person of ordinary skill in the art may appreciate, Doppler check 520 may perform better filtering of moving objects at higher speeds, whereas ego speed check 530 may perform better filtering of moving objects at lower speeds. The optional metadata check 540 may comprise a check of certain metrics (e.g., signal to noise ratio (SNR), radar cross-section (RCS), specific indicators on multipath targets, and the like) to determine a reliability of the radar data and filter out data failing to meet a minimum threshold [0065]); and
determining, at an external computing platform in communication with the plurality of connected devices (the functionality illustrated in block 310 may be performed by a server (e.g., a cloud/edge server) [0042]), a global fused sensor measurement indicating a location of the object (Location determination and/or other determinations based on wireless communication may be provided in the processor(s) 1210 and/or wireless communication interface 1230 (discussed below) [0112])
Kulkarni does not explicitly teach the following limitations; however, Takabayashi teaches:
generating a localization covariance matrix (estimated error covariance matrixes of the positions and speeds (S103) [0032]) using local fused sensor measurements associated with the plurality of connected devices (a communication device that receives GPS positions of the surrounding vehicles and the pedestrians [0026]) and measured velocities of the plurality of connected devices with respect to a global coordinate system (the route prediction system 100 includes the observation unit 1 to observe positions and speeds of a host vehicle and vehicles surrounding the host vehicle [0063]); and
using the localization covariance matrix (likelihood calculation unit 4 calculates a hypothesis likelihood on the basis of the estimated value and the estimated error covariance matrix outputted from the tracking processing unit 6 [0099]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kulkarni to include the teachings of Takabayashi, with a reasonable expectation of success. Kulkarni teaches using covariance matrices to fuse sensor data on a vehicle to obtain the most accurate data, and then passing that data along to a server to be aggregated with data from other vehicles to determine the most accurate global positioning data with which to update the map. Kulkarni does not explicitly teach using a second covariance matrix for aggregating the data from multiple vehicles to achieve the global data. However, since Kulkarni already benefits from the use of a covariance matrix, it would have been obvious to apply a second covariance matrix as claimed in light of the teachings of Takabayashi, which uses a covariance matrix for aggregating the data from multiple vehicles. Takabayashi also teaches the benefits of “an observation unit to observe a position of a host vehicle and positions and speeds of vehicles surrounding the host vehicle, a vehicle detection unit to detect the host vehicle and at least two of the surrounding vehicles having collision possibilities on the basis of observation results observed by the observation unit, a hypothesis generation unit to generate plural hypotheses for the at least two of the surrounding vehicles detected by the vehicle detection unit to avoid collision, a likelihood calculation unit to calculate a likelihood indicating probability of occurrence of each of the plural hypotheses generated by the hypothesis generation unit, and a predicted route analysis unit to analyze, on the basis of the likelihood calculated by the likelihood calculation unit, predicted routes of the at least two of the surrounding vehicles, and output the analysis result [Takabayashi, 0010]”.
Regarding Claim 11:
Kulkarni in view of Takabayashi teaches all the limitations of claim 10, upon which this claim depends.
Kulkarni further teaches:
associating, at the connected device, two or more instances of the sensor measurements for the object with one another (The sensor positioning unit 270 may comprise a module (implemented in software and/or hardware) that is configured to fuse data from the sensors 205 to determine a position of the vehicle [0037]); and
fusing, at the connected device, the two or more instances of the sensor measurements for the object into the local fused sensor measurement using the observation covariance matrix (the determination that the confidence metric of the 6-DOF position estimate of the vehicle exceeds the confidence metric threshold level may be based on a covariance matrix of the 6-DOF position estimate (e.g., each 6-DOF in ego pose state has standard deviation which is less than a particular threshold) [0095]).
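For illustration only (this sketch is not part of the record and is not drawn from Kulkarni or Takabayashi; all variable names are hypothetical), the covariance-weighted fusion of two instances of sensor measurements for the same object, as mapped above, can be expressed as inverse-covariance (information-form) weighting, in which the measurement with the tighter observation covariance dominates the fused estimate:

```python
import numpy as np

def fuse_measurements(z1, P1, z2, P2):
    """Fuse two observations of the same object by inverse-covariance
    weighting. z1, z2 are position measurements; P1, P2 are their
    observation covariance matrices."""
    I1 = np.linalg.inv(P1)
    I2 = np.linalg.inv(P2)
    P_fused = np.linalg.inv(I1 + I2)          # fused covariance (tighter than either input)
    z_fused = P_fused @ (I1 @ z1 + I2 @ z2)   # covariance-weighted mean
    return z_fused, P_fused

# Example: a precise observation and a noisy observation of the same object
z1, P1 = np.array([10.0, 5.0]), np.diag([0.5, 0.5])
z2, P2 = np.array([11.0, 6.0]), np.diag([2.0, 2.0])
z, P = fuse_measurements(z1, P1, z2, P2)      # z lands closer to the precise z1
```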
Regarding Claim 13:
Kulkarni in view of Takabayashi teaches all the limitations of claim 10, upon which this claim depends.
Kulkarni further teaches:
associating, at the external computing platform, two or more instances of locally-fused observation data for the object with one another (performing the functionality illustrated at block 370, where the server unifies information received from multiple vehicles. As noted, the server may be communicatively coupled with many vehicles in a particular geographical region. Using the functionality illustrated in block 305, each of these vehicles may publish information to the server. However, the sensor types and specifications may differ between vehicles, resulting in sensor data having different coordinate information, scale, format, etc. Thus, the unification performed at block 370 may comprise unifying this data to a common coordinates system, scale, format, etc. [0048]); and
fusing, at the external computing platform, the two or more instances of the locally-fused observation data for the object into the local fused sensor measurement (At block 373, the server may then aggregate the data, storing data published by multiple vehicles over window of time. The server may continue to aggregate this data until receiving an update trigger, as shown by arrow 375. Depending on desired functionality, the trigger may cause the server to aggregate data across a periodic window of time and/or until a certain event occurs. The map update trigger may therefore comprise a periodic trigger (e.g., once every six hours, once every 12 hours, once every day, etc.) and/or an event-based trigger (e.g., based on a determination that the quality of an HD map layer is below a threshold and needs to be updated, based on input received from one or more vehicles, etc.) [0049]);
Takabayashi further teaches:
using the localization covariance matrix (estimated error covariance matrixes of the positions and speeds (S103) [0032]).
Regarding claim 15:
Kulkarni in view of Takabayashi teaches all the limitations of claim 10, upon which this claim depends.
Kulkarni further teaches:
the connected device being a connected autonomous vehicle (it may be provided to autonomous driving, ADAS, and/or other systems of the vehicle 110 [0039]) having a plurality of sensors operable for generating the sensor measurements (sensors 205 may include one or more additional or alternative sensors (e.g., lidar, sonar, etc.) [0031]).
Regarding Claim 17:
Kulkarni in view of Takabayashi teaches all the limitations of claim 10, upon which this claim depends.
Kulkarni further teaches:
transmitting, by the connected device and to the external computing platform, a data packet having an instance of locally-fused observation data (The switch 365 may comprise a logical block that publishes the metadata, processed camera data, and processed radar data to the server (as shown at arrow 335) if the publish flag 330 is ON (i.e., activated) [0046]), the locally-fused observation data including:
a local fused object location of the object relative to the global coordinate system (At block 373, the server may then aggregate the data, storing data published by multiple vehicles over window of time. The server may continue to aggregate this data until receiving an update trigger, as shown by arrow 375. Depending on desired functionality, the trigger may cause the server to aggregate data across a periodic window of time and/or until a certain event occurs. The map update trigger may therefore comprise a periodic trigger (e.g., once every six hours, once every 12 hours, once every day, etc.) and/or an event-based trigger (e.g., based on a determination that the quality of an HD map layer is below a threshold and needs to be updated, based on input received from one or more vehicles, etc.) [0049]);
Takabayashi further teaches:
a corrected localization covariance matrix (Smoothed error covariance matrix of target tgti at sampling time k [0058] P.sub.p,k+n.sup.(tgti): Predicted error covariance matrix of target tgti after n steps at sampling time k [0058]) for the object as observed by the connected device which incorporates the observation covariance matrix and the localization covariance matrix (estimated error covariance matrixes of the positions and speeds (S103) [0032]);
a sensor platform position associated with the connected device relative to the global coordinate system (The observation unit 1 measures a position of a host vehicle and positions and speeds of surrounding vehicles and pedestrians using, for example, sensors, such as a millimeter wave radar, a laser radar, an optical camera, and an infrared camera, and a communication device that receives GPS positions of the surrounding vehicles and the pedestrians [0026]); and
the localization covariance matrix for the connected device (in the calculation of the hypothesis likelihood, it is also allowed, with regard to the predicted error covariance matrix after n steps, to set an overlap of the predicted error distributions of the vehicles as an index, and use the reciprocal of the magnitude of the overlap. For example, the probability distributions of the vehicles i and j are expressed as in FIG. 7 with the horizontal axis as a position and the vertical axis as a probability density function based on error distributions [0061]).
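For illustration only (not part of the record; the field names below are hypothetical and are not taken from the claims or the cited references), the data packet recited in claim 17 carries four pieces of locally-fused observation data, which can be sketched as a simple structure:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ObservationPacket:
    """Illustrative layout of the claim-17 data packet (hypothetical names)."""
    fused_object_location: np.ndarray       # object location in the global coordinate system
    corrected_localization_cov: np.ndarray  # incorporates observation + localization covariance
    platform_position: np.ndarray           # connected-device position, global coordinates
    localization_cov: np.ndarray            # localization covariance for the connected device

# Example instance a connected device might transmit to the external platform
packet = ObservationPacket(
    fused_object_location=np.array([100.0, 250.0]),
    corrected_localization_cov=np.eye(2) * 0.3,
    platform_position=np.array([95.0, 240.0]),
    localization_cov=np.eye(2) * 0.1,
)
```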
Regarding claim 18:
Kulkarni teaches:
One or more non-transitory computer-readable media (fig. 12, memory 1260) including instructions executable (The memory 1260 of the mobile computing system 1200 also can comprise software elements (not shown in FIG. 12), including an operating system, device drivers, executable libraries, and/or other code, such as one or more application programs, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein [0119]) by one or more processors (fig. 12, processor 1210) to:
access, at a connected device (fig. 2, position estimation system 200) of a plurality of connected devices (many vehicles may each perform the functionality at block 305 simultaneously and/or at different times while in the geographical region [0042]), sensor measurements from a plurality of sensors onboard the connected device (The position estimation system 200 comprises sensors 205 including one or more cameras 210, an inertial measurement unit (IMU) 220, a GNSS unit 230, and radar 235 [0031]);
generate, at the connected device, an observation covariance matrix (This may be determined, for example, based on the covariance matrix of the pose [0061]) for the sensor measurements based a measured distance between each sensor and an observed location of an object (the determination that the confidence metric of the 6-DOF position estimate of the vehicle exceeds the confidence metric threshold level may be based on a covariance matrix of the 6-DOF position estimate (e.g., each 6-DOF in ego pose state has standard deviation which is less than a particular threshold) [0095]);
determine, for each connected device of the plurality of connected devices, a local fused sensor measurement (The sensor positioning unit 270 may comprise a module (implemented in software and/or hardware) that is configured to fuse data from the sensors 205 to determine a position of the vehicle [0037]) using the observation covariance matrix (the determination that the confidence metric of the 6-DOF position estimate of the vehicle exceeds the confidence metric threshold level may be based on a covariance matrix of the 6-DOF position estimate (e.g., each 6-DOF in ego pose state has standard deviation which is less than a particular threshold) [0095]);
generate a [localization covariance matrix] using local fused sensor measurements (the server can then process the aggregated data to determine off-line trajectory and map optimization, as indicated at block 380 [0050]) associated with the plurality of connected devices (These operations may include performing the functionality illustrated at block 370, where the server unifies information received from multiple vehicles. As noted, the server may be communicatively coupled with many vehicles in a particular geographical region. Using the functionality illustrated in block 305, each of these vehicles may publish information to the server [0048]) and measured velocities of the plurality of connected devices with respect to a global coordinate system (Doppler check 520 and ego speed check 530 may be performed on a per-frame basis, which can filter out unwanted or noisy points. As a person of ordinary skill in the art may appreciate, Doppler check 520 may perform better filtering of moving objects at higher speeds, whereas ego speed check 530 may perform better filtering of moving objects at lower speeds. The optional metadata check 540 may comprise a check of certain metrics (e.g., signal to noise ratio (SNR), radar cross-section (RCS), specific indicators on multipath targets, and the like) to determine a reliability of the radar data and filter out data failing to meet a minimum threshold [0065]); and
determine, at an external computing platform in communication with the plurality of connected devices (the functionality illustrated in block 310 may be performed by a server (e.g., a cloud/edge server) [0042]), a global fused sensor measurement indicating a location of the object (Location determination and/or other determinations based on wireless communication may be provided in the processor(s) 1210 and/or wireless communication interface 1230 (discussed below) [0112])
Kulkarni does not explicitly teach, however Takabayashi teaches:
generate a localization covariance matrix (estimated error covariance matrixes of the positions and speeds (S103) [0032]) using local fused sensor measurements associated with the plurality of connected devices (a communication device that receives GPS positions of the surrounding vehicles and the pedestrians [0026]) and measured velocities of the plurality of connected devices with respect to a global coordinate system (the route prediction system 100 includes the observation unit 1 to observe positions and speeds of a host vehicle and vehicles surrounding the host vehicle [0063]); and
using the localization covariance matrix (likelihood calculation unit 4 calculates a hypothesis likelihood on the basis of the estimated value and the estimated error covariance matrix outputted from the tracking processing unit 6 [0099]).
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Kulkarni to include the teachings of Takabayashi with a reasonable expectation of success. Kulkarni teaches using covariance matrices to fuse sensor data on a vehicle to obtain the most accurate data, then passing that data along to a server for aggregation with the data from other vehicles to determine the most accurate global positioning data with which to update the map. Kulkarni does not explicitly teach using a second covariance matrix for aggregating the data from multiple vehicles to achieve the global data. However, since Kulkarni already benefits from the use of a covariance matrix, it would have been obvious to apply a second covariance matrix as claimed in light of the teachings of Takabayashi, which uses a covariance matrix for aggregating the data from multiple vehicles. Takabayashi also teaches the benefits of “an observation unit to observe a position of a host vehicle and positions and speeds of vehicles surrounding the host vehicle, a vehicle detection unit to detect the host vehicle and at least two of the surrounding vehicles having collision possibilities on the basis of observation results observed by the observation unit, a hypothesis generation unit to generate plural hypotheses for the at least two of the surrounding vehicles detected by the vehicle detection unit to avoid collision, a likelihood calculation unit to calculate a likelihood indicating probability of occurrence of each of the plural hypotheses generated by the hypothesis generation unit, and a predicted route analysis unit to analyze, on the basis of the likelihood calculated by the likelihood calculation unit, predicted routes of the at least two of the surrounding vehicles, and output the analysis result [Takabayashi, 0010]”.
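For illustration only (a sketch not drawn from either reference; all names are hypothetical), the two-stage fusion mapped to claim 18 — local fusion at each connected device using an observation covariance matrix, then global fusion at the server using a localization covariance matrix that models each device's own position uncertainty — can be expressed as repeated information-form fusion, with the localization covariance inflating each device's contribution before the global combine:

```python
import numpy as np

def info_fuse(estimates):
    """Information-form fusion of a list of (mean, covariance) pairs."""
    info = sum(np.linalg.inv(P) for _, P in estimates)
    vec = sum(np.linalg.inv(P) @ z for z, P in estimates)
    P = np.linalg.inv(info)
    return P @ vec, P

# Stage 1: each connected device fuses its own sensors using the
# observation covariance matrix (per-sensor measurement uncertainty).
device_a = info_fuse([(np.array([10.0, 5.0]), np.diag([0.5, 0.5])),
                      (np.array([10.4, 5.2]), np.diag([1.0, 1.0]))])
device_b = info_fuse([(np.array([10.8, 5.5]), np.diag([0.8, 0.8]))])

# Stage 2: the external platform fuses the per-device results, inflating
# each by a localization covariance capturing that device's position error.
loc_cov_a, loc_cov_b = np.eye(2) * 0.2, np.eye(2) * 0.6
global_est, global_cov = info_fuse([
    (device_a[0], device_a[1] + loc_cov_a),
    (device_b[0], device_b[1] + loc_cov_b),
])
```

A poorly-localized device (larger localization covariance) thus contributes less to the global fused sensor measurement, which is the role the second covariance matrix plays in the combination.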
Regarding Claim 19:
Kulkarni in view of Takabayashi teaches all the limitations of claim 18, upon which this claim depends.
Kulkarni further teaches:
associate, at the connected device, two or more instances of the sensor measurements for the object with one another (The sensor positioning unit 270 may comprise a module (implemented in software and/or hardware) that is configured to fuse data from the sensors 205 to determine a position of the vehicle [0037]); and
fuse, at the connected device, the two or more instances of the sensor measurements for the object into the local fused sensor measurement using the observation covariance matrix (the determination that the confidence metric of the 6-DOF position estimate of the vehicle exceeds the confidence metric threshold level may be based on a covariance matrix of the 6-DOF position estimate (e.g., each 6-DOF in ego pose state has standard deviation which is less than a particular threshold) [0095]).
Regarding Claim 20:
Kulkarni in view of Takabayashi teaches all the limitations of claim 18, upon which this claim depends.
Kulkarni further teaches:
associate, at the external computing platform, two or more instances of locally-fused observation data for the object with one another (performing the functionality illustrated at block 370, where the server unifies information received from multiple vehicles. As noted, the server may be communicatively coupled with many vehicles in a particular geographical region. Using the functionality illustrated in block 305, each of these vehicles may publish information to the server. However, the sensor types and specifications may differ between vehicles, resulting in sensor data having different coordinate information, scale, format, etc. Thus, the unification performed at block 370 may comprise unifying this data to a common coordinates system, scale, format, etc. [0048]); and
fuse, at the external computing platform, the two or more instances of the locally-fused observation data for the object into the local fused sensor measurement (At block 373, the server may then aggregate the data, storing data published by multiple vehicles over window of time. The server may continue to aggregate this data until receiving an update trigger, as shown by arrow 375. Depending on desired functionality, the trigger may cause the server to aggregate data across a periodic window of time and/or until a certain event occurs. The map update trigger may therefore comprise a periodic trigger (e.g., once every six hours, once every 12 hours, once every day, etc.) and/or an event-based trigger (e.g., based on a determination that the quality of an HD map layer is below a threshold and needs to be updated, based on input received from one or more vehicles, etc.) [0049]);
Takabayashi further teaches:
using the localization covariance matrix (estimated error covariance matrixes of the positions and speeds (S103) [0032]).
Claim(s) 3 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kulkarni et al. (US 2023/0324543), herein Kulkarni, in view of Takabayashi et al. (US 2018/0182245), herein Takabayashi, in further view of Vijaya et al. (US 2023/0162602), herein Vijaya.
Regarding Claim 3:
Kulkarni in view of Takabayashi teaches all the limitations of claim 1, upon which this claim depends.
Kulkarni in view of Takabayashi does not explicitly teach, however Vijaya teaches:
wherein the connected device includes a connected infrastructure sensor having a plurality of sensors operable for generating the sensor measurements (sensed perception data from a perception device such as an infrastructure camera [0038]).
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Kulkarni and Takabayashi to include the teachings of Vijaya with a reasonable expectation of success. Vijaya teaches the benefit of “a communication system that determines a context and an intent of a specific remote vehicle located in a surrounding environment of a host vehicle is disclosed. The communication system includes one or more controllers for receiving sensed perception data related to the specific remote vehicle. The one or more controllers execute instructions to determine a plurality of vehicle parameters related to the specific remote vehicle based on the sensed perception data. The one or more controllers associate the specific remote vehicle with a specific lane of travel of a roadway based on map data, where the map data indicates information related to lanes of travel of the roadway that the specific remote vehicle is traveling along. The one or more controllers determine possible maneuvers, possible egress lanes, and a speed limit for the specific remote vehicle for the specific lane of travel based on the map data. Finally, the one or more controllers determines the context and the intent of the specific remote vehicle based on the plurality of vehicle parameters, the possible maneuvers, the possible egress lanes for the specific remote vehicle, and the speed limit related to the specific remote vehicle [Vijaya, 0005]”.
Regarding Claim 16:
Kulkarni in view of Takabayashi teaches all the limitations of claim 10, upon which this claim depends.
Kulkarni in view of Takabayashi does not explicitly teach, however Vijaya teaches:
the connected device being a connected infrastructure sensor having a plurality of sensors operable for generating the sensor measurements (sensed perception data from a perception device such as an infrastructure camera [0038]).
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Kulkarni and Takabayashi to include the teachings of Vijaya with a reasonable expectation of success. Vijaya teaches the benefit of “a communication system that determines a context and an intent of a specific remote vehicle located in a surrounding environment of a host vehicle is disclosed. The communication system includes one or more controllers for receiving sensed perception data related to the specific remote vehicle. The one or more controllers execute instructions to determine a plurality of vehicle parameters related to the specific remote vehicle based on the sensed perception data. The one or more controllers associate the specific remote vehicle with a specific lane of travel of a roadway based on map data, where the map data indicates information related to lanes of travel of the roadway that the specific remote vehicle is traveling along. The one or more controllers determine possible maneuvers, possible egress lanes, and a speed limit for the specific remote vehicle for the specific lane of travel based on the map data. Finally, the one or more controllers determines the context and the intent of the specific remote vehicle based on the plurality of vehicle parameters, the possible maneuvers, the possible egress lanes for the specific remote vehicle, and the speed limit related to the specific remote vehicle [Vijaya, 0005]”.
Claim(s) 6, 8, 12, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kulkarni et al. (US 2023/0324543), herein Kulkarni, in view of Takabayashi et al. (US 2018/0182245), herein Takabayashi, in further view of Hyung et al. (US 2012/0158178), herein Hyung.
Regarding Claim 6:
Kulkarni in view of Takabayashi teaches all the limitations of claim 5, upon which this claim depends.
Kulkarni further teaches:
the memory further including instructions executable by the processor to:
apply, for an observation of a plurality of observations of the plurality of sensors onboard the connected device (The position estimation system 200 comprises sensors 205 including one or more cameras 210, an inertial measurement unit (IMU) 220, a GNSS unit 230, and radar 235 [0031]),
Kulkarni in view of Takabayashi does not explicitly teach, however Hyung teaches:
a local Joint Probability Data Association Filter to the sensor measurements to associate the two or more instances of the sensor measurements for the object with one another (The calculating of the probability that the object is located at a second time after t seconds on the basis of the obtained position and shape of the object located at the first time may include calculating the probability that the object is located at the second time after t seconds on the basis of the obtained position and shape of the object using a Joint Probability Data Association Filter (JPDAF) [0014]);
apply, for the observation, a local Kalman filter update operation to an output of a local Joint Probability Data Association Filter (The JPDAF includes a Kalman filter algorithm [0015]); and
apply, for the observation, a local Kalman filter prediction operation to an output of the local Kalman filter update operation to fuse the two or more instances of the sensor measurements for the object into the local fused sensor measurement using the observation covariance matrix (In addition, JPDAF basically includes the Kalman filter algorithm. In the prediction process, each point has a mathematical model, and an integration update is applied to the mathematical model, such that it is possible to predict where the object is present at a second time. In the above-mentioned process, an error covariance value of the position prediction value is also calculated according to precision of each model. In other words, in the case of the correct model, the predicted error covariance becomes lower. In the case of the incorrect model, the predicted error covariance becomes higher. [0068-0069]).
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Kulkarni and Takabayashi to include the teachings as taught by Hyung with a reasonable expectation of success. Hyung teaches the benefit of “a method for planning a path of a robot includes generating a depth map including a plurality of cells by measuring a distance to an object, dividing a boundary among the plurality of cells into a plurality of partitions according to individual depth values of the cells, and extracting a single closed loop formed by the divided boundary, obtaining a position and shape of the object located at a first time through the extracted single closed loop, calculating a probability that the object is located at a second time after t seconds on the basis of the obtained position and shape of the object located at the first time, and creating a moving path simultaneously while avoiding the object according to the calculated probability. [Hyung, 0011]”.
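For illustration only (not part of the record; a deliberate simplification in which full JPDAF's joint association probabilities are replaced by a single Mahalanobis gate, and all names are hypothetical), the Kalman filter update and prediction operations recited in claim 6 follow the standard cycle: gate/associate a measurement with the track, apply the update operation to the associated measurement, then apply the prediction operation to the updated state:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Update operation: correct the state with an associated measurement."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def kalman_predict(x, P, F, Q):
    """Prediction operation: propagate the state to the next time step."""
    return F @ x, F @ P @ F.T + Q

def gate(x, P, z, H, R, threshold=9.21):
    """Mahalanobis gating: a simplified stand-in for JPDAF association."""
    y = z - H @ x
    S = H @ P @ H.T + R
    return float(y @ np.linalg.inv(S) @ y) < threshold

# Constant-velocity model: state = [px, py, vx, vy]; position-only measurements
dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])
Q, R = np.eye(4) * 0.01, np.eye(2) * 0.25

x, P = np.zeros(4), np.eye(4)
z = np.array([0.3, -0.2])
if gate(x, P, z, H, R):                   # associate the measurement with the track
    x, P = kalman_update(x, P, z, H, R)   # update operation on the association output
x, P = kalman_predict(x, P, F, Q)         # prediction operation on the update output
```

The prediction step also propagates the error covariance, which is the predicted error covariance Hyung describes at [0068-0069]: a well-matched motion model yields a low predicted covariance, a mismatched model a high one.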
Regarding Claim 8:
Kulkarni in view of Takabayashi teaches all the limitations of claim 7, upon which this claim depends.
Kulkarni in view of Takabayashi does not explicitly teach, however Hyung teaches:
apply, for an observation of a plurality of observations of the plurality of connected devices, a global Joint Probability Data Association Filter to the local fused sensor measurements to associate the two or more instances of the locally-fused observation data for the object with one another (The calculating of the probability that the object is located at a second time after t seconds on the basis of the obtained position and shape of the object located at the first time may include calculating the probability that the object is located at the second time after t seconds on the basis of the obtained position and shape of the object using a Joint Probability Data Association Filter (JPDAF) [0014]);
apply, for the observation, a global Kalman filter update operation to an output of the global Joint Probability Data Association Filter (The JPDAF includes a Kalman filter algorithm [0015]); and
apply, for the observation, a global Kalman filter prediction operation to an output of the global Kalman filter update operation to fuse the two or more instances of the observation data for the object into the local fused sensor measurement using the localization covariance matrix (In addition, JPDAF basically includes the Kalman filter algorithm. In the prediction process, each point has a mathematical model, and an integration update is applied to the mathematical model, such that it is possible to predict where the object is present at a second time. In the above-mentioned process, an error covariance value of the position prediction value is also calculated according to precision of each model. In other words, in the case of the correct model, the predicted error covariance becomes lower. In the case of the incorrect model, the predicted error covariance becomes higher. [0068-0069]).
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Kulkarni and Takabayashi to include the teachings as taught by Hyung with a reasonable expectation of success. Hyung teaches the benefit of “a method for planning a path of a robot includes generating a depth map including a plurality of cells by measuring a distance to an object, dividing a boundary among the plurality of cells into a plurality of partitions according to individual depth values of the cells, and extracting a single closed loop formed by the divided boundary, obtaining a position and shape of the object located at a first time through the extracted single closed loop, calculating a probability that the object is located at a second time after t seconds on the basis of the obtained position and shape of the object located at the first time, and creating a moving path simultaneously while avoiding the object according to the calculated probability. [Hyung, 0011]”.
Regarding Claim 12:
Kulkarni in view of Takabayashi teaches all the limitations of claim 11, upon which this claim depends.
Kulkarni further teaches:
applying, for an observation of a plurality of observations of the plurality of sensors onboard the connected device (The position estimation system 200 comprises sensors 205 including one or more cameras 210, an inertial measurement unit (IMU) 220, a GNSS unit 230, and radar 235 [0031]),
Kulkarni in view of Takabayashi does not explicitly teach, however Hyung teaches:
a local Joint Probability Data Association Filter to the sensor measurements to associate the two or more instances of the sensor measurements for the object with one another (The calculating of the probability that the object is located at a second time after t seconds on the basis of the obtained position and shape of the object located at the first time may include calculating the probability that the object is located at the second time after t seconds on the basis of the obtained position and shape of the object using a Joint Probability Data Association Filter (JPDAF) [0014]);
applying, for the observation, a local Kalman filter update operation to an output of a local Joint Probability Data Association Filter (The JPDAF includes a Kalman filter algorithm [0015]); and
applying, for the observation, a local Kalman filter prediction operation to an output of the local Kalman filter update operation to fuse the two or more instances of the sensor measurements for the object into the local fused sensor measurement using the observation covariance matrix (In addition, JPDAF basically includes the Kalman filter algorithm. In the prediction process, each point has a mathematical model, and an integration update is applied to the mathematical model, such that it is possible to predict where the object is present at a second time. In the above-mentioned process, an error covariance value of the position prediction value is also calculated according to precision of each model. In other words, in the case of the correct model, the predicted error covariance becomes lower. In the case of the incorrect model, the predicted error covariance becomes higher. [0068-0069]).
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to have modified Kulkarni and Takabayashi to include the teachings as taught by Hyung with a reasonable expectation of success. Hyung teaches the benefit of “a method for planning a path of a robot includes generating a depth map including a plurality of cells by measuring a distance to an object, dividing a boundary among the plurality of cells into a plurality of partitions according to individual depth values of the cells, and extracting a single closed loop formed by the divided boundary, obtaining a position and shape of the object located at a first time through the extracted single closed loop, calculating a probability that the object is located at a second time after t seconds on the basis of the obtained position and shape of the object located at the first time, and creating a moving path simultaneously while avoiding the object according to the calculated probability. [Hyung, 0011]”.
Regarding Claim 14:
Kulkarni in view of Takabayashi teaches all the limitations of claim 13, upon which this claim depends.
Kulkarni in view of Takabayashi does not explicitly teach, however Hyung teaches:
apply, for an observation of a plurality of observations of the plurality of connected devices, a global Joint Probability Data Association Filter to the local fused sensor measurements to associate the two or more instances of the locally-fused observation data for the object with one another (The calculating of the probability that the object is located at a second time after t seconds on the basis of the obtained position and shape of the object located at the first time may include calculating the probability that the object is located at the second time after t seconds on the basis of the obtained position and shape of the object using a Joint Probability Data Association Filter (JPDAF) [0014]);
apply, for the observation, a global Kalman filter update operation to an output of the global Joint Probability Data Association Filter (The JPDAF includes a Kalman filter algorithm [0015]); and
apply, for the observation, a global Kalman filter prediction operation to an output of the global Kalman filter update operation to fuse the two or more instances of the observation data for the object into the local fused sensor measurement using the localization covariance matrix (In addition, JPDAF basically includes the Kalman filter algorithm. In the prediction process, each point has a mathematical model, and an integration update is applied to the mathematical model, such that it is possible to predict where the object is present at a second time. In the above-mentioned process, an error covariance value of the position prediction value is also calculated according to precision of each model. In other words, in the case of the correct model, the predicted error covariance becomes lower. In the case of the incorrect model, the predicted error covariance becomes higher. [0068-0069]).
It would have been obvious to one of ordinary skill in the art, as of the effective filing date of the claimed invention, to have modified Kulkarni and Takabayashi to include the teachings of Hyung with a reasonable expectation of success. Hyung teaches the benefit of “a method for planning a path of a robot includes generating a depth map including a plurality of cells by measuring a distance to an object, dividing a boundary among the plurality of cells into a plurality of partitions according to individual depth values of the cells, and extracting a single closed loop formed by the divided boundary, obtaining a position and shape of the object located at a first time through the extracted single closed loop, calculating a probability that the object is located at a second time after t seconds on the basis of the obtained position and shape of the object located at the first time, and creating a moving path simultaneously while avoiding the object according to the calculated probability. [Hyung, 0011]”.
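For illustrative context only (not part of the record, and not drawn from any cited reference), the update-then-prediction cycle recited in the claim limitations above follows the standard linear Kalman filter. A minimal sketch, assuming a hypothetical one-dimensional constant-velocity state model and a position-only measurement model:

```python
import numpy as np

# Minimal linear Kalman filter cycle: an update operation fuses a
# measurement into the state estimate, then a prediction operation
# propagates the state and its covariance forward one time step.
# All models below are hypothetical, chosen only to illustrate the
# update -> predict ordering recited in the claim language.

def kalman_update(x, P, z, H, R):
    """Fuse measurement z into state x with covariance P."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # corrected state
    P = (np.eye(len(x)) - K @ H) @ P     # corrected covariance
    return x, P

def kalman_predict(x, P, F, Q):
    """Propagate state and covariance one step forward."""
    return F @ x, F @ P @ F.T + Q

# 1-D constant-velocity example: state = [position, velocity]
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # motion model
Q = 0.01 * np.eye(2)                     # process noise
H = np.array([[1.0, 0.0]])               # observe position only
R = np.array([[0.25]])                   # measurement noise covariance

x = np.array([0.0, 1.0])                 # initial state estimate
P = np.eye(2)                            # initial state covariance
z = np.array([0.9])                      # an associated position observation

x, P = kalman_update(x, P, z, H, R)      # global Kalman filter update
x, P = kalman_predict(x, P, F, Q)        # global Kalman filter prediction
```

Consistent with Hyung [0068-0069], the predicted covariance P grows through the process-noise term Q when the motion model is uncertain and shrinks when measurements are fused in the update step.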
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lin (US 6,240,367) discloses a full fusion positioning method, which can be implemented in the existing hardware, but is more amenable to the emerging wafer-scale integration hardware, comprises the steps of injecting a global positioning system signal received by a global positioning system antenna and a predicted pseudorange and delta range from a data fusion, and converting and tracking said global positioning system signal to obtain pseudorange and delta range measurement and errors of said pseudorange and delta range measurement, which are passed to said data fusion; receiving a vehicle angular rate and an acceleration signal/data from an inertial measurement unit and solving inertial navigation equations for obtaining a referencing navigation solution, including position, velocity, and attitude, which are passed to a data fusion; and fusing said pseudorange and delta range measurement and said errors of said pseudorange and delta range measurement of said global positioning system and said referencing navigation solution to obtain predicted pseudorange and delta range, optimal estimates of said referencing navigation solution errors and inertial sensor errors, and optimal position information.
Brommer (US 2022/0146264) discloses a computer-implemented method for estimating state variables of a moving object, which includes: propagating core state variables of the moving object utilizing a recursive Bayesian filter and observation values from sensors from start-up of the moving object; forming, utilizing observation values from one or more additional sensors added after start-up, a covariance matrix of the recursive Bayesian filter; updating the covariance matrix based on observation values formed by at least one additional sensor; and, ascertaining the covariance of the core state variables of the additional sensor at a time after start-up.
Zhang (US 2021/0101606) discloses receiving respective planned reference velocities of a reference vehicle for each of a plurality of time steps including a current time step. Respective sensed velocities of a subject vehicle for each of the time steps are determined from sensor data. Respective distances between the reference vehicle and the subject vehicle are determined for each of the plurality of time steps. A number of intervening vehicles between the reference vehicle and the subject vehicle is determined. Based on the planned reference velocities of the reference vehicle, the sensed velocities of the subject vehicle, the distance, and the number of intervening vehicles, a future velocity of the subject vehicle is predicted at a time step that is after the current time step.
Rangesh (US 2021/0056713) discloses a surround multi-object tracking and surround vehicle motion prediction framework. A full-surround camera array and LiDAR sensor based approach provides for multi-object tracking for autonomous vehicles. The multi-object tracking incorporates a fusion scheme to handle object proposals from the different sensors within the calibrated camera array. A motion prediction framework leverages the instantaneous motion of vehicles, an understanding of motion patterns of freeway traffic, and the effect of inter-vehicle interactions. The motion prediction framework incorporates probabilistic modeling of surround vehicle trajectories. Additionally, subcategorizing trajectories based on maneuver classes leads to better modeling of motion patterns. A model takes into account interactions between surround vehicles for simultaneously predicting each of their motion.
Bacchus (US 2020/0167934) discloses methods and systems for enhanced object tracking by receiving sensor fusion data related to target objects and object tracks; determining splines representing trajectories of each target object; filtering the sensor fusion data about each target object based on a first, second, and third filtering model, wherein each filtering model corresponds to one or more of a set of hypotheses used for processing vectors related to trajectories of a track object, wherein the set of hypotheses comprises: a path constraint, a path unconstrained, and a stationary hypothesis; and generating a hypothesis probability for determining whether to use a particular hypothesis, wherein the hypothesis probability is determined based on results from the first, second, and third filtering models and from results from classifying, by at least one classification model, one or more features related to the object track for the target object.
Mayer (US 2014/0032167) discloses a system with multiple sensors managed to determine which sensors to utilize when forming an estimate of the system state. A list of active sensor subsets is formed from multiple sensors. The list of active sensor subsets is represented by a list of differing vectors with indices enumerating the sensors of each sensor subset. Noise is filtered from a measurement of each sensor. State and covariance for each sensor of the multiple sensors is estimated based on prior measurements. A quality of service (QoS) metric is calculated for each sensor subset based on the estimated sensor state. The QoS metric is recorded in a QoS vector and the list of active sensor subsets is updated with the sensor subsets that have a QoS metric above a QoS threshold. The state and covariance estimates are combined to form the estimates of the system state and covariance.
Karlsson (US 2005/0182518) discloses methods and apparatus that permit the measurements from a plurality of sensors to be combined or fused in a robust manner. For example, the sensors can correspond to sensors used by a mobile device, such as a robot, for localization and/or mapping. The measurements can be fused for estimation of a measurement, such as an estimation of a pose of a robot.
Zeng (US 8,229,663) discloses a vehicle awareness system for monitoring remote vehicles relative to a host vehicle. The vehicle awareness system includes at least one object sensing device and a vehicle-to-vehicle communication device. A data collection module is provided for obtaining a sensor object data map and vehicle-to-vehicle object data map. A fusion module merges the sensor object data map and vehicle-to-vehicle object data map for generating a cumulative object data map. A tracking module estimates the relative position of the remote vehicles to the host vehicle.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott R Jagolinzer whose telephone number is (571)272-4180. The examiner can normally be reached M-Th 8AM - 4PM Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace, can be reached at (571)272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Scott R. Jagolinzer
Examiner
Art Unit 3665
/S.R.J./Examiner, Art Unit 3665 /CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665