DETAILED ACTION
The following non-final Office action is in response to Application No. 18/352,477, filed on 07/14/2023. This communication is the first action on the merits.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-20 are currently pending and have been rejected as follows.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 07/14/2023 and 03/19/2025 comply with the provisions of 37 CFR 1.97 and are being considered.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. The claim limitation is:
The “storage medium” in Claims 13-16. Sufficient corresponding structure is disclosed in the instant specification for one having ordinary skill in the art to understand that this “storage medium” corresponds to “any available storage media (or computer-readable medium) that can be accessed by a general-use computer.”
Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
"[T]he ordinary and customary meaning of a claim term is the meaning that the term would have to a person of ordinary skill in the art in question at the time of the invention, i.e., as of the effective filing date of the patent application." Phillips v. AWH Corp., 415 F.3d 1303, 1313, 75 USPQ2d 1321, 1326 (Fed. Cir. 2005) (en banc); Sunrace Roots Enter. Co. v. SRAM Corp., 336 F.3d 1298, 1302, 67 USPQ2d 1438, 1441 (Fed. Cir. 2003); Brookhill-Wilk 1, LLC v. Intuitive Surgical, Inc., 334 F.3d 1294, 1298, 67 USPQ2d 1132, 1136 (Fed. Cir. 2003) ("In the absence of an express intent to impart a novel meaning to the claim terms, the words are presumed to take on the ordinary and customary meanings attributed to them by those of ordinary skill in the art.").
From Merriam-Webster:
Passband – a band of frequencies (as in a radio circuit or a light filter) that is transmitted with maximum efficiency
Claim 2 recites “a pass band of approximately 1 Hz,” which is unclear in scope because 1 Hz is a single frequency, not a range of frequencies. Applicant may have intended to recite “a pass band centered at approximately 1 Hz.”
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. A subject matter eligibility analysis is set forth below. See MPEP 2106.
Specifically, Claim 1 recites:
A method of dynamic, real-time generation of a blended output from a plurality of sensors, the method comprising:
at a frame rate, periodically storing samples of outputs of the plurality of sensors as stored samples;
filtering the stored samples separately for each of the plurality of sensors with a bandpass filter over a time scale characteristic of a type of error for the plurality of sensors to produce filtered samples;
storing the filtered samples;
at an accumulation rate, iteratively updating a covariance matrix based on the filtered samples until a selected number of filtered samples have been processed,
removing data from the covariance matrix for any of the plurality of sensors that have failed;
and calculating, based on the covariance matrix, changes to real-time coefficients to be applied to the outputs of each sensor of the plurality of sensors;
and at the frame rate, applying the changes to the real-time coefficients;
and calculating the blended output for the plurality of sensors based on the real-time coefficients.
The claim limitations constituting the abstract idea are identified in the Step 2A analysis below; the remaining limitations are “additional elements.”
Step 1:
Under Step 1 of the analysis, Claim 1 belongs to a statutory category, namely it is a method claim.
Step 2A – Prong 1:
This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
In the instant case, Claim 1 is found to recite at least one judicial exception (i.e., abstract idea), namely a mental process and mathematical calculation. This can be seen in the following claim limitations: “filtering the stored samples separately for each of the plurality of sensors with a bandpass filter over a time scale characteristic of a type of error for the plurality of sensors to produce filtered samples,” “at an accumulation rate, iteratively updating a covariance matrix based on the filtered samples until a selected number of filtered samples have been processed,” “removing data from the covariance matrix for any of the plurality of sensors that have failed,” “calculating, based on the covariance matrix, changes to real-time coefficients to be applied to the outputs of each sensor of the plurality of sensors,” “at the frame rate, applying the changes to the real-time coefficients,” and “calculating the blended output for the plurality of sensors based on the real-time coefficients,” which are both mathematical calculations and mental processes. They are merely data observations, evaluations, and manipulations performed to reduce error in a blended data output from multiple sources. Additionally, filtering by applying a bandpass filter [See Specification, Paragraph [0036] and mathematical function in Fig. [2] block 204], updating and removing data from a covariance matrix [See Specification Paragraph [0024] and Eq. [15], which demonstrates the substitution performed in the covariance matrix], calculating and applying changes to the coefficients [See Specification Fig. [3] blocks 320, 322, Paragraph [0047]; also Fig. [2] block 208, Paragraph [0047]], and calculating the blended output [See Specification Fig. [2] block 26, Paragraph [0038]] from the sensors are all mathematical calculations to reduce error in the blended output array.
Although the claim mentions a plurality of sensors, their sole function in the claimed method is to output data, with no explanation of how or what data is supplied; thus, the sensors cannot be considered “additional elements” and remain within the abstract idea.
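The purely mathematical character of the recited covariance-matrix and coefficient operations may be illustrated by the following sketch. The sketch is illustrative only: all variable names, the running-covariance update form, and the inverse-variance weighting are hypothetical assumptions of the examiner and are not taken from the instant specification.

```python
import numpy as np

def update_covariance(P, x, n_seen):
    # Iteratively update a running (sample) covariance estimate with one
    # vector of filtered samples x (one entry per sensor). Hypothetical form.
    delta = np.outer(x, x)
    return P + (delta - P) / (n_seen + 1)

def blend(outputs, P):
    # Weight each sensor inversely to its variance, normalize the
    # coefficients so they sum to one, and form the blended output.
    w = 1.0 / np.diag(P)
    w = w / w.sum()
    return float(w @ outputs)

rng = np.random.default_rng(0)
P = np.zeros((3, 3))
for k in range(7):  # e.g., 2n+1 samples for n = 3 sensors
    P = update_covariance(P, rng.normal(size=3), k)
blended = blend(np.array([1.0, 1.1, 0.9]), P)
```

Each step is ordinary arithmetic on arrays of numbers and, for a small number of sensors and samples, is capable of being performed with pen and paper.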
Step 2A – Prong 2:
Step 2A, prong 2 of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception(s) into a practical application of the exception. This evaluation is performed by (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (b) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application.
In addition to the abstract ideas recited in Claim 1, the claimed method recites additional elements including: “real-time generation of a blended output from a plurality of sensors,” “at a frame rate, periodically storing samples of outputs of the plurality of sensors as stored samples,” and “storing the filtered samples.” No specific practical application is associated with the claimed method. For example, the generated blended output is merely the data product of the blending algorithm, while the stored samples are merely filtered and then stored again as filtered samples before being used in the previously addressed calculations; as such, they are not used outside of the identified judicial exceptions.
Thus, under Step 2A, prong 2 of the analysis, even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application and the claim is directed to the judicial exception.
Step 2B:
Under Step 2B, Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A Prong 2, are found to be merely data gathering and output steps recited at a high level of generality, thus amounting to “insignificant extra-solution” activities. See MPEP 2106.05(g), “Insignificant Extra-Solution Activity.” Such insignificant extra-solution activity, e.g., data gathering and output, when re-evaluated under Step 2B is further found to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, and electronically scanning or extracting data from a physical document).
Therefore, similarly the combination and arrangement of the above identified additional elements when analyzed under Step 2B also fails to necessitate a conclusion that Claim 1 amounts to significantly more than the abstract idea.
With regards to the dependent claims, Claims 2-12 merely further expand upon the abstract idea and do not set forth further additional elements that integrate the recited abstract idea into a practical application or amount to significantly more. Therefore, these claims are found ineligible for the reasons described for parent Claim 1. Specifically:
Claims 2-5 further recite “filtering the stored samples comprises filtering the stored samples with the bandpass filter with a pass band of approximately 1 Hz,” “filtering the stored samples comprises filtering the stored samples with the bandpass filter with a pass band selected based on the time scale also associated with a characteristic of the plurality of sensors,” “the pass band is selected based on the time scale associated with random walk,” and “wherein iteratively updating comprises iteratively updating until 2n+1 filtered samples have been processed, wherein n is a number of sensors in the plurality of sensors,” respectively, with no additional elements. Similar to parent Claim 1, “filtering the stored samples” via a bandpass filter of a specified width or of a width “selected based on the time scale associated with random walk” is both a mental process and mathematical calculation that combines data evaluation and selection with a calculation derived from a stochastic model. The “iteratively updating” claim merely specifies the cadence at which the covariance matrix is to be updated and is part of the mental process and mathematical calculation discussed with regard to Claim 1. Thus, Claims 2-5 do not provide any “additional elements” that would render the claims into practical application, or amount to more than the judicial exception of Claim 1.
Claims 6 and 7 further recite “removing sensors comprises periodically testing each of the plurality of sensors” and “when a sensor has failed, setting a diagonal associated with the sensor in the covariance matrix to a high number compared to other values in the covariance matrix and setting off-diagonal terms to zero,” respectively, with no additional elements. Both of these elements expand on the “removing data from the covariance matrix…” element in Claim 1 by reciting some test for sensor failure and, in the case of (a) failed sensor(s), by setting the failed sensor's off-diagonal terms in the covariance matrix to zero. As the test for the failed sensor is unspecified and the sensor itself is within the abstract idea for this method claim, the built-in test is assumed to rely on data observation, manipulation, and judgments and does not set forth any additional element to render the mental process of the corresponding element in Claim 1 into practical application. Similarly, the manipulation of the covariance matrix is part of the mathematical derivation described in the instant specification [see Paragraphs [0015]-[0024], specifically Eq. [15]] and does no more than expand upon the abstract idea of parent Claim 6. Note that the removal of sensors in Claim 6 is not referring to the physical removal of a tangible sensor, but to the removal of the data associated with the failed sensor, and is still considered a mathematical calculation and mental process. Thus, Claims 6 and 7 do not provide any “additional elements” that would render the claims into practical application, or amount to more than the judicial exception of Claim 1.
Claims 8 and 9 further recite “calculating the blended output for the plurality of sensors comprises: calculating a first output using the real-time coefficients,” “calculating a second output using calibration coefficients,” “blending the first output with the second output to provide a blended output for the plurality of sensors,” “blending the first output with the second output comprises: applying a high pass filter to the first output,” “applying a low pass filter to the second output,” and “combining an output of the low pass filter and an output of the high pass filter to produce the blended output for the plurality of sensors” with no additional elements. The calculation of the virtual sensor output from the real-time and calibration coefficients and blending of the outputs in Claim 8 merely expands on the calculation of changes to real-time coefficients recited in Claim 1 [See derivation in Paragraphs [0015]-[0024]] and thus does no more than expand upon the abstract idea of Claim 1. Similarly, the application of the high pass and low pass filters in Claim 9 only further specify the “blending” recited in parent Claim 8. Thus, Claims 8 and 9 do not provide any “additional elements” that would render the claims into practical application, or amount to more than the judicial exception of Claim 1.
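The complementary high-pass/low-pass blending recited in Claims 8 and 9 can likewise be expressed as elementary arithmetic, as the following illustrative sketch shows. The first-order filter form and the smoothing constant `alpha` are hypothetical assumptions, not taken from the instant specification.

```python
import numpy as np

def complementary_blend(first, second, alpha=0.98):
    # First-order complementary pair: a high-pass applied to `first`
    # (the real-time-coefficient output) and a matching low-pass applied
    # to `second` (the calibration-coefficient output); `alpha` is a
    # hypothetical smoothing constant.
    low = np.empty_like(second)
    high = np.empty_like(first)
    low[0], high[0] = second[0], 0.0
    for k in range(1, len(first)):
        low[k] = alpha * low[k - 1] + (1 - alpha) * second[k]      # low-pass
        high[k] = alpha * (high[k - 1] + first[k] - first[k - 1])  # high-pass
    return low + high  # combined blended output

t = np.linspace(0.0, 1.0, 200)
blended = complementary_blend(np.sin(40.0 * t), np.sin(2.0 * t))
```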
Claims 10-12 further recite “after applying the changes to the real-time coefficients, renormalizing the real-time coefficients so that a sum of the real-time coefficients equals one,” “calculating changes to the real-time coefficients comprises: calculating a difference between a new set of real-time coefficients and a prior set of real-time coefficients,” “multiplying the difference by a scalar, α, to produce a set of changes to the real-time coefficients,” and “the scalar, α, is selected such that: α<< (update rate/frame rate),” with no additional elements. The renormalization recited in Claim 10 is a mathematical calculation performed to process the real-time coefficients prior to applying them to the virtual sensor output. Claim 11 recites calculating a difference, then multiplying that difference by a scalar, which is a mathematical calculation and merely further specifies the mathematical calculation recited in the calculating changes to real-time coefficients element recited in Claim 1. Claim 12 merely specifies a quantifiable requirement for the scalar recited in parent Claim 11, making it a mathematical calculation and mental process as well. Thus, Claims 10-12 do not provide any “additional elements” that would render the claims into practical application, or amount to more than the judicial exception of Claim 1.
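The renormalization and scaled-difference update recited in Claims 10-12 similarly reduce to elementary arithmetic, as the following illustrative sketch shows. The variable names and the value of `alpha` are hypothetical and are not taken from the instant specification.

```python
import numpy as np

def update_coefficients(prior, new, alpha):
    # Compute the difference between a new and a prior set of real-time
    # coefficients, scale it by the small scalar alpha
    # (alpha << update rate / frame rate), apply the change, and
    # renormalize so the coefficients sum to one.
    change = alpha * (new - prior)
    c = prior + change
    return c / c.sum()

c = update_coefficients(np.array([0.5, 0.3, 0.2]),
                        np.array([0.4, 0.4, 0.2]), alpha=0.01)
```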
Claim 17 recites:
A program product comprising a non-transitory computer-readable medium on which program instructions configured to be executed by at least one processor are embodied, wherein when executed by the at least one processor, the program instructions cause the at least one processor to perform a method comprising:
at a frame rate, periodically storing samples of outputs of a plurality of sensors to produce stored samples;
filtering the stored samples separately for each of the plurality of sensors with a bandpass filter over a time scale characteristic of a type of error for the plurality of sensors to produce filtered samples;
storing the filtered samples;
at an accumulation rate, iteratively updating a covariance matrix based on the filtered samples until a selected number of filtered samples have been processed,
removing data from the covariance matrix for any of the plurality of sensors that have failed;
and calculating, based on the covariance matrix, changes to real-time coefficients to be applied to the outputs of each sensor of the plurality of sensors;
and at the frame rate, applying the changes to the real-time coefficients;
and calculating a blended output for the plurality of sensors based on the real-time coefficients.
Step 1:
Under Step 1 of the analysis, Claim 17 belongs to a statutory category, namely it is a product claim.
Step 2A – Prong 1:
This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
In the instant case, Claim 17 is found to recite at least one judicial exception, namely an abstract idea. This can be seen in the following claim limitations: “filtering the stored samples separately for each of the plurality of sensors with a bandpass filter over a time scale characteristic of a type of error for the plurality of sensors to produce filtered samples,” “at an accumulation rate, iteratively updating a covariance matrix based on the filtered samples until a selected number of filtered samples have been processed,” “removing data from the covariance matrix for any of the plurality of sensors that have failed,” “calculating, based on the covariance matrix, changes to real-time coefficients to be applied to the outputs of each sensor of the plurality of sensors,” “at the frame rate, applying the changes to the real-time coefficients,” and “calculating a blended output for the plurality of sensors based on the real-time coefficients.” The calibration and real-time coefficients are utilized in calculations recited in this claim, namely the blending of the MEMS sensor output [See Specification Fig. [2] block 26, Paragraph [0038]], and thus these limitations recite both mental processes and mathematical calculations. Similarly, applying the bandpass filter to filter the sensor output requires mathematical calculations and data judgments/observations/manipulations [See Specification, Paragraph [0036] and mathematical function in Fig. [2] block 204] that are capable of being performed mentally or with pen and paper, rendering it a mental process and mathematical calculation. Updating the covariance matrix based on the filtered samples [See Specification Paragraph [0024] and Eq. [15], which demonstrates the substitution performed in the covariance matrix], calculating changes to the real-time coefficients [See Specification Fig. [3] blocks 320, 322, Paragraph [0047]], applying the changes to the real-time coefficients [See Specification Fig. [2] block 208, Paragraph [0047]], and finally calculating the output from the coefficients [See Specification Fig. [2] block 26, Paragraph [0038]] are all mental processes and mathematical calculations as well. They are merely data observations, evaluations, and manipulations performed to reduce error in a blended data output from multiple sources.
Step 2A – Prong 2:
Step 2A, prong 2 of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception(s) into a practical application of the exception. This evaluation is performed by (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (b) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application.
In addition to the abstract ideas recited in Claim 17, the claimed method and product recites the following additional elements: “A program product comprising a non-transitory computer-readable medium on which program instructions configured to be executed by at least one processor are embodied, wherein when executed by the at least one processor, the program instructions cause the at least one processor to perform a method,” “at a frame rate, periodically storing samples of outputs of a plurality of sensors to produce stored samples,” and “storing the filtered samples.” The claimed product does no more than apply the judicial exception on a generic processor. Additionally, the stored samples are merely filtered then stored again as filtered samples before being used in the previously addressed calculations and as such, are not used outside of the identified judicial exceptions.
Thus, under Step 2A, prong 2 of the analysis, even when viewed in combination, these additional elements do not integrate the recited judicial exceptions into a practical application and the claim is directed to the judicial exception.
Step 2B:
Under Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A Prong 2, merely amount to a general-purpose computer system that attempts to apply the abstract idea in a technological environment, as well as insignificant extra-solution activities. The “program product comprising…” limitation recites instructions to implement the judicial exceptions of the claim on generic computer hardware elements (i.e., the non-transitory computer-readable medium and at least one processor). This is equivalent to adding the words “apply it,” and mere instructions to apply a judicial exception on a general-purpose computer do not integrate the abstract idea into a practical application. See MPEP 2106.05(f). The “…storing samples…” and “…storing the filtered samples” limitations are found to be merely data gathering and output steps recited at a high level of generality, thus amounting to “insignificant extra-solution” activities. See MPEP 2106.05(g), “Insignificant Extra-Solution Activity.”
This can also be viewed as nothing more than an attempt to generally link the use of the judicial exceptions to the technological environment of a computer. Noting MPEP 2106.04(d)(I): “It is notable that mere physicality or tangibility of an additional element or elements is not a relevant consideration in Step 2A Prong Two. As the Supreme Court explained in Alice Corp., mere physical or tangible implementation of an exception does not guarantee eligibility. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 224, 110 USPQ2d 1976, 1983-84 (2014) ("The fact that a computer ‘necessarily exist[s] in the physical, rather than purely conceptual, realm,’ is beside the point")”. Such insignificant extra-solution activity, e.g. data gathering and output, when re-evaluated under Step 2B is further found to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, and electronically scanning or extracting data from a physical document).
Therefore, the above identified additional elements, when analyzed under Step 2B, also fail to necessitate a conclusion that Claim 17, amounts to significantly more than the abstract idea.
With regards to the dependent claims, Claims 18-20 merely further expand upon the abstract idea and do not set forth further additional elements that integrate the recited abstract idea into a practical application or amount to significantly more. Therefore, these claims are found ineligible for the reasons described for parent Claim 17. Specifically:
Claims 18-20 further recite “calculating the blended output for the plurality of sensors comprises: calculating a first output using the real-time coefficients,” “calculating a second output using calibration coefficients,” “blending the first output with the second output to provide a blended output for the plurality of sensors,” “blending the first output with the second output comprises: applying a high pass filter to the first output,” “applying a low pass filter to the second output,” “combining an output of the low pass filter and an output of the high pass filter to produce the blended output for the plurality of sensors,” “calculating changes to the real-time coefficients comprises: calculating a difference between a new set of real-time coefficients and a prior set of real-time coefficients,” and “multiplying the difference by a scalar, α, to produce a set of changes to the real-time coefficients,” with no additional elements beyond the program product and storing limitations recited in Claim 17. All of these elements recite abstract ideas, and are both mental processes and mathematical calculations (i.e., calculating coefficients, calculating blended output, applying high and low pass filter, calculating change in coefficients, multiplying by a scalar). As in Claim 17, the program product in Claims 18-20 does not render any of the judicial exceptions into practical application, nor do any of the claims in combination with the additional elements amount to more than the judicial exceptions.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-8, 11, 12, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Glevarec et al. (US 2020/0088521 A1) in view of Adams et al. (US 11,897,486 B1), and further in view of Nakaoka et al. (US 2023/0202486 A1).
Regarding Claim 1, Glevarec discloses a method of dynamic, real-time generation of a blended output from a plurality of sensors [Paragraph [0025]-[0027] – “…the present invention proposes a positioning system comprising: several inertial measurement units, each inertial measurement unit comprising at least one inertial sensor, accelerometer or gyrometer, configured to provide an inertial signal representative of an acceleration or an angular speed of rotation of the inertial measurement unit, at least one common sensor, configured to provide a measurement of a positioning parameter of the system,” – inertial signal is real-time; Paragraph [0068] – “the inertial measurement units are N in number, and the fusion module is configured to: determine said mean estimate of the positioning parameter of the system by calculating the mean of a set of k corrected estimates of said positioning parameter…” - blended output].
Glevarec does not disclose at a frame rate, periodically storing samples of outputs of the plurality of sensors as stored samples, filtering the stored samples separately for each of the plurality of sensors with a bandpass filter over a time scale characteristic of a type of error for the plurality of sensors to produce filtered samples, or storing the filtered samples.
However, Adams discloses at a frame rate, periodically storing samples of outputs of the plurality of sensors as stored samples [Col. 5, Ln. 64- Col. 6, Ln. 7 – “Examples of the present disclosure may include other processing, as well. For example, frequencies between the IMUs 122, 124, and 126 may be different…That is, the frequencies may be set differently or the same frequency setting (e.g., 200 Hz) may include slight inconsistencies (e.g., IMUs outputting at slightly different rates). As such, the least common denominator among the IMU data 128, 130, and 132 may be selected and/or the data may be sampled (e.g., below the Nyquist rate) to account for differences, such as based on slight time delays.”];
filtering the stored samples separately for each of the plurality of sensors [Col. 6, Ln. 28-36 – “In examples, the filter 148 may output filtered IMU data 154 associated with the IMUs 122, 124, and 126, such as filtered values associated with a gyroscope and/or accelerometer in the x-axis 114, the y-axis 116, and/or the z-axis 118. For instance, the filtered IMU data 154 may include filtered IMU A data that is sampled from the IMU A data 128, filtered IMU B data that is sampled from the IMU B data 130, and filtered IMU C data that is sampled from the IMU C data 132…”] with a bandpass filter [Col. 6, Ln. 9-16 – “In some examples, the present disclosure includes a filter 148 that filters the transformed IMU data 146. The filter 148 may include various types of filters, such as a high-frequency filter 150 to reduce sensor noise in the data and a low-frequency filter 152 to reduce sensor bias in the data. In some examples, the high-frequency filter 150 and the low-frequency filter 152 may be implemented as a bandpass filter.”] over a time scale characteristic of a type of error for the plurality of sensors to produce filtered samples [Col. 6, Ln. 43 – Col. 7, Ln. 3 – “Parameters associated with, and implementation of, the filter 148 may be optimized in various manners…In some examples, a delay associated with the time constant is shorter than time duration associated with a downstream component receiving sensor data using the sensor data for a downstream process (e.g., localization, trajectory planning, etc.). As such, the delay is configured to be short enough to detect an error before potentially inaccurate data is communicated to other systems.”];
storing the filtered samples [Col. 14, Ln. 38-39 – “…In addition, the memory 418 may store a sensor consensus monitor 470.”; Col. 15, Ln. 61 – Col. 16, Ln. 2 – “In some examples, a source of information received by the localization component 424 may depend on operations of the sensor consensus monitor 470. That is, the sensor consensus monitor 470 may perform at least some of the operations described with respect to FIGS. 1, 2, and 3, such as receiving sensor data (e.g., IMU data), filtering the sensor data, translating the filtered sensor data to a common frame, and determining consensus among the data (e.g., based on discrepancy).” – consensus monitor is stored on memory and implements the bandpass filter, so filtered data is stored here as well].
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to filter the sensor signals, as disclosed by Adams, prior to blending the sensor output, as disclosed by Glevarec, in order to exclude erroneous and inaccurate data.
The combination of Glevarec and Adams discloses at an accumulation rate, iteratively updating a covariance matrix based on the filtered samples [Glevarec, Paragraph [0235] – “During the successive executions of step b1), the navigation filter F.sub.i.sup.k determines step by step, iteratively, the estimated error {circumflex over (x)}.sub.n.sup.k,i at the calculation step n, as well as an estimate P.sub.n.sup.k,i of the covariance matrix of the error vector x.” – see also Fig. [3] step b1; Paragraph [0239]-[0240] – “…a step b12) of updating this error (sometimes called adjustment step), during which the estimated error is adjusted on the basis of the measurements provided by the common sensors C1, . . . , Cp. The navigation filter F.sup.k.sub.i executes the propagation and updating steps at each calculation step.”] until a selected number of filtered samples have been processed [Paragraph [0226] – “…the filter executing steps a) and b) several times successively.”].
The combination of Glevarec and Adams does not disclose removing data from the covariance matrix for any of the plurality of sensors that have failed.
However, Nakaoka discloses removing data from the covariance matrix for any of the plurality of sensors that have failed [Paragraph [0095] – “Returning to FIG. 1, the posture estimation device performs processing (rotational error component removal processing) of removing a rotational error component around a reference vector in the error covariance matrix .Math..sub.x, .sub.k.sup.2 being the error information (rotational error-component removal step S4). The reference vector is a vector observed by the observation section. In the embodiment, the reference vector is a gravitational acceleration vector observed by the acceleration sensor being the observation section. In the embodiment, a rotation error around the reference vector is an azimuth error.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to remove error components from the error covariance matrix in the event of sensor failure, as disclosed by Nakaoka, during the update of the error vector and covariance matrix disclosed by the combination of Glevarec and Adams, in order to improve the accuracy of the sensor blending output in an IMU.
The combination of Glevarec, Adams, and Nakaoka discloses calculating, based on the covariance matrix, changes to real-time coefficients to be applied to the outputs of each sensor of the plurality of sensors [Glevarec, Paragraph [0045] – “each individual navigation filter is configured to that, during a first execution of step a), the covariance matrix P.sub.n.sup.k,i of the deviation between said estimate of the positioning parameter of the system and this positioning parameter is estimated as a function of an initial covariance matrix, and of a propagation noise matrix…”; [0048]-[0052] – “the positioning system further comprises, for each inertial measurement unit, a conventional Kalman filter configured to: determine an additional estimate of said positioning parameter of the system, on the basis of the inertial signal provided by said inertial measurement unit, estimate an additional covariance matrix P.sub.n.sup.1,i of a deviation between said additional estimate and said positioning parameter of the system, and to determine a corrected additional estimate of said positioning parameter by adding to said previously determined additional estimate a corrective term equal to an additional correction gain multiplied by a difference between, on the one hand, said measurement of the positioning parameter of the system and, on the other hand, the product of the measurement matrix multiplied by said additional estimate or multiplied by a sum of said additional estimate and of an additional estimated error affecting said additional estimate, the additional correction gain being determined as a function of the variance of the measurement noise of the common sensor; the additional correction gain is equal to the following quantity: P.sub.n.sup.1,i H.sub.n.sup.T (S.sup.1.sub.n).sup.−1, where H.sub.n.sup.T is the transposed matrix of the measurement matrix H.sub.n and where (S.sup.1.sub.n).sup.−1 is the inverse of an additional innovation covariance matrix S.sup.1.sub.n 
equal to H.sub.n P.sub.n.sup.1,i H.sub.n.sup.T+R.sub.n, R.sub.n being said variance of said measurement noise” – correction gain is the real-time coefficient calculated from covariance matrix and additional correction gain is the change in real-time coefficient];
at the frame rate [Glevarec, Paragraph [0093] – “each individual navigation filter executing several times successively the set of steps a) and b) without taking into account said mean estimate determined by the fusion module”; Paragraph [0281] – “where Δt is a time step (between two successive calculation steps)” – time between calculation steps is time between successive execution of steps a) and b)], applying the changes to the real-time coefficients [Paragraph [0067] – “the respective correction gains of the different navigation filters are equal to a same common correction gain whose value is, at each repetition of steps b), calculated only once for all of said navigation filters”; Paragraph [0091] – “b) determines a corrected estimate of said positioning parameter by adding to said estimate previously determined at step a) a corrective term equal to a correction gain multiplied by a difference between, on the one hand, said measurement of the positioning parameter of the system, and, on the other hand, the product of a measurement matrix multiplied by said estimate of the positioning parameter or multiplied by a sum of said estimate and of an estimated error affecting said estimate…” – additional correction gain is applied];
and calculating the blended output for the plurality of sensors based on the real-time coefficients [Glevarec, Paragraph [0091]-[0092] – “b) determines a corrected estimate of said positioning parameter…and at least one fusion module determines a mean estimate of said positioning parameter of the system by calculating a mean of a given number of said corrected estimates of the positioning parameter, said number being higher than or equal to two and lower than or equal to the number of inertial measurement units that are included in the system”].
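As a non-limiting sketch of the mapped limitations of Claim 1, covariance-derived real-time coefficients can be computed from a covariance matrix and applied to the sensor outputs to produce a blended output. The inverse-variance weighting below is an assumption used only for illustration (Glevarec's fusion module computes a mean of corrected estimates, and the correction gains there play the role of the coefficients):

```python
import numpy as np

def realtime_coefficients(P):
    """Illustrative derivation of per-sensor blending coefficients from a
    covariance matrix P: inverse-variance weights taken from the diagonal,
    normalized to sum to one. The weighting scheme is an assumption chosen
    to illustrate 'calculating, based on the covariance matrix, changes to
    real-time coefficients'; it is not asserted to be Glevarec's gain."""
    w = 1.0 / np.diag(P)
    return w / w.sum()

def blended_output(samples, coeffs):
    # Weighted combination of the per-sensor outputs (cf. Glevarec's mean
    # of corrected estimates, generalized to covariance-derived weights).
    return float(np.dot(coeffs, samples))
```

For example, a sensor with variance 4 receives one quarter of the weight of a sensor with variance 1, so a failed (high-variance) sensor contributes little to the blend.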
Regarding Claim 2, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 1, wherein filtering the stored samples comprises filtering the stored samples with the bandpass filter with a pass band of approximately 1 Hz [Adams, Col. 6, Ln. 9-16 – “In some examples, the present disclosure includes a filter 148 that filters the transformed IMU data 146. The filter 148 may include various types of filters, such as a high-frequency filter 150 to reduce sensor noise in the data and a low-frequency filter 152 to reduce sensor bias in the data. In some examples, the high-frequency filter 150 and the low-frequency filter 152 may be implemented as a bandpass filter.”; Col. 6, Ln. 43-54 – “Parameters associated with, and implementation of, the filter 148 may be optimized in various manners...For instance, in some examples, the filter may include an exponential filter with an optimized time constant (e.g., between about 0.5 seconds and about 1.0 seconds).”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply a bandpass filter including a low frequency pass band (such as about 1 Hz), as disclosed by Adams, prior to blending the sensor output as taught by the combination of Glevarec, Adams, and Nakaoka, to effectively filter out high-frequency noise/drift associated with the sensors.
Regarding Claim 3, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 1, wherein filtering the stored samples comprises filtering the stored samples with the bandpass filter with a pass band selected based on the time scale also associated with a characteristic of the plurality of sensors [Adams, Col. 6, Ln. 9-16 – “In some examples, the high-frequency filter 150 and the low-frequency filter 152 may be implemented as a bandpass filter.”; Col. 6, Ln. 43 – Col. 7, Ln. 3 – “Parameters associated with, and implementation of, the filter 148 may be optimized in various manners. For example, time delays may be inserted at various steps to tune outputs, such as by waiting a period of time (e.g., or for a quantity of data values to be received) to receive a sufficiently robust data set. Such time delays may also be introduced to account for differences in starting times for the IMUs, to account for differences in electric path lengths from the IMUs to the receiving computing system, and the like… In some examples, a delay associated with the time constant is shorter than time duration associated with a downstream component receiving sensor data using the sensor data for a downstream process (e.g., localization, trajectory planning, etc.). As such, the delay is configured to be short enough to detect an error before potentially inaccurate data is communicated to other systems.”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply a bandpass filter with a pass band based on a time scale associated with a characteristic of the sensors, as disclosed by Adams, prior to blending the sensor output as disclosed by the combination of Glevarec, Adams, and Nakaoka, to effectively filter out noise/drift associated with the sensors.
Regarding Claim 6, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 1, wherein removing sensors comprises periodically testing each of the plurality of sensors [Nakaoka, Paragraph [0096] – “Then, the posture estimation device performs off-scale recovery processing (error information adjustment step S5). Specifically, in the error information adjustment step S5, the posture estimation device determines whether or not the output of the angular velocity sensor is within the effective range. When it is determined that the output of the angular velocity sensor is not within the effective range, the posture estimation device performs processing of increasing the posture error component in the error covariance matrix…”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to test the sensors for failure, as disclosed by Nakaoka, before removing failed sensors, as disclosed by the combination of Glevarec, Adams, and Nakaoka, in order to identify which sensors should be removed.
Regarding Claim 7, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 6.
While the combination does not expressly disclose wherein, when a sensor has failed, setting a diagonal associated with the sensor in the covariance matrix to a high number compared to other values in the covariance matrix and setting off-diagonal terms to zero,
Nakaoka does, however, set up a matrix in such a manner [Nakaoka, Paragraph [0062] – “Initial values of the state vector x and the error covariance matrix .Math..sub.x.sup.2 are given as in Expression (30).” – see the error covariance matrix in Expression (30), where the diagonal entries are set to a high number and the off-diagonal entries are set to zero].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to set the diagonals of the error covariance matrix to high values and set the off-diagonal terms to zero, as demonstrated in the covariance matrix of Nakaoka, to remove failed sensors from the sensor array disclosed by the combination of Glevarec, Adams, and Nakaoka, in order to improve the accuracy of sensor blending in the IMU.
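For illustration of the Claim 7 limitation, the manipulation of the covariance matrix upon sensor failure can be sketched as follows; the magnitude chosen for the diagonal entry and the function name are assumptions, not taken from the application or the cited art:

```python
import numpy as np

def remove_failed_sensor(P, idx, large=1e9):
    """Illustrative sketch of the Claim 7 limitation: when the sensor at
    index `idx` has failed, set its diagonal entry in the covariance matrix
    to a number high compared to the other values and set its off-diagonal
    terms to zero, so covariance-derived blending weights effectively
    exclude it. The magnitude 1e9 is an assumption for illustration."""
    P = P.copy()
    P[idx, :] = 0.0   # zero the off-diagonal terms in the sensor's row
    P[:, idx] = 0.0   # zero the off-diagonal terms in the sensor's column
    P[idx, idx] = large  # diagonal set high relative to other entries
    return P
```

Because inverse-variance style weights scale as 1/P[i, i], a very large diagonal entry drives the failed sensor's contribution toward zero without resizing the matrix.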
Regarding Claim 8, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 1, wherein calculating the blended output for the plurality of sensors comprises: calculating a first output using the real-time coefficients [Glevarec, Paragraph [0068]-[0069] – “the inertial measurement units are N in number, and the fusion module is configured to: determine said mean estimate of the positioning parameter of the system by calculating the mean of a set of k corrected estimates of said positioning parameter, among the N corrected estimates of this positioning parameter respectively determined by the individual navigation filters, the integer number k being lower than or equal to N”];
calculating a second output [Glevarec, Paragraph [0070] – “determine at least another mean estimate of the positioning parameter of the system, by calculating the mean of another set of k corrected estimates of said positioning parameter among the N corrected estimates of this positioning parameter respectively determined by the individual navigation filters.”] using calibration coefficients [Paragraph [0076], [0080], [0083] – “each navigation filter is configured to determine an estimate of a state vector of the system, one of the components of this state vector being said positioning parameter, another component comprising one of the following magnitudes…a calibration residue parameter of one of the inertial measurement units…a calibration parameter of said common sensor…”];
and blending the first output with the second output to provide a blended output for the plurality of sensors [Glevarec, Paragraph [0071] – “the fusion module is configured so as, for each set of k corrected estimates of said positioning parameter among the N corrected estimates of this positioning parameter determined by the individual navigation filters, to determine a mean estimate of the positioning parameter equal to the mean of the k corrected estimates of the positioning parameter included in said set”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate and blend successive outputs, as disclosed by Glevarec, in the IMU sensor blending method disclosed by the combination of Glevarec, Adams, and Nakaoka, to provide a real-time (time-varying) output.
Regarding Claim 11, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 1, wherein calculating changes to the real-time coefficients comprises: calculating a difference between a new set of real-time coefficients and a prior set of real-time coefficients [Glevarec, Paragraph [0055] –“determine an expected covariance matrix for said difference, as a function of the additional covariance matrices P.sub.n.sup.1,i and P.sub.n.sup.,j of the two additional estimates that have been respectively determined on the basis of the inertial signals provided by said two inertial measurement units, and as a function of the covariance matrices P.sub.n.sup.k,i and P.sub.n.sup.k,j of the two estimates of the positioning parameter that have been respectively determined on the basis of these same inertial signals,”’; Paragraph [0058]-[0059] – “said expected covariance matrix is equal to the sum of the additional covariance matrices P.sub.n.sup.1,i and P.sub.n.sup.1,j, minus an estimate of the correlation between said corrected additional estimates, whose difference has been calculated; said estimate of the correlation between said corrected additional estimates is determined as a function of a difference between, on the one hand, the sum of the additional covariance matrices P.sub.n.sup.1,i and P.sub.n.sup.1,j determined by the conventional Kalman filters (without augmented variance), and, on the other hand, the sum of the covariance matrices P.sub.n.sup.k,i and P.sub.n.sup.k,j” – see also expected covariance equation following Paragraph [0061]];
and multiplying the difference by a scalar, α, to produce a set of changes to the real-time coefficients [Glevarec, Paragraph [0058]-[0059] – “said expected covariance matrix is equal to the sum of the additional covariance matrices P.sub.n.sup.1,i and P.sub.n.sup.1,j, minus an estimate of the correlation between said corrected additional estimates, whose difference has been calculated; said estimate of the correlation between said corrected additional estimates is determined as a function of a difference between, on the one hand, the sum of the additional covariance matrices P.sub.n.sup.1,i and P.sub.n.sup.1,j determined by the conventional Kalman filters (without augmented variance), and, on the other hand, the sum of the covariance matrices P.sub.n.sup.k,i and P.sub.n.sup.k,j” – see also expected covariance equation following Paragraph [0061] and note the 1/(k-1) scalar, which is, in this case the fusion ratio].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate the changes to the fusion coefficients and the overall blended output, as disclosed by Glevarec, Adams, and Nakaoka, in order to improve the accuracy of sensor blending output in an IMU.
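For illustration of the Claim 11 steps, the recited coefficient update reduces to a difference scaled by the scalar α. The value of α and the function name below are assumptions for illustration (cf. Claim 12, which bounds α relative to the update-rate-to-frame-rate ratio):

```python
import numpy as np

def coefficient_changes(new_coeffs, prior_coeffs, alpha=0.01):
    """Illustrative sketch of the Claim 11 limitation: calculate the
    difference between a new set of real-time coefficients and a prior set,
    then multiply that difference by a scalar alpha to produce the set of
    changes to the real-time coefficients. alpha=0.01 is an assumption."""
    diff = np.asarray(new_coeffs, dtype=float) - np.asarray(prior_coeffs, dtype=float)
    return alpha * diff
```

A small α damps the coefficient updates, so each frame moves the applied coefficients only a fraction of the way toward the newly calculated set.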
Regarding Claim 12, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 11, wherein the scalar, α, is selected such that: α << (update rate / frame rate) [Glevarec, Paragraph [0061], [0069] – “…where k is the number of estimates of the positioning parameter whose mean is calculated by the fusion module to determine said mean estimate” – 1/k is the fusion ratio, or, the number of estimates in each update; Paragraph [0068]-[0069] – “the inertial measurement units are N in number…the integer number k being lower than or equal to N” – it is already known that the k rate is some fraction of the update rate; assuming a frame rate on the order of 1 Hz, alpha must be much less than the ratio].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select a fusion ratio that allows for a reasonable number of estimates before updating the covariance matrix, as disclosed by Glevarec in the IMU fusion calculation disclosed by the combination of Glevarec, Adams, and Nakaoka.
Regarding Claim 17, Glevarec discloses a program product comprising a non-transitory computer-readable medium on which program instructions [Paragraph [0163] – “Each module of the processing unit 10 may be made by means of a set of dedicated electronic components and/or by means of a set of instructions stored in the memory (or in one of the memories) of the processing unit 10…”] configured to be executed by at least one processor are embodied [Paragraph [0161] – “The electronic processing unit 10 comprises at least a processor, a memory, inputs for acquiring the signals provided by the inertial measurement units, and inputs for acquiring the measurements provided by the common sensors.”], wherein when executed by the at least one processor, the program instructions cause the at least one processor to perform a method [Paragraph [0163] – “…The processing unit 10 may be made as an electronic unit which is distinct from the inertial measurement units and external to these latter. It may also be made as several electronic circuits, or comprise several groups of instructions, certain of which may be integrated to the inertial measurement units themselves.”].
Glevarec does not disclose at a frame rate, periodically storing samples of outputs of a plurality of sensors to produce stored samples, filtering the stored samples separately for each of the plurality of sensors with a bandpass filter over a time scale characteristic of a type of error for the plurality of sensors to produce filtered samples, or storing the filtered samples.
However, Adams discloses at a frame rate, periodically storing samples of outputs of a plurality of sensors to produce stored samples [Col. 5, Ln. 64- Col. 6, Ln. 7 – “Examples of the present disclosure may include other processing, as well. For example, frequencies between the IMUs 122, 124, and 126 may be different, and as such, additional processing may be performed to increase the likelihood that the data being compared is associated with a same time. That is, the frequencies may be set differently or the same frequency setting (e.g., 200 Hz) may include slight inconsistencies (e.g., IMUs outputting at slightly different rates). As such, the least common denominator among the IMU data 128, 130, and 132 may be selected and/or the data may be sampled (e.g., below the Nyquist rate) to account for differences, such as based on slight time delays.”];
filtering the stored samples separately for each of the plurality of sensors [Col. 6, Ln. 28-36 – “In examples, the filter 148 may output filtered IMU data 154 associated with the IMUs 122, 124, and 126, such as filtered values associated with a gyroscope and/or accelerometer in the x-axis 114, the y-axis 116, and/or the z-axis 118. For instance, the filtered IMU data 154 may include filtered IMU A data that is sampled from the IMU A data 128, filtered IMU B data that is sampled from the IMU B data 130, and filtered IMU C data that is sampled from the IMU C data 132…”] with a bandpass filter [Col. 6, Ln. 9-16 – “In some examples, the present disclosure includes a filter 148 that filters the transformed IMU data 146. The filter 148 may include various types of filters, such as a high-frequency filter 150 to reduce sensor noise in the data and a low-frequency filter 152 to reduce sensor bias in the data. In some examples, the high-frequency filter 150 and the low-frequency filter 152 may be implemented as a bandpass filter.”] over a time scale characteristic of a type of error for the plurality of sensors to produce filtered samples [Col. 6, Ln. 43 – Col. 7, Ln. 3 – “Parameters associated with, and implementation of, the filter 148 may be optimized in various manners…In some examples, a delay associated with the time constant is shorter than time duration associated with a downstream component receiving sensor data using the sensor data for a downstream process (e.g., localization, trajectory planning, etc.). As such, the delay is configured to be short enough to detect an error before potentially inaccurate data is communicated to other systems.”];
and storing the filtered samples [Col. 14, Ln. 38-39 – “In addition, the memory 418 may store a sensor consensus monitor 470.”; Col. 15, Ln. 61 – Col. 16, Ln. 2 – “In some examples, a source of information received by the localization component 424 may depend on operations of the sensor consensus monitor 470. That is, the sensor consensus monitor 470 may perform at least some of the operations described with respect to FIGS. 1, 2, and 3, such as receiving sensor data (e.g., IMU data), filtering the sensor data, translating the filtered sensor data to a common frame, and determining consensus among the data (e.g., based on discrepancy).” – consensus monitor is stored on memory and implements the bandpass filter, so filtered data must be stored here as well].
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to filter the sensor signals, as disclosed by Adams, prior to blending the sensor output, as disclosed by Glevarec, in order to filter sensor noise.
The combination of Glevarec and Adams discloses at an accumulation rate, iteratively updating a covariance matrix based on the filtered samples [Glevarec, Paragraph [0235] – “During the successive executions of step b1), the navigation filter F.sub.i.sup.k determines step by step, iteratively, the estimated error {circumflex over (x)}.sub.n.sup.k,i at the calculation step n, as well as an estimate P.sub.n.sup.k,i of the covariance matrix of the error vector x.” – see also Fig. [3] step b1; Paragraph [0239]-[0240] – “…a step b12) of updating this error (sometimes called adjustment step), during which the estimated error is adjusted on the basis of the measurements provided by the common sensors C1, . . . , Cp. The navigation filter F.sup.k.sub.i executes the propagation and updating steps at each calculation step.”] until a selected number of filtered samples have been processed [Paragraph [0226] – “…the filter executing steps a) and b) several times successively.”].
The combination of Glevarec and Adams does not disclose removing data from the covariance matrix for any of the plurality of sensors that have failed.
However, Nakaoka discloses removing data from the covariance matrix for any of the plurality of sensors that have failed [Paragraph [0095] – “Returning to FIG. 1, the posture estimation device performs processing (rotational error component removal processing) of removing a rotational error component around a reference vector in the error covariance matrix .Math..sub.x, .sub.k.sup.2 being the error information (rotational error-component removal step S4). The reference vector is a vector observed by the observation section. In the embodiment, the reference vector is a gravitational acceleration vector observed by the acceleration sensor being the observation section. In the embodiment, a rotation error around the reference vector is an azimuth error.”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to remove error components from the error covariance matrix in the event of sensor failure, as disclosed by Nakaoka, during the update of the error vector and covariance matrix disclosed by the combination of Glevarec and Adams, in order to improve the accuracy of the sensor blending output in an IMU.
The combination of Glevarec, Adams, and Nakaoka discloses calculating, based on the covariance matrix, changes to real-time coefficients to be applied to the outputs of each sensor of the plurality of sensors [Glevarec, Paragraph [0045] – “each individual navigation filter is configured to that, during a first execution of step a), the covariance matrix P.sub.n.sup.k,i of the deviation between said estimate of the positioning parameter of the system and this positioning parameter is estimated as a function of an initial covariance matrix, and of a propagation noise matrix…”; [0048]-[0052] – “the positioning system further comprises, for each inertial measurement unit, a conventional Kalman filter configured to: determine an additional estimate of said positioning parameter of the system, on the basis of the inertial signal provided by said inertial measurement unit, estimate an additional covariance matrix P.sub.n.sup.1,i of a deviation between said additional estimate and said positioning parameter of the system, and to determine a corrected additional estimate of said positioning parameter by adding to said previously determined additional estimate a corrective term equal to an additional correction gain multiplied by a difference between, on the one hand, said measurement of the positioning parameter of the system and, on the other hand, the product of the measurement matrix multiplied by said additional estimate or multiplied by a sum of said additional estimate and of an additional estimated error affecting said additional estimate, the additional correction gain being determined as a function of the variance of the measurement noise of the common sensor; the additional correction gain is equal to the following quantity: P.sub.n.sup.1,i H.sub.n.sup.T (S.sup.1.sub.n).sup.−1, where H.sub.n.sup.T is the transposed matrix of the measurement matrix H.sub.n and where (S.sup.1.sub.n).sup.−1 is the inverse of an additional innovation covariance matrix S.sup.1.sub.n 
equal to H.sub.n P.sub.n.sup.1,i H.sub.n.sup.T+R.sub.n, R.sub.n being said variance of said measurement noise” – correction gain is the real-time coefficient calculated from covariance matrix and additional correction gain is the change in real-time coefficient];
and at the frame rate [Glevarec, Paragraph [0093] – “each individual navigation filter executing several times successively the set of steps a) and b) without taking into account said mean estimate determined by the fusion module”; Paragraph [0281] – “where Δt is a time step (between two successive calculation steps)” – time between calculation steps is time between successive execution of steps a) and b)], applying the changes to the real-time coefficients [Paragraph [0067] – “the respective correction gains of the different navigation filters are equal to a same common correction gain whose value is, at each repetition of steps b), calculated only once for all of said navigation filters”; Paragraph [0091] – “b) determines a corrected estimate of said positioning parameter by adding to said estimate previously determined at step a) a corrective term equal to a correction gain multiplied by a difference between, on the one hand, said measurement of the positioning parameter of the system, and, on the other hand, the product of a measurement matrix multiplied by said estimate of the positioning parameter or multiplied by a sum of said estimate and of an estimated error affecting said estimate…” – additional correction gain is applied];
and calculating a blended output for the plurality of sensors based on the real-time coefficients [Glevarec, Paragraph [0091]-[0092] –” b) determines a corrected estimate of said positioning parameter…and at least one fusion module determines a mean estimate of said positioning parameter of the system by calculating a mean of a given number of said corrected estimates of the positioning parameter, said number being higher than or equal to two and lower than or equal to the number of inertial measurement units that are included in the system”].
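For clarity of the record, the examiner notes that the additional correction gain quoted above, P.sub.n.sup.1,i H.sub.n.sup.T (S.sup.1.sub.n).sup.−1 with S.sup.1.sub.n equal to H.sub.n P.sub.n.sup.1,i H.sub.n.sup.T+R.sub.n, has the standard Kalman-gain form. The following sketch is an illustrative annotation only; the function names and example shapes are assumed for illustration and are not drawn from any cited reference.

```python
import numpy as np

def correction_gain(P, H, R):
    """Compute the correction gain K = P H^T (H P H^T + R)^-1 for
    covariance matrix P, measurement matrix H, and measurement-noise
    variance R, as in the quantity quoted from Glevarec."""
    S = H @ P @ H.T + R              # innovation covariance S = H P H^T + R
    return P @ H.T @ np.linalg.inv(S)

def corrected_estimate(x_hat, K, z, H):
    """Add the corrective term K (z - H x_hat) to the prior estimate,
    i.e., gain multiplied by the measurement-minus-prediction difference."""
    return x_hat + K @ (z - H @ x_hat)
```

In this sketch, the gain weights the innovation (measurement minus predicted measurement) in proportion to the estimate covariance relative to the innovation covariance, which is how the covariance matrix determines the real-time weighting of each sensor's output.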
Regarding Claim 18, the combination of Glevarec, Adams, and Nakaoka discloses the program product of claim 17 [Glevarec, Paragraph [0161] – “The electronic processing unit 10 comprises at least a processor, a memory, inputs for acquiring the signals provided by the inertial measurement units, and inputs for acquiring the measurements provided by the common sensors.”; Paragraph [0163] – “Each module of the processing unit 10 may be made by means of a set of dedicated electronic components and/or by means of a set of instructions stored in the memory (or in one of the memories) of the processing unit 10. The processing unit 10 may be made as an electronic unit which is distinct from the inertial measurement units and external to these latter. It may also be made as several electronic circuits, or comprise several groups of instructions, certain of which may be integrated to the inertial measurement units themselves.”], wherein calculating the blended output for the plurality of sensors comprises: calculating a first output using the real-time coefficients [Glevarec, Paragraph [0068]-[0069] – “the inertial measurement units are N in number, and the fusion module is configured to: determine said mean estimate of the positioning parameter of the system by calculating the mean of a set of k corrected estimates of said positioning parameter, among the N corrected estimates of this positioning parameter respectively determined by the individual navigation filters, the integer number k being lower than or equal to N”];
calculating a second output [Glevarec, Paragraph [0070] – “determine at least another mean estimate of the positioning parameter of the system, by calculating the mean of another set of k corrected estimates of said positioning parameter among the N corrected estimates of this positioning parameter respectively determined by the individual navigation filters.”] using calibration coefficients [Paragraph [0076], [0080], [0083] – “each navigation filter is configured to determine an estimate of a state vector of the system, one of the components of this state vector being said positioning parameter, another component comprising one of the following magnitudes…a calibration residue parameter of one of the inertial measurement units…a calibration parameter of said common sensor…”];
and blending the first output with the second output to provide a blended output for the plurality of sensors [Glevarec, Paragraph [0071] – “the fusion module is configured so as, for each set of k corrected estimates of said positioning parameter among the N corrected estimates of this positioning parameter determined by the individual navigation filters, to determine a mean estimate of the positioning parameter equal to the mean of the k corrected estimates of the positioning parameter included in said set”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate and blend successive outputs as disclosed by Glevarec in the IMU sensor blending method disclosed by the combination of Glevarec, Adams, and Nakaoka, in order to provide a real-time (time-varying) output.
Regarding Claim 20, the combination of Glevarec, Adams, and Nakaoka discloses the program product of claim 17, wherein calculating changes to the real-time coefficients comprises: calculating a difference between a new set of real-time coefficients and a prior set of real-time coefficients [Glevarec, Paragraph [0055] – “determine an expected covariance matrix for said difference, as a function of the additional covariance matrices P.sub.n.sup.1,i and P.sub.n.sup.,j of the two additional estimates that have been respectively determined on the basis of the inertial signals provided by said two inertial measurement units, and as a function of the covariance matrices P.sub.n.sup.k,i and P.sub.n.sup.k,j of the two estimates of the positioning parameter that have been respectively determined on the basis of these same inertial signals,”; Paragraph [0058]-[0059] – “said expected covariance matrix is equal to the sum of the additional covariance matrices P.sub.n.sup.1,i and P.sub.n.sup.1,j, minus an estimate of the correlation between said corrected additional estimates, whose difference has been calculated; said estimate of the correlation between said corrected additional estimates is determined as a function of a difference between, on the one hand, the sum of the additional covariance matrices P.sub.n.sup.1,i and P.sub.n.sup.1,j determined by the conventional Kalman filters (without augmented variance), and, on the other hand, the sum of the covariance matrices P.sub.n.sup.k,i and P.sub.n.sup.k,j” – see also the expected covariance equation following Paragraph [0061]];
and multiplying the difference by a scalar, α, to produce a set of changes to the real-time coefficients [Glevarec, Paragraph [0058]-[0059] – “said expected covariance matrix is equal to the sum of the additional covariance matrices P.sub.n.sup.1,i and P.sub.n.sup.1,j, minus an estimate of the correlation between said corrected additional estimates, whose difference has been calculated; said estimate of the correlation between said corrected additional estimates is determined as a function of a difference between, on the one hand, the sum of the additional covariance matrices P.sub.n.sup.1,i and P.sub.n.sup.1,j determined by the conventional Kalman filters (without augmented variance), and, on the other hand, the sum of the covariance matrices P.sub.n.sup.k,i and P.sub.n.sup.k,j” – see also the expected covariance equation following Paragraph [0061] and note the 1/(k-1) scalar, which is, in this case, the fusion ratio].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate the changes to the fusion coefficients and the overall blended output, as disclosed by Glevarec, Adams, and Nakaoka, in order to improve the accuracy of sensor blending output in an IMU.
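For clarity of the record, the claimed α-scaled update mapped above can be sketched as follows. This is an illustrative annotation only; the function names, coefficient values, and α value are assumed for illustration and are not drawn from any cited reference.

```python
def coefficient_changes(new_coeffs, prior_coeffs, alpha):
    """Calculate the difference between a new set of real-time
    coefficients and a prior set, then multiply by the scalar alpha
    to produce the set of changes."""
    return [alpha * (n - p) for n, p in zip(new_coeffs, prior_coeffs)]

def apply_changes(prior_coeffs, changes):
    """Apply the changes to the real-time coefficients at the frame
    rate: c_i <- c_i + delta_c_i."""
    return [c + d for c, d in zip(prior_coeffs, changes)]
```

A scalar such as the 1/(k-1) fusion ratio noted above plays the role of alpha, damping how quickly the blending coefficients track the covariance-derived values.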
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Glevarec et al., in view of Adams et al., in further view of Nakaoka et al., and in further view of Takeda et al. (US 20210270635 A1).
Regarding Claim 4, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 3 [Adams, Col. 6, Ln. 9-16 – “In some examples, the high-frequency filter 150 and the low-frequency filter 152 may be implemented as a bandpass filter.”; Col. 6, Ln. 63 – Col. 7, Ln. 3 – “In some examples, a delay associated with the time constant is shorter than time duration associated with a downstream component receiving sensor data using the sensor data for a downstream process (e.g., localization, trajectory planning, etc.). As such, the delay is configured to be short enough to detect an error before potentially inaccurate data is communicated to other systems.”].
The combination of Glevarec, Adams, and Nakaoka does not disclose wherein the pass band is selected based on a time scale associated with random walk.
However, Takeda discloses wherein the pass band is selected based on a time scale associated with random walk [Paragraph [0049]-[0050] – “Moreover, a random walk is predominantly observed when the time window length is near 10.sup.4, and drift (linear drift) that varies with a constant gradient is predominantly observed when the time window length is near 10.sup.6. As described above, with respect to an Allan variance, the type of noise predominantly observed differs depending on the time window length τ. Further, noise becomes dominant as the time window length τ of an Allan variance is reduced, and the bias stability becomes dominant as the time window length τ is increased.”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the random walk timescale disclosed by Takeda as the pass band in the bandpass filter disclosed by the combination of Glevarec, Adams, and Nakaoka in order to filter the data on a timescale characteristic of sensor drift and noise.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Glevarec et al., in view of Adams et al., in view of Nakaoka et al., and in further view of Xu et al. (US 20210080287 A1).
Regarding Claim 5, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 1.
The combination does not disclose that iteratively updating comprises iteratively updating until 2n+1 filtered samples have been processed, wherein n is a number of sensors in the plurality of sensors.
However, Xu discloses iteratively updating until 2n+1 filtered samples have been processed, wherein n is a number of sensors in the plurality of sensors [Paragraph [0039] – “System dimension n=15” - the number of sensors is the sensor dimension; Paragraph [0042] – “Calculate 2n+1 σ samples when k-1 (k=1, 2, 3, . . . )”].
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to use the sampling window of 2n + 1 of Xu to iteratively update the covariance matrix as taught in the combination of Glevarec, Adams, and Nakaoka in order to have enough samples to yield a stable covariance matrix.
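For clarity of the record, Xu's "2n+1 σ samples" for system dimension n corresponds to the standard sigma-point count of an unscented transform: the mean plus one sample along plus and minus each scaled column of the covariance square root. The sketch below is an illustrative annotation only; the scaling parameter lam and function names are assumed for illustration and are not drawn from Xu.

```python
import numpy as np

def sigma_points(mean, cov, lam=2.0):
    """Generate 2n+1 sigma samples for an n-dimensional state:
    the mean, plus mean +/- each column of the matrix square root
    of (n + lam) * cov."""
    n = len(mean)
    # matrix square root of (n + lam) * P via Cholesky decomposition
    L = np.linalg.cholesky((n + lam) * cov)
    pts = [mean]
    for i in range(n):
        pts.append(mean + L[:, i])
        pts.append(mean - L[:, i])
    return np.array(pts)        # shape (2n + 1, n)
```

Iterating until all 2n+1 samples have been processed yields a full-rank spread of samples about the mean, which is consistent with the stated rationale of having enough samples for a stable covariance estimate.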
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Glevarec et al., in view of Adams et al., in view of Nakaoka et al., and in further view of Foxlin et al. (US 8762091 B1).
Regarding Claim 9, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 8.
The combination does not disclose that blending the first output with the second output comprises: applying a high pass filter to the first output; applying a low pass filter to the second output; and combining an output of the low pass filter and an output of the high pass filter to produce the blended output for the plurality of sensors.
However, Foxlin discloses blending the first output with the second output comprises: applying a high pass filter to the first output [Col. 2, Ln. 17-25 – “For example, in one particular embodiment, two gyroscopes having comparable ranges are fused, but one has significantly lower noise while the other has better bias stability. In this case it is desirable to output the signal from the low-noise "main" gyro while effectively "replacing" or "training" its bias with that of the more stable sensor. This can be accomplished with a complementary filter that passes through the high-frequency content from the low-noise sensor…”];
applying a low pass filter to the second output [Col. 2, Ln. 25-26 – “…and the low-frequency "bias" of the other sensor.”];
and combining an output of the low pass filter and an output of the high pass filter to produce the blended output for the plurality of sensors [Col. 2, Ln. 17-20 – “For example, in one particular embodiment, two gyroscopes having comparable ranges are fused, but one has significantly lower noise while the other has better bias stability…This can be accomplished with a complementary filter…”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement a complementary filter as disclosed by Foxlin to calculate the blended output disclosed by the combination of Glevarec, Adams, and Nakaoka, in order to improve the accuracy of sensor blending output in an IMU.
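For clarity of the record, the complementary-filter blending mapped above can be sketched as follows: the low-noise sensor contributes its high-frequency content and the more stable sensor contributes its low-frequency "bias," with the two filter outputs summed. This is an illustrative annotation only; the blend weight w (set by the filter time constant) and the per-sample recursive form are assumed for illustration and are not drawn from Foxlin.

```python
def complementary_blend(low_noise, stable, w=0.98):
    """Blend two sensor streams with a first-order complementary
    filter: high-pass the low-noise sensor, low-pass the stable
    sensor, and combine the two filter outputs per sample."""
    lp_fast = 0.0   # low-pass state for the low-noise sensor
    lp_slow = 0.0   # low-pass state for the stable sensor
    out = []
    for x_fast, x_slow in zip(low_noise, stable):
        lp_fast = w * lp_fast + (1.0 - w) * x_fast
        lp_slow = w * lp_slow + (1.0 - w) * x_slow
        # high-pass of the fast sensor plus low-pass of the slow sensor
        out.append((x_fast - lp_fast) + lp_slow)
    return out
```

Because the high-pass and low-pass branches use the same cutoff, their responses sum to unity: when both inputs agree the output passes the signal unchanged, and when they differ the output converges to the stable sensor's long-term level.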
Regarding Claim 19, the combination of Glevarec, Adams, and Nakaoka discloses the program product of claim 18 [Glevarec, Paragraph [0161] – “The electronic processing unit 10 comprises at least a processor, a memory, inputs for acquiring the signals provided by the inertial measurement units, and inputs for acquiring the measurements provided by the common sensors.”; Paragraph [0163] – “Each module of the processing unit 10 may be made by means of a set of dedicated electronic components and/or by means of a set of instructions stored in the memory (or in one of the memories) of the processing unit 10. The processing unit 10 may be made as an electronic unit which is distinct from the inertial measurement units and external to these latter. It may also be made as several electronic circuits, or comprise several groups of instructions, certain of which may be integrated to the inertial measurement units themselves.”].
The combination does not disclose that blending the first output with the second output comprises: applying a high pass filter to the first output; applying a low pass filter to the second output; and combining an output of the low pass filter and an output of the high pass filter to produce the blended output for the plurality of sensors.
However, Foxlin discloses that blending the first output with the second output comprises: applying a high pass filter to the first output [Col. 2, Ln. 17-25 – “For example, in one particular embodiment, two gyroscopes having comparable ranges are fused, but one has significantly lower noise while the other has better bias stability. In this case it is desirable to output the signal from the low-noise "main" gyro while effectively "replacing" or "training" its bias with that of the more stable sensor. This can be accomplished with a complementary filter that passes through the high-frequency content from the low-noise sensor…”];
applying a low pass filter to the second output [Col. 2, Ln. 25-26 – “…and the low-frequency "bias" of the other sensor.”];
and combining an output of the low pass filter and an output of the high pass filter to produce the blended output for the plurality of sensors [Col. 2, Ln. 17-20 – “For example, in one particular embodiment, two gyroscopes having comparable ranges are fused, but one has significantly lower noise while the other has better bias stability…This can be accomplished with a complementary filter…”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement a complementary filter as disclosed by Foxlin to calculate the blended output disclosed by the combination of Glevarec, Adams, and Nakaoka, in order to improve the accuracy of sensor blending output in an IMU.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Glevarec et al., in view of Adams et al., in view of Nakaoka et al., and in further view of McDaniel et al. (US 20220179102 A1).
Regarding Claim 10, the combination of Glevarec, Adams, and Nakaoka discloses the method of claim 1.
The combination does not disclose after applying the changes to the real-time coefficients, renormalizing the real-time coefficients so that a sum of the real-time coefficients equals one.
However, McDaniel discloses after applying the changes to the real-time coefficients, renormalizing the real-time coefficients so that a sum of the real-time coefficients equals one [Paragraph [0129] – “The weights in (21) need to be normalized when used in the importance sampling. The normalization of the weights is calculated by …where N is the number of samples in the particle cloud. The weights are used to estimate the posterior distribution of the estimated position and velocity.” – see also Eq. [00021]; weights (coefficients) are being divided by the sum of the total weights].
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply the normalization disclosed in McDaniel to the updated blending coefficients disclosed by the combination of Glevarec, Adams, and Nakaoka, to remove the impact of the removed sensors.
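For clarity of the record, the weight normalization mapped above from McDaniel (dividing each weight by the sum of the weights) can be sketched as follows. This is an illustrative annotation only; the function name and example coefficient values are assumed for illustration and are not drawn from any cited reference.

```python
def renormalize(coeffs):
    """Renormalize real-time coefficients so they sum to one by
    dividing each coefficient by the sum of all coefficients."""
    total = sum(coeffs)
    return [c / total for c in coeffs]
```

After a sensor's coefficient is removed or its coefficients change, this step restores a unity-sum weighting so the blended output remains an unbiased weighted average of the remaining sensors.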
Claims 13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Glevarec et al. (US 20200088521 A1) in view of Adams et al. (US 11897486 B1).
Regarding Claim 13, Glevarec discloses an inertial measurement unit (IMU) [Glevarec, Paragraph [0025]-[0027] – “…the present invention proposes a positioning system comprising: several inertial measurement units, each inertial measurement unit comprising at least one inertial sensor, accelerometer or gyrometer, configured to provide an inertial signal representative of an acceleration or an angular speed of rotation of the inertial measurement unit, at least one common sensor, configured to provide a measurement of a positioning parameter of the system…”].
Glevarec does not disclose an IMU comprising: a plurality of micro-electromechanical system sensors (MEMS sensors), each of the plurality of MEMS sensors having an output.
Adams discloses an IMU comprising: a plurality of micro-electromechanical system sensors (MEMS sensors) [Col. 10, Ln. 37-40 – “For example, the sensors 302 and 304 may include gyroscopes, accelerometers, magnetometers, pressure sensors, and/or various Micro Electro-Mechanical Systems (MEMS).” – see also Fig. [3]], each of the plurality of MEMS sensors having an output [Col. 10, Ln. 37-44 – “In examples, the data 310 and 312 is time series data (e.g., angular rate, force, or other motion determined in association with sequential times) that is continuously generated and fed to the other components for subsequent processing.”].
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have the IMU disclosed by Glevarec be comprised of the MEMS sensors disclosed by Adams in order to reduce weight and cost of the IMU.
The combination of Glevarec and Adams discloses a storage medium [Glevarec, Paragraph [0163] – “Each module of the processing unit 10 may be made by means of a set of dedicated electronic components and/or by means of a set of instructions stored in the memory (or in one of the memories) of the processing unit 10. The processing unit 10 may be made as an electronic unit which is distinct from the inertial measurement units and external to these latter. It may also be made as several electronic circuits, or comprise several groups of instructions, certain of which may be integrated to the inertial measurement units themselves.”] for storing calibration coefficients separately for each of the plurality of MEMS sensors [Paragraph [0076],[0083] – “each navigation filter is configured to determine an estimate of a state vector of the system, one of the components of this state vector being said positioning parameter, another component comprising one of the following magnitudes… a calibration parameter of said common sensor,” – stored calibration coefficients],
real-time coefficients for each of the plurality of MEMS sensors [Glevarec, Paragraph [0062], [0067] – “the fusion module is configured to determine an estimate of a covariance matrix of said mean estimate, as a function…the respective correction gains of the different navigation filters are equal to a same common correction gain…”; Paragraph [0162] – “As can be seen in FIG. 2, the electronic processing unit 10 comprises several modules, including the navigation filters F.sub.1.sup.1, . . . , F.sub.i.sup.k, . . . , F.sub.N.sup.N mentioned hereinabove, and fusion modules F.sub.us.sup.1, . . . , F.sub.us.sup.k, . . . , F.sub.us.sup.N.”],
and data blending instructions [Glevarec, Paragraph [0163] – “Each module of the processing unit 10 may be made by means of a set of dedicated electronic components and/or by means of a set of instructions stored in the memory (or in one of the memories) of the processing unit 10. ”] for blending the outputs of the plurality of MEMS sensors [Paragraph [0071] – “the fusion module is configured so as, for each set of k corrected estimates of said positioning parameter among the N corrected estimates of this positioning parameter determined by the individual navigation filters, to determine a mean estimate of the positioning parameter equal to the mean of the k corrected estimates of the positioning parameter included in said set;” – output is the mean of all corrected sensors];
and a processor, coupled to the storage medium and the plurality of MEMS sensors [Adams, Col 10, Ln. 37-40 – “For example, the sensors 302 and 304 may include gyroscopes, accelerometers, magnetometers, pressure sensors, and/or various Micro Electro-Mechanical Systems (MEMS).” – see also Fig. [3]], configured to execute program instructions to [Adams, Col. 8, Ln. 45-62 – “FIGS. 2 and 3 are flowcharts showing example processes involving techniques as described herein. The processes illustrated in FIGS. 2 and 3 may be described with reference to components and elements described above with reference to FIG. 1 for convenience and ease of understanding. In addition, the process in FIG. 2 may be described with respect to additional pictorials that are included in FIG. 2. However, the processes illustrated in FIGS. 2 and 3 are not limited to being performed using these components, and the components are not limited to performing the processes illustrated in FIGS. 2 and 3. The processes illustrated in FIGS. 2 and 3 are illustrated as a logical flow graph, each operation of which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations.”] filter, at a frame rate [Col. 6, Ln. 28-36 – “In examples, the filter 148 may output filtered IMU data 154 associated with the IMUs 122, 124, and 126, such as filtered values associated with a gyroscope and/or accelerometer in the x-axis 114, the y-axis 116, and/or the z-axis 118. 
For instance, the filtered IMU data 154 may include filtered IMU A data that is sampled from the IMU A data 128, filtered IMU B data that is sampled from the IMU B data 130, and filtered IMU C data that is sampled from the IMU C data 132…”], samples output by the plurality of MEMS sensors with a bandpass filter [Col. 6, Ln. 9-16 – “In some examples, the present disclosure includes a filter 148 that filters the transformed IMU data 146. The filter 148 may include various types of filters, such as a high-frequency filter 150 to reduce sensor noise in the data and a low-frequency filter 152 to reduce sensor bias in the data. In some examples, the high-frequency filter 150 and the low-frequency filter 152 may be implemented as a bandpass filter.”] over a time scale characteristic of a type of error for the plurality of MEMS sensors to produce filtered samples [Col. 6, Ln. 43 – Col. 7, Ln. 3 – “Parameters associated with, and implementation of, the filter 148 may be optimized in various manners. For example, time delays may be inserted at various steps to tune outputs, such as by waiting a period of time (e.g., or for a quantity of data values to be received) to receive a sufficiently robust data set. Such time delays may also be introduced to account for differences in starting times for the IMUs, to account for differences in electric path lengths from the IMUs to the receiving computing system, and the like. For instance, in some examples, the filter may include an exponential filter with an optimized time constant (e.g., between about 0.5 seconds and about 1.0 seconds). In some examples, such as at startup, the high-frequency filter 150 may start filtering the high-frequency data at a time earlier in the time series than the start of filtering by the low-frequency-filter 152. 
That is, based on a time constant associated with the filter(s), the filter(s) may delay for a duration (e.g., 0.8 seconds) before starting to filter the low-frequency data to allow time to receive a larger sample size, which may increase the likelihood that sensor bias is accurately modeled. In some examples, a delay associated with the time constant is shorter than time duration associated with a downstream component receiving sensor data using the sensor data for a downstream process (e.g., localization, trajectory planning, etc.). As such, the delay is configured to be short enough to detect an error before potentially inaccurate data is communicated to other systems.”];
iteratively update a covariance matrix, at an accumulation rate [Glevarec, Paragraph [0235] – “During the successive executions of step b1), the navigation filter F.sub.i.sup.k determines step by step, iteratively, the estimated error {circumflex over (x)}.sub.n.sup.k,i at the calculation step n, as well as an estimate P.sub.n.sup.k,i of the covariance matrix of the error vector x.” – see also Fig. [3] step b1; Paragraph [0239]-[0240] – “…a step b12) of updating this error (sometimes called adjustment step), during which the estimated error is adjusted on the basis of the measurements provided by the common sensors C1, . . . , Cp. The navigation filter F.sup.k.sub.i executes the propagation and updating steps at each calculation step.”], based on the filtered samples, until a selected number of filtered samples have been processed [Paragraph [0226] – “…the filter executing steps a) and b) several times successively.”],
calculate, based on the covariance matrix, changes to the real-time coefficients to be applied to the output of each MEMS sensor of the plurality of MEMS sensors [Glevarec, Paragraph [0045] – “each individual navigation filter is configured to that, during a first execution of step a), the covariance matrix P.sub.n.sup.k,i of the deviation between said estimate of the positioning parameter of the system and this positioning parameter is estimated as a function of an initial covariance matrix, and of a propagation noise matrix…”; [0048]-[0052] – “the positioning system further comprises, for each inertial measurement unit, a conventional Kalman filter configured to: determine an additional estimate of said positioning parameter of the system, on the basis of the inertial signal provided by said inertial measurement unit, estimate an additional covariance matrix P.sub.n.sup.1,i of a deviation between said additional estimate and said positioning parameter of the system, and to determine a corrected additional estimate of said positioning parameter by adding to said previously determined additional estimate a corrective term equal to an additional correction gain multiplied by a difference between, on the one hand, said measurement of the positioning parameter of the system and, on the other hand, the product of the measurement matrix multiplied by said additional estimate or multiplied by a sum of said additional estimate and of an additional estimated error affecting said additional estimate, the additional correction gain being determined as a function of the variance of the measurement noise of the common sensor; the additional correction gain is equal to the following quantity: P.sub.n.sup.1,i H.sub.n.sup.T (S.sup.1.sub.n).sup.−1, where H.sub.n.sup.T is the transposed matrix of the measurement matrix H.sub.n and where (S.sup.1.sub.n).sup.−1 is the inverse of an additional innovation covariance matrix S.sup.1.sub.n equal to H.sub.n P.sub.n.sup.1,i 
H.sub.n.sup.T+R.sub.n, R.sub.n being said variance of said measurement noise” – correction gain is the real-time coefficient calculated from covariance matrix and additional correction gain is the change in real-time coefficient];
apply, at the frame rate [Glevarec, Paragraph [0093] – “each individual navigation filter executing several times successively the set of steps a) and b) without taking into account said mean estimate determined by the fusion module”; Paragraph [0281] – “where Δt is a time step (between two successive calculation steps)” – time between calculation steps is time between successive execution of steps a) and b)], the changes to the real-time coefficients [Paragraph [0067] – “the respective correction gains of the different navigation filters are equal to a same common correction gain whose value is, at each repetition of steps b), calculated only once for all of said navigation filters”; Paragraph [0091] – “b) determines a corrected estimate of said positioning parameter by adding to said estimate previously determined at step a) a corrective term equal to a correction gain multiplied by a difference between, on the one hand, said measurement of the positioning parameter of the system, and, on the other hand, the product of a measurement matrix multiplied by said estimate of the positioning parameter or multiplied by a sum of said estimate and of an estimated error affecting said estimate…” – additional correction gain is applied];
and calculate a blended output for the plurality of MEMS sensors based on the real-time coefficients [Glevarec, Paragraph [0091]-[0092] –” b) determines a corrected estimate of said positioning parameter…and at least one fusion module determines a mean estimate of said positioning parameter of the system by calculating a mean of a given number of said corrected estimates of the positioning parameter, said number being higher than or equal to two and lower than or equal to the number of inertial measurement units that are included in the system”].
Regarding Claim 15, the combination of Glevarec and Adams discloses the IMU of claim 13 [Glevarec, Paragraph [0025]-[0027] – “…the present invention proposes a positioning system comprising: several inertial measurement units, each inertial measurement unit comprising at least one inertial sensor, accelerometer or gyrometer, configured to provide an inertial signal representative of an acceleration or an angular speed of rotation of the inertial measurement unit, at least one common sensor, configured to provide a measurement of a positioning parameter of the system…”; Adams, Col 10, Ln. 37-40 – “For example, the sensors 302 and 304 may include gyroscopes, accelerometers, magnetometers, pressure sensors, and/or various Micro Electro-Mechanical Systems (MEMS).” – see also Fig. [3]], wherein calculating the blended output for the plurality of MEMS sensors comprises: calculating a first output using the real-time coefficients [Glevarec, Paragraph [0068]-[0069] – “the inertial measurement units are N in number, and the fusion module is configured to: determine said mean estimate of the positioning parameter of the system by calculating the mean of a set of k corrected estimates of said positioning parameter, among the N corrected estimates of this positioning parameter respectively determined by the individual navigation filters, the integer number k being lower than or equal to N”];
calculating a second output [Glevarec, Paragraph [0070] – “determine at least another mean estimate of the positioning parameter of the system, by calculating the mean of another set of k corrected estimates of said positioning parameter among the N corrected estimates of this positioning parameter respectively determined by the individual navigation filters.”] using the calibration coefficients [Paragraph [0076], [0080], [0083] – “each navigation filter is configured to determine an estimate of a state vector of the system, one of the components of this state vector being said positioning parameter, another component comprising one of the following magnitudes…a calibration residue parameter of one of the inertial measurement units…a calibration parameter of said common sensor…”];
and blending the first output with the second output to provide a blended output [Glevarec, Paragraph [0071] – “the fusion module is configured so as, for each set of k corrected estimates of said positioning parameter among the N corrected estimates of this positioning parameter determined by the individual navigation filters, to determine a mean estimate of the positioning parameter equal to the mean of the k corrected estimates of the positioning parameter included in said set”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate and blend successive outputs, as disclosed by Glevarec, in the IMU sensor blending disclosed by the combination of Glevarec and Adams, in order to provide a real-time (time-varying) output.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Glevarec et al. in view of Adams et al., and in further view of Nakaoka et al.
Regarding Claim 14, the combination of Glevarec and Adams discloses the IMU of claim 13 [Glevarec, Paragraph [0025]-[0027] – “…the present invention proposes a positioning system comprising: several inertial measurement units, each inertial measurement unit comprising at least one inertial sensor, accelerometer or gyrometer, configured to provide an inertial signal representative of an acceleration or an angular speed of rotation of the inertial measurement unit, at least one common sensor, configured to provide a measurement of a positioning parameter of the system…”; Adams, Col 10, Ln. 37-40 – “For example, the sensors 302 and 304 may include gyroscopes, accelerometers, magnetometers, pressure sensors, and/or various Micro Electro-Mechanical Systems (MEMS).” – see also Fig. [3]], when one MEMS sensor of the plurality of MEMS sensors fails [Adams, Col 10, Ln. 37-40 – “For example, the sensors 302 and 304 may include gyroscopes, accelerometers, magnetometers, pressure sensors, and/or various Micro Electro-Mechanical Systems (MEMS).” – see also Fig. [3]; Col. 4, Ln. 54-58 – “Even though multiple IMUs may not be needed, based on examples of the present disclosure, multiple IMUs may provide redundancy to the vehicle 102 (e.g., backup in case one of the IMUs fails) and may allow errors in IMU data to be detected.”].
While the combination of Glevarec and Adams does not expressly disclose wherein, when a sensor has failed, setting a diagonal associated with the one MEMS sensor of the plurality of MEMS sensors in the covariance matrix to a high number compared to other values in the covariance matrix and setting off-diagonal terms to zero,
Nakaoka does, however, set up the covariance matrix [Nakaoka, Paragraph [0062] – "Initial values of the state vector x and the error covariance matrix σx² are given as in Expression (30)." – see the error covariance matrix in Expression (30), where the diagonals are set to a high number and the off-diagonals are set to zero].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to set the diagonals of the error covariance matrix to high values, as demonstrated by Nakaoka, in order to remove failed sensors from the MEMS sensor array disclosed by the combination of Glevarec and Adams, thereby improving the accuracy of sensor blending in the IMU.
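The covariance manipulation addressed in the rejection of claim 14 corresponds to the common practice of deweighting a failed sensor in an inverse-variance blend: a very large diagonal variance drives that sensor's blend weight toward zero, while zeroed off-diagonal terms remove its cross-correlations. The sketch below is illustrative only; the threshold value, sensor readings, and function names are hypothetical and do not come from any cited reference:

```python
import numpy as np

def deweight_failed_sensor(P, idx, big=1e12):
    """Set the diagonal entry for the failed sensor to a number that is
    high compared to the other variances, and zero its off-diagonal
    (cross-covariance) terms, so its blend weight ~ 1/variance -> 0."""
    P = P.copy()
    P[idx, :] = 0.0
    P[:, idx] = 0.0
    P[idx, idx] = big
    return P

def inverse_variance_blend(measurements, P):
    """Blend redundant sensor readings with weights 1 / P[i, i]."""
    w = 1.0 / np.diag(P)
    return np.sum(w * measurements) / np.sum(w)

P = np.diag([0.04, 0.04, 0.04])        # three redundant MEMS gyros
z = np.array([1.00, 1.02, 9.99])       # third sensor has failed
P = deweight_failed_sensor(P, idx=2)
blended = inverse_variance_blend(z, P)  # ~ mean of the two healthy sensors
```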
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Glevarec et al. in view of Adams et al., and in further view of Foxlin et al.
Regarding Claim 16, the combination of Glevarec and Adams discloses the IMU of claim 15 [Glevarec, Paragraph [0025]-[0027] – “…the present invention proposes a positioning system comprising: several inertial measurement units, each inertial measurement unit comprising at least one inertial sensor, accelerometer or gyrometer, configured to provide an inertial signal representative of an acceleration or an angular speed of rotation of the inertial measurement unit, at least one common sensor, configured to provide a measurement of a positioning parameter of the system…”; Adams, Col 10, Ln. 37-40 – “For example, the sensors 302 and 304 may include gyroscopes, accelerometers, magnetometers, pressure sensors, and/or various Micro Electro-Mechanical Systems (MEMS).” – see also Fig. [3]].
The combination does not disclose that blending the first output with the second output comprises: applying a high pass filter to the first output, applying a low pass filter to the second output, and combining an output of the low pass filter and an output of the high pass filter to produce the blended output for the plurality of MEMS sensors.
However, Foxlin discloses that blending the first output with the second output comprises: applying a high pass filter to the first output [Col. 2, Ln. 17-25 – “For example, in one particular embodiment, two gyroscopes having comparable ranges are fused, but one has significantly lower noise while the other has better bias stability. In this case it is desirable to output the signal from the low-noise "main" gyro while effectively "replacing" or "training" its bias with that of the more stable sensor. This can be accomplished with a complementary filter that passes through the high-frequency content from the low-noise sensor…”];
applying a low pass filter to the second output [Col. 2, Ln. 25-26 – “…and the low-frequency "bias" of the other sensor.”];
and combining an output of the low pass filter and an output of the high pass filter to produce the blended output [Col. 2, Ln. 17-20 – “For example, in one particular embodiment, two gyroscopes having comparable ranges are fused, but one has significantly lower noise while the other has better bias stability… This can be accomplished with a complementary filter…”] for the plurality of MEMS sensors [Adams, Col 10, Ln. 37-40 – “For example, the sensors 302 and 304 may include gyroscopes, accelerometers, magnetometers, pressure sensors, and/or various Micro Electro-Mechanical Systems (MEMS).” – see also Fig. [3]].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement a complementary filter, as taught by Foxlin, to calculate the blended output disclosed by the combination of Glevarec and Adams, in order to improve the accuracy of the sensor blending output in an IMU.
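The complementary-filter blending quoted from Foxlin can be sketched as a pair of first-order filters whose gains sum to one at every frequency, so the high-frequency content of the first output and the low-frequency "bias" of the second pass through without distortion. A minimal discrete-time sketch, for illustration only; the smoothing coefficient alpha and the signals are hypothetical, not drawn from the reference:

```python
def complementary_blend(first, second, alpha=0.98):
    """Blend two sensor output streams: high-pass the first, low-pass
    the second, and sum. The high-pass is formed as the input minus its
    own low-pass, so the two filter gains are exactly complementary."""
    lp_first = first[0]     # low-pass state for the first output
    lp_second = second[0]   # low-pass state for the second output
    out = []
    for f, s in zip(first, second):
        lp_first = alpha * lp_first + (1 - alpha) * f    # LP of first
        lp_second = alpha * lp_second + (1 - alpha) * s  # LP of second
        hp = f - lp_first           # high-pass = input minus its low-pass
        out.append(hp + lp_second)  # blended output
    return out
```

With constant inputs the high-pass branch decays to zero and the blend settles on the second (bias-stable) output, mirroring Foxlin's description of "training" the low-noise sensor's bias with that of the more stable sensor.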
Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20150039220 A1, Georgy, J., Method And Apparatus For Improved Navigation Of A Moving Platform, 2015.
US 10025891 B1, Zaki, A. S., Method Of Reducing Random Drift In The Combined Signal Of An Array Of Inertial Sensors, 2018.
US 20180231385 A1, Fourie, D., Inertial Odometry With Retroactive Sensor Calibration, 2018.
US 20110238308 A1, Miller, I. T., Pedal Navigation Using LEO Signals And Body-Mounted Sensors, 2011.
US 7579984 B2, Wang, H.G., Ultra-tightly Coupled GPS And Inertial Navigation System For Agile Platforms, 2009.
US 8762091 B1, Foxlin, E., Inertial Measurement System, 2014.
US 6175807 B1, Buchler, R.J., Temperature Compensation Method For Strapdown Inertial Navigation Systems, 2001.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANELLE A HOLMES whose telephone number is (571)272-4336. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arleen M Vazquez can be reached at (571) 272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.A.H./Examiner, Art Unit 2857
/ARLEEN M VAZQUEZ/Supervisory Patent Examiner, Art Unit 2857