DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
As to claim 1, the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
101 Analysis – Step 1
The claim recites a method including at least one process step. The claim therefore falls within one of the four statutory categories. See MPEP 2106.03.
Claim 1 is directed to a mental process of coning, sculling and scrolling error compensation.
101 Analysis – Step 2A, Prong 1
Regarding Prong 1 of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea – mental process (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. Claim 1 recites:
A coning, sculling and scrolling error compensation method for strapdown navigation system comprising the following steps:
step 1: select a calculation function; in this step, a compensation for motion-induced errors affecting angle calculation is determined based on a required angle calculation accuracy and a computational load (time), achieved using theoretical error calculation charts;
the choice of calculation function, whether an approximation or an exact function of the differential equation below, depends on a desired accuracy, a computational load, and the convenience of implementing the computation in embedded systems,
[Equation image media_image1.png: rotation-vector differential equation]
where ω is the angular velocity obtained from an inertial measurement unit (IMU), ϕ is the rotation vector, | | denotes a vector magnitude, × denotes a vector cross product, and ϕ̇ denotes the time derivative of ϕ; with support of computational resources regarding error margins and computational load (time), selecting an appropriate approximation function for compensating motion-induced errors in angle calculations;
step 2: choose a calculation frequency and a differential equation solver;
the calculation frequency depends on a motion frequency of a mechanical object, which is usually known prior to application design or environment use, or determined using frequency spectrum analyzers; for inertial sensor systems, with a cutoff frequency f1 or additional vibration damping at frequency f2, the calculation frequency used is typically 3-5 times the minimum of f1 and f2; the differential equation solver method is, though not limited to, a Dormand-Prince method of order at least five;
step 3: initialize initial values for strapdown inertial navigation system in a chosen reference frame;
the initial values required are an initial position, a velocity, and an angle of the mechanical object before computation; from these initial values of position, velocity, and angle, compute initialization values (Φ, η, and ζ) according to related equations:
[Equation image media_image2.png: initialization equations for Φ, η, and ζ]
[Equation image media_image3.png: initialization equations (continued)]
where q is a four-dimensional vector (quaternion), η is a velocity compensation vector, ζ is a position compensation vector in a position calculation equation, B is a body frame, I is an identity matrix (3x3), SF is a specific force, v is velocity, p is position, and f2, f3, f4 are functions depending on the rotation vector Φ;
continuously update angular velocity values (ω) and acceleration values (fibb) measured by the inertial measurement unit (IMU) sensor system; decompose a final overall error into three components: an error due to incorrect initialization (such as errors in position, velocity, and angle for the object in the initialization method of the positioning system, which typically uses a highly accurate master device), an error due to measurement errors of the IMU sensor, and an error from solving the differential equations (describing the object's dynamics);
step 4: calculate a coning error compensation; at this step, calculate compensation for errors due to motion that result in angular computation errors, after selecting the function in step 1, the frequency and differential equation-solving method in step 2, and the initial values in step 3;
criteria for selecting the function in step 1 depend on a required error and a computational load (time) for each calculation cycle, the criteria for selecting the computation frequency depend on the cutoff frequency used in the inertial measurement sensor (i.e., the sensor's usage limit) and a design or application of a device containing the inertial measurement sensor;
based on a previous value of Φ (at time t) and the angular velocity ω from the IMU sensor (at time t), calculate the value of Φ at a next time step (at time t+0.01 seconds), this value represents the correction for motion-induced errors in angle calculation that needs to be computed;
step 5: calculate the quaternion, DCM matrix, and Euler angles; at this step, calculate a four-dimensional rotation (quaternion), a direction cosine matrix (DCM), and Euler angles based on the motion error compensation for angular calculation obtained in step 4;
the purpose of this step is to calculate the Euler angles, direction cosine matrix (DCM), or four-dimensional rotation (quaternion) at the next time step (t + 0.01 seconds), following the calculation of the motion error compensation for angular calculation obtained in step 4 (at t+0.01 seconds); these relationships are described by the following equations:
[Equation image media_image4.png: quaternion, DCM, and Euler angle relations]
where C is a direction cosine matrix;
step 6: compute the rotation vector value; at this step, perform the following: calculate the rotation vector Φ with the equation:
[Equation image media_image5.png: rotation vector Φ equation]
where Φp is the rotation vector obtained from step 4; also compute the rotation vector rate Φ̇ with the equation:
[Equation image media_image6.png: rotation vector rate Φ̇ equation]
the purpose of this step is to calculate the rotational vector value and the rotational vector velocity (at time t) based on the motion error compensation that results in angular computation errors obtained in step 4 (at time t); this is done to incorporate the values into the motion error compensation for velocity and position, which is performed in step 7;
step 7: compute the sculling and scrolling error compensation; at this step, perform the calculation of motion error compensation for velocity and position by solving the differential equations:
[Equation image media_image7.png: velocity and position error compensation differential equations]
where fibb is a specific force vector output from the inertial measurement unit (IMU); the equations are solved according to the computation frequency, the numerical method for solving the differential equations chosen in step 2, and the initial values in step 3;
leveraging the high accuracy (controlled in step 1) of the motion error compensation for angular calculation errors computed in step 4 (at time t), the motion error compensation for velocity and position is computed in this step using a combined buffer (rotation vector and rotation vector rate) from step 6 (at time t); an ODE8 (Dormand-Prince) method of order 8 is implemented to solve the differential equations and compute the compensation for velocity and position errors at time t+0.01 seconds;
step 8: calculate velocity and position outputs in a computational coordinate system;
for the NED (North-East-Down) coordinate system, after obtaining the direction cosine matrix (DCM) from step 5 and the velocity and position offsets from step 7;
this step aims to update new values of position, velocity, and angle (at time t+0.01) after calculating offsets due to motion-induced errors in angle, velocity, and position, using the relationships described by the equations:
[Equation image media_image8.png: position, velocity, and angle update equations]
in a next computation cycle (at time t+0.02 seconds and onwards), steps 4 through 8 are performed sequentially until the positioning computation process (providing the position, velocity, and angle of the device containing the inertial measurement sensor over time) is completed.
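For context, the rotation-vector integration recited in steps 2 and 4 can be sketched as follows. This is a minimal illustrative sketch only, not the applicant's implementation: it assumes the commonly used Bortz form of the rotation-vector differential equation and substitutes a classic fixed-step RK4 integrator for the recited Dormand-Prince solver; the constant angular rate and the 0.01-second step are example values, not taken from the claim.

```python
# Illustrative sketch (assumed Bortz equation; RK4 stands in for Dormand-Prince).
import math

def cross(a, b):
    """Vector cross product of two 3-element lists."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def bortz_rate(phi, omega):
    """Rotation-vector rate: omega + 0.5 phi x omega + f(|phi|) phi x (phi x omega)."""
    mag = math.sqrt(sum(p*p for p in phi))
    half = cross(phi, omega)
    if mag < 1e-12:  # small-angle limit: higher-order term vanishes
        return [w + 0.5*h for w, h in zip(omega, half)]
    coef = (1.0 - mag*math.sin(mag) / (2.0*(1.0 - math.cos(mag)))) / mag**2
    double = cross(phi, cross(phi, omega))
    return [w + 0.5*h + coef*d for w, h, d in zip(omega, half, double)]

def rk4_step(phi, omega, dt):
    """One fixed-step RK4 integration of the rotation-vector equation."""
    k1 = bortz_rate(phi, omega)
    k2 = bortz_rate([p + 0.5*dt*k for p, k in zip(phi, k1)], omega)
    k3 = bortz_rate([p + 0.5*dt*k for p, k in zip(phi, k2)], omega)
    k4 = bortz_rate([p + dt*k for p, k in zip(phi, k3)], omega)
    return [p + dt*(a + 2*b + 2*c + d)/6.0
            for p, a, b, c, d in zip(phi, k1, k2, k3, k4)]

omega = [0.1, 0.0, 0.0]                  # rad/s, assumed constant rate
phi = rk4_step([0.0, 0.0, 0.0], omega, 0.01)
# With omega parallel to phi the cross terms vanish, so phi ≈ omega*dt = [0.001, 0, 0]
```

In a full implementation, `rk4_step` would be called once per computation cycle (every 0.01 s in the claim's example), with `omega` refreshed from the IMU each cycle.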
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, “calculating motion error compensation for a velocity and position by solving the differential equation:”
[Equation image media_image7.png: velocity and position error compensation differential equations]
in the context of this claim, a person is able to solve the equations mentally if a few variables are zero. Accordingly, the claim recites at least one abstract idea – a mental process.
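As an illustration of this point (an assumed simplification written in the commonly used Bortz form of the rotation-vector equation, not language taken from the claim): if the rotation vector ϕ and the angular velocity ω are parallel, every cross-product term vanishes and the differential equation reduces to one a person could integrate mentally.

```latex
% Assumed Bortz form; with \phi \parallel \omega all cross products are zero
\dot{\phi} = \omega + \tfrac{1}{2}\,\phi \times \omega
  + f(|\phi|)\,\phi \times (\phi \times \omega)
\;\xrightarrow{\;\phi \times \omega = 0\;}\;
\dot{\phi} = \omega
\quad\Longrightarrow\quad
\phi(t + \Delta t) = \phi(t) + \omega\,\Delta t .
```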
101 Analysis – Step 2A, Prong 2
Regarding Prong 2 of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”). See above.
For the following reason(s), the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application. Claim 1 does not include a processing apparatus. The processing is recited at a high level of generality and merely automates the determining process steps.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the mental process into a practical application, the claim does not include an additional element of using a processor or a generic computer component. Generally, applying an exception using a generic computer component cannot provide an inventive concept.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than well-understood, routine, conventional activity in the field. No additional element of using a processor or a generic computer component is provided.
Therefore, claim 1 is ineligible under 35 U.S.C. § 101. The examiner recommends adding a controlling step performed by a processing apparatus.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BESUFEKAD LEMMA TESSEMA whose telephone number is (571)272-6850. The examiner can normally be reached Monday - Friday 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry, can be reached at (571) 272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BESUFEKAD LEMMA TESSEMA/Examiner, Art Unit 3665
/HUNTER B LONSBERRY/Supervisory Patent Examiner, Art Unit 3665