DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
This Office Action is in response to the claims filed on January 12, 2026.
Claims 1-20 have been presented for examination.
Claims 1-20 are currently rejected.
Claims 1-20 are rejected under 35 U.S.C. 101.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ahn et al. (U.S. Patent Publication Number 2019/0219699) in view of Ross et al. (U.S. Patent Publication Number 2021/0107537).
Response to Arguments
Objection to the Specification
A new Title was submitted on January 12, 2026, and overcomes the objection to the specification. Accordingly, the objection to the specification is withdrawn.
35 U.S.C. 101
Applicant's arguments filed on January 12, 2026 have been fully considered but they are not persuasive.
The Applicant argues that the features of the claims cannot be performed in the human mind. Specifically, the Applicant argues that the inclusion of a computing device and one or more sensors of the vehicle, which provide vehicle pose information for vehicle navigation, cannot be performed in the human mind and therefore does not fall within the “mental process” grouping of abstract ideas (Applicant Remarks, page 10).
The Examiner has considered the arguments presented and respectfully disagrees. First, the “computing device” and “one or more sensors” of the vehicle are recited at a high level of generality and amount to no more than generic computing components used to “apply” the otherwise mental judgment in a general-purpose computing environment. Further, these additional elements are merely generic computer components used to automate the determining step and do not integrate the judicial exception into a practical application. For example, the “determining, by the computing device and using a first model” limitation merely uses the generic computing device and a model to perform a determination that may be performed mentally by a person, or with the aid of pen and paper. Therefore, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The Applicant further argues that claim 1 integrates the alleged abstract idea into a practical application of autonomous driving or assisted driving, and contributes an improved technical solution to the field of autonomous driving by providing improved stability and reliability of the vehicle navigation process (Applicant Remarks, page 10).
The Examiner has considered the arguments presented and respectfully disagrees. In Berkheimer v. HP Inc., 881 F.3d 1360 (Fed. Cir. 2018), the Federal Circuit held that improvements are only considered “to the extent they are captured in the claims.” Berkheimer at 1369. Such improvements to “stability” or “reliability” are not captured in the language of the claims; therefore, the Applicant’s argument is not persuasive. Moreover, the claims do not recite autonomous driving or assisted driving elements. Therefore, the Applicant’s arguments are not persuasive.
For these reasons, the Examiner maintains the 35 U.S.C. 101 rejection.
35 U.S.C. 102
The Applicant’s arguments, see Applicant Remarks filed on January 12, 2026, appear to be primarily directed to the amended claim language. The Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because amendments shift the scope of claims and necessitate a new ground of rejection, which is made in view of Ross et al. (U.S. Patent Publication Number 2021/0107537).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1
Claim 1. A method for data processing, wherein the method is applied to a vehicle, the vehicle comprises one or more sensors and a computing device, and the method comprises:
determining, by the computing device and using a first model, first pose information of the vehicle at a second moment based on pose information of the vehicle at a first moment, wherein the first model is a pose estimation model from the first moment to the second moment;
obtaining, by the computing device, data collected by the one or more sensors;
determining, by the computing device and using a first filter, second pose information of the vehicle at the second moment based on the first pose information and the data; and
performing, by the computing device, a navigation process for the vehicle based on the second pose information.
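For illustration only (not part of the claim language or the cited references): the predict-then-update structure recited in claim 1 (a "first model" propagating pose, then a "first filter" refining it with sensor data) can be read onto a standard Kalman-filter cycle. The following sketch uses hypothetical names and values.

```python
import numpy as np

# Hypothetical sketch of claim 1's structure as a Kalman-filter cycle.
# State x = [position, velocity]; all matrices and values are illustrative.

F = np.array([[1.0, 0.1],   # state transition over a 0.1 s interval
              [0.0, 1.0]])  # (the "first model": first -> second moment)
H = np.array([[1.0, 0.0]])  # a sensor observing position only
Q = np.eye(2) * 1e-3        # process noise covariance
R = np.array([[1e-2]])      # measurement noise covariance

def predict(x, P):
    """Determine first pose information at the second moment (claim 1)."""
    return F @ x, F @ P @ F.T + Q

def update(x_pred, P_pred, z):
    """Determine second pose information from prediction plus sensor data."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)    # correct with the measurement
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x = np.array([[0.0], [1.0]])                 # pose at the first moment
P = np.eye(2)
x_pred, P_pred = predict(x, P)               # "first pose information"
x_post, P_post = update(x_pred, P_pred, np.array([[0.12]]))  # "second pose"
```

The filtered estimate x_post then feeds whatever downstream navigation step consumes the pose.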
101 Analysis - Step 1: Statutory category – Yes
The claim recites a method including at least one step. The claim falls within one of the four statutory categories. See MPEP 2106.03.
101 Analysis - Step 2A Prong one evaluation: Judicial Exception – Yes – Mental processes
In Step 2A, Prong one of the 2019 Patent Eligibility Guidance (PEG), a claim is to be analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) mental processes, and/or c) certain methods of organizing human activity.
The Office submits that the foregoing bolded limitation(s) constitute judicial exceptions in terms of “mental processes” because, under their broadest reasonable interpretation, the limitations can be “performed in the human mind, or by a human using a pen and paper.” See MPEP 2106.04(a)(2)(III).
The claim recites the limitation of determining first pose information of the vehicle at a second moment based on pose information of the vehicle at a first moment, wherein the first model is a pose estimation model from the first moment to the second moment; determining second pose information of the vehicle at the second moment based on the first pose information and the data; and performing a navigation process for the vehicle based on the second pose information.
These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of “by the computing device.” That is, aside from that element, nothing in the claim precludes the steps from practically being performed in the mind. For example, but for the “by the computing device,” “by the computing device and using a first model,” and “by the computing device and using a first filter” elements, the claim encompasses a person looking at collected data and forming a simple judgment. Specifically, the claim encompasses a person mentally determining first pose information at the second moment based on pose information observed at the first moment, such as visually judging where the vehicle is traveling from. The person may alternatively perform the determination using a simple equation, which is a model, with the aid of pen and paper; may similarly determine the second pose information at the second moment; and may filter the data by, for example, manually selecting only the relevant information. As a further example, the person may perform the navigation process by providing a list or a physical map of navigation steps for the vehicle based on the known second pose information. The mere recitation of “by the computing device,” “by the computing device and using a first model,” and “by the computing device and using a first filter” does not take the claim limitations out of the mental-process grouping.
Thus, the claim recites a mental process.
101 Analysis - Step 2A Prong two evaluation: Practical Application - No
In Step 2A, Prong two of the 2019 PEG, a claim is to be evaluated whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in MPEP 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. The courts have indicated that additional elements such as: merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
The Office submits that the foregoing underlined limitation(s) recite additional elements that do not integrate the recited judicial exception into a practical application.
The claim recites additional element or step of obtaining, by the computing device, data collected by the one or more sensors.
The obtaining step is recited at a high level of generality (i.e., as a general means of data gathering) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. The vehicle comprising one or more sensors is also recited at a high level of generality and merely describes how to generally “apply” the otherwise mental judgments using a generic or general-purpose vehicle processing environment, i.e., a computer.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis - Step 2B evaluation: Inventive concept - No
In Step 2B of the 2019 PEG, a claim is to be evaluated as to whether the claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the obtaining step was considered to be insignificant extra-solution activity in Step 2A, and it is therefore re-evaluated in Step 2B to determine whether it is more than well-understood, routine, conventional activity in the field. The background recites that the sensors are all conventional sensors mounted on the vehicle, and the specification does not provide any indication that the vehicle controller is anything other than a conventional computer within a vehicle. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Accordingly, a conclusion that the obtaining step is well-understood, routine, conventional activity is supported under Berkheimer.
Thus, the claim is ineligible.
Claim 8
Claim 8. An apparatus for data processing, comprising:
at least one processor; and
one or more memories coupled to the at least one processor and storing programming instructions for execution by the at least one processor to perform the following operations:
determining first pose information of a vehicle at a second moment based on pose information of the vehicle at a first moment by using a first model, wherein the first model is a pose estimation model from the first moment to the second moment, and the first model includes a vehicle-based kinematic model;
obtaining data collected by one or more sensors;
determining second pose information of the vehicle at the second moment based on the first pose information and the data by using a first filter; and
performing a navigation process for the vehicle based on the second pose information.
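For illustration only (hypothetical names and values, not drawn from the record): the "vehicle-based kinematic model" recited in claim 8 is commonly exemplified by a kinematic bicycle model that propagates pose from a first moment to a second moment.

```python
import math

# Hypothetical sketch of a vehicle-based kinematic model (bicycle model).
# All parameter names and values are illustrative only.

def kinematic_step(x, y, heading, speed, steer, wheelbase=2.7, dt=0.1):
    """Propagate vehicle pose from a first moment to a second moment (dt later)."""
    x += speed * math.cos(heading) * dt          # advance along heading
    y += speed * math.sin(heading) * dt
    heading += speed / wheelbase * math.tan(steer) * dt  # turn with steering
    return x, y, heading

# Straight-line check: a zero steering angle leaves the heading unchanged.
pose = kinematic_step(0.0, 0.0, 0.0, speed=10.0, steer=0.0)
```

The resulting predicted pose would then play the role of the "first pose information" refined by the first filter.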
101 Analysis - Step 1: Statutory category – Yes
The claim recites an apparatus comprising at least one processor and one or more memories. The claim falls within one of the four statutory categories (i.e., a machine). See MPEP 2106.03.
101 Analysis - Step 2A Prong one evaluation: Judicial Exception – Yes – Mental processes
In Step 2A, Prong one of the 2019 Patent Eligibility Guidance (PEG), a claim is to be analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) mental processes, and/or c) certain methods of organizing human activity.
The Office submits that the foregoing bolded limitation(s) constitute judicial exceptions in terms of “mental processes” because, under their broadest reasonable interpretation, the limitations can be “performed in the human mind, or by a human using a pen and paper.” See MPEP 2106.04(a)(2)(III).
The claim recites the limitation of determining first pose information of the vehicle at a second moment based on pose information of the vehicle at a first moment, wherein the first model is a pose estimation model from the first moment to the second moment; determining second pose information of the vehicle at the second moment based on the first pose information and the data; and performing a navigation process for the vehicle based on the second pose information.
These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance in the mind. That is, nothing in the claim precludes the steps from practically being performed in the mind. For example, the claim encompasses a person looking at collected data and forming a simple judgment. Specifically, the claim encompasses a person mentally determining first pose information at the second moment based on pose information observed at the first moment, such as visually judging where the vehicle is traveling from. The person may alternatively perform calculations using a known kinematic model for a given vehicle, with the aid of pen and paper; may similarly determine the second pose information at the second moment; and may filter the data by, for example, manually selecting only the relevant information. As a further example, the person may perform the navigation process by providing a list or a physical map of navigation steps for the vehicle based on the known second pose information.
Thus, the claim recites a mental process.
101 Analysis - Step 2A Prong two evaluation: Practical Application - No
In Step 2A, Prong two of the 2019 PEG, a claim is to be evaluated whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in MPEP 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. The courts have indicated that additional elements such as: merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
The Office submits that the foregoing underlined limitation(s) recite additional elements that do not integrate the recited judicial exception into a practical application.
The claim recites the additional elements of at least one processor; one or more memories coupled to the at least one processor and storing programming instructions for execution by the at least one processor to perform the recited operations; and the step of obtaining data collected by one or more sensors.
The obtaining step is recited at a high level of generality (i.e., as a general means of data gathering) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. The “at least one processor” and “one or more memories coupled to the at least one processor and storing programming instructions for execution by the at least one processor” are also recited at a high level of generality and merely describe how to generally “apply” the otherwise mental judgments using a generic or general-purpose vehicle processing environment, i.e., a computer.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis - Step 2B evaluation: Inventive concept - No
In Step 2B of the 2019 PEG, a claim is to be evaluated as to whether the claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the obtaining step was considered to be insignificant extra-solution activity in Step 2A, and it is therefore re-evaluated in Step 2B to determine whether it is more than well-understood, routine, conventional activity in the field. The background recites that the sensors are all conventional sensors mounted on the vehicle, and the specification does not provide any indication that the vehicle controller is anything other than a conventional computer within a vehicle. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Accordingly, a conclusion that the obtaining step is well-understood, routine, conventional activity is supported under Berkheimer.
Thus, the claim is ineligible.
Claim 15
Independent claim 15 recites similar limitations to those contained in claim 8 and is rejected under 35 U.S.C. 101 for the same rationale provided above. The additional elements “at least one processor,” “one or more memories,” “data interface,” and “the one or more memories are coupled to the at least one processor through the data interface and store programming instructions for execution by the at least one processor” are recited at a high level of generality and merely describe how to generally “apply” the otherwise mental judgments using a generic or general-purpose vehicle processing environment, i.e., a computer. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Thus, the claim is ineligible.
Dependent Claims
Dependent claims 2-7, 9-14, and 16-20 do not recite any further limitations that cause the claim(s) to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-7, 9-14, and 16-20 are not patent eligible under the same rationale as provided for in the rejection of the independent claims.
Therefore, claims 1-20 are ineligible under 35 USC §101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ahn et al. (U.S. Patent Publication Number 2019/0219699) in view of Ross et al. (U.S. Patent Publication Number 2021/0107537).
Regarding claim 1, Ahn discloses a method for data processing, wherein the method is applied to a vehicle, the vehicle comprises one or more sensors and a computing device (Ahn ¶ 14 discloses “localizers and pose state estimator are executed by a processor unit or processor units of a computing device or system, such as a computing device or system located at the vehicle. Localizers generate pose estimates from remote sensor data”), and the method comprises:
determining, by the computing device and using a first model (Ahn ¶¶ 41-43 discloses a prediction system 204 that “considers one or more vehicle poses generated by the pose system 230 and/or reference data 226” and includes “a machine-learned trajectory development model,” wherein “The vehicle autonomy system 202 includes one or more computing devices, such as the computing device 211, that may implement all or parts of the ... prediction system 204.”), first pose information of the vehicle at a second moment based on pose information of the vehicle at a first moment and a first model, (Ahn ¶ 14 discloses generating a vehicle pose for a set of time stamps, which includes a second moment, such that the “pose state estimator generates the vehicle pose based at least in part on a previous vehicle pose,” see ¶ 19, wherein the pose estimation uses “a Kalman filter or similar algorithm,” see ¶ 19. Also see Fig. 4.)
wherein the first model is a pose estimation model from the first moment to the second moment; (Ahn ¶ 19 discloses “the pose state estimator may implement a Kalman filter or similar algorithm,” such that “the pose state estimator generates vehicle poses for the vehicle over different time stamps.”)
obtaining, by the computing device (Ahn ¶ 14), data collected by the one or more sensors; (Ahn ¶ 14 discloses “Localizers generate pose estimates from remote sensor data,” also see ¶ 13 “the pose system also receives motion sensor data from one or more motion sensors that sense the motion of the vehicle”)
Ahn does not expressly disclose:
determining, by the computing device and using a first filter, second pose information of the vehicle at the second moment based on the first pose information and the data; and
performing, by the computing device, a navigation process for the vehicle based on the second pose information.
However, Ross discloses:
determining, by the computing device and using a first filter, second pose information of the vehicle at the second moment based on the first pose information and the data; and (Ross ¶ 81 discloses “each position update provides an additional observation that can be used as an input to the error correction component 124 (e.g., a Kalman Filter or similar estimation algorithm)” wherein “both a first location determination component and a second location determination component can be configured and operated to send estimated current positions to the error correction component” based on a Kalman filter linear dynamic model, also see ¶ 81, such that “with the Kalman filter 226, can estimate a current location of the train 200 based on a known starting position and measurements of the direction and velocity of the train 200 over time [i.e., a second pose information],” see ¶ 101. Also see Fig. 1 depicting using the sensor data to determine second location information)
performing, by the computing device, a navigation process for the vehicle based on the second pose information. (Ross ¶ 69 discloses “Upon currently operating a vehicle (e.g., vehicle 100) about a current predetermined path, the trusted accident avoidance control system 120, and particularly a tachometer operating as one of the first or second location determination components 122-1 or 122-2, can be operated ...”)
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the determination of pose information of Ahn with using a first filter and a computing device to determine second pose information of the vehicle at the second moment based on the first pose information and the data, as disclosed by Ross, with reasonable expectation of success, to provide the most accurate and reliable velocity management of the vehicle 100 at any given time as it travels along the predetermined path (Ross ¶ 93), and to minimize statistical error of a final estimate (Ross ¶ 81), rendering the limitation to be an obvious modification.
Regarding claim 2, Ahn in combination with Ross discloses the method according to claim 1, wherein the determining first pose information of the vehicle at a second moment based on pose information of the vehicle at a first moment and a first model comprises:
determining an initial state transition matrix of the vehicle at the first moment based on the pose information of the vehicle at the first moment and the first model; and (Ahn ¶ 14 discloses that data may be arranged in a table [i.e., a matrix], the data including sensor data corresponding to vehicle pose, such that “the pose state estimator generates the vehicle pose for that time stamp based at least in part on the pose estimate or estimates from that time stamp,” see ¶ 19.)
determining the first pose information based on the initial state transition matrix. (Ahn ¶ 14 discloses generating “pose estimates from ... reference data,” wherein the reference data is arranged in a table [i.e., a matrix])
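For illustration only (hypothetical values, not part of the record): the "initial state transition matrix" of claim 2 can be read as the matrix that maps the pose state at the first moment to the predicted pose at the second moment under the first model, for example a constant-velocity model discretized over the interval.

```python
import numpy as np

# Hypothetical sketch: a constant-velocity pose model discretized over dt
# yields a state transition matrix; the first pose information is then the
# matrix applied to the pose state at the first moment.

dt = 0.1                                # first moment -> second moment
Phi = np.array([[1.0, 0.0, dt, 0.0],    # x  depends on x and vx
                [0.0, 1.0, 0.0, dt],    # y  depends on y and vy
                [0.0, 0.0, 1.0, 0.0],   # vx held constant
                [0.0, 0.0, 0.0, 1.0]])  # vy held constant

pose_t1 = np.array([0.0, 0.0, 10.0, 0.0])  # [x, y, vx, vy] at first moment
pose_t2 = Phi @ pose_t1                    # "first pose information"
```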
Regarding claim 3, Ahn in combination with Ross discloses the method according to claim 1, wherein the method further comprises:
determining first covariance information of the vehicle at the second moment based on covariance information of the vehicle at the first moment and the first model; and (Ahn ¶ 22 discloses estimating a covariance indicator for the first pose estimate, wherein estimating a pose state includes executing a Kalman filter or other model that considers covariance indicators for the previous vehicle poses, see ¶ 61.)
determining second covariance information of the vehicle at the second moment based on the first covariance information and the data. (Ahn ¶ 22 discloses estimating a covariance indicator for a second pose estimate, wherein estimating a pose state includes executing a Kalman filter or other model that considers covariance indicators for the previous vehicle poses, see ¶ 61.)
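For illustration only (hypothetical values): the two covariance determinations recited in claim 3 track the standard Kalman covariance propagation, where the model propagates the covariance forward and the sensor data then reduces it.

```python
import numpy as np

# Hypothetical sketch of claim 3's two covariance steps in Kalman terms:
# (1) propagate the covariance through the first model;
# (2) reduce it using the sensor data via the Kalman gain.

Phi = np.array([[1.0, 0.1], [0.0, 1.0]])  # first model (state transition)
Q = np.eye(2) * 1e-3                       # process noise
H = np.array([[1.0, 0.0]])                 # sensor observes position only
R = np.array([[1e-2]])                     # measurement noise

P0 = np.eye(2)                             # covariance at the first moment
P1 = Phi @ P0 @ Phi.T + Q                  # first covariance (model-based)
K = P1 @ H.T @ np.linalg.inv(H @ P1 @ H.T + R)  # Kalman gain
P2 = (np.eye(2) - K @ H) @ P1              # second covariance (data-based)
```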
Regarding claim 4, Ahn in combination with Ross discloses the method according to claim 1, wherein:
before the determining second pose information of the vehicle at the second moment based on the first pose information and the data, the method further comprises: obtaining a first calibration result, wherein the first calibration result comprises at least one of an online calibration result or an offline calibration result; and (Ahn ¶ 88 discloses that “the architecture 800 may operate in the capacity of either a server or a client machine in server-client network environments [i.e., online],” such that the pose estimator receives a pose estimate in step 304, see Fig. 3, before determining a pose for a next time stamp in step 306, wherein “The localizer finds the best-fit x/y/yaw projection by identifying the x/y/yaw projection with the lowest overall error [i.e., calibration result].” One having ordinary skill in the art would recognize that receiving the pose estimate is a calibration result because calibration merely includes adjusting precisely, see Merriam-Webster “calibrate,” which is disclosed by Ahn in finding the best-fit projection.)
the determining second pose information of the vehicle at the second moment based on the first pose information and the data comprises: performing error compensation on the data based on the first calibration result to obtain error-compensated data; and (Ahn ¶ 19 discloses that the “pose state estimator generates the vehicle pose based at least in part on a previous vehicle pose,” such that “The localizer selects the roll/pitch/height projection that minimizes the error [i.e., performing error compensation] between the ground intensity data and the ground reflectivity map to find a best-fit roll/pitch/height projection,” see ¶ 75)
determining the second pose information based on the first pose information and the error-compensated data. (Ahn ¶ 19 discloses that the “pose state estimator generates the vehicle pose based at least in part on a previous vehicle pose,” the pose estimation being performed by a first and second localizer which select “the roll/pitch/height projection that minimizes the error [i.e., error compensation] between the ground intensity data and the ground reflectivity map to find a best-fit roll/pitch/height projection,” see ¶ 75)
Regarding claim 5, Ahn in combination with Ross discloses the method according to claim 4, wherein:
the first calibration result comprises one or more of a wheel speed scale coefficient, a zero offset of an inertial measurement unit (IMU), or a lever arm parameter. (Ahn ¶ 27 discloses “The pose state estimator 138 also receives motion sensor data from one or more motion sensors such as, for example, an inertial measurement unit (IMU) 139”)
Regarding claim 6, Ahn in combination with Ross discloses the method according to claim 4, wherein:
before the performing error compensation on the data based on the first calibration result, the method further comprises: performing a check on the data, wherein the check comprises one or more of a rationality check or a cross-check. (Ahn ¶ 74 discloses correlating the points to ground point data [i.e., performing a check on the data including a cross-check] such that the correlated data would describe an error, also see ¶ 76 disclosing determining a covariance indicator based on the errors.)
Regarding claim 7, Ahn in combination with Ross discloses the method according to claim 1, wherein:
the determining second pose information of the vehicle at the second moment based on the first pose information and the data (Ahn ¶ 19) comprises: performing an optimal estimation based on the first pose information and the data to obtain the second pose information. (Ahn ¶ 7 discloses that the pose estimates are generated by pose localizers, wherein the localizers find the best-fit [i.e., performing an optimal estimation] x/y/yaw projection by identifying the x/y/yaw projection with the lowest overall error.)
Regarding claim 8, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 1 for the reasons discussed above. In addition, Ahn further discloses at least one processor (Ahn ¶ 14); and one or more memories coupled to the at least one processor and storing programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 1 (Ahn ¶ 51). Ross further discloses “a vehicle-based kinematic model” (Ross ¶ 81 “The statistical “optimal estimation algorithm” (i.e. Kalman Filter) then applies a linear dynamic model (which models the kinematic dynamic process)”.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the determination of pose information of Ahn with using a first filter and a kinematic model to determine second pose information of the vehicle at the second moment based on the first pose information and the data, as disclosed by Ross, with reasonable expectation of success, to minimize the statistical error of a final estimate (Ross ¶ 81) and smooth out statistical errors from the sequence of incoming measurements (Ross ¶ 99), rendering the limitation an obvious modification.
Regarding claim 9, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 2 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 2 (Ahn ¶¶ 14 and 51).
Regarding claim 10, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 3 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 3 (Ahn ¶¶ 14 and 51).
Regarding claim 11, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 4 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 4 (Ahn ¶¶ 14 and 51).
Regarding claim 12, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 5 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 5 (Ahn ¶¶ 14 and 51).
Regarding claim 13, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 6 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 6 (Ahn ¶¶ 14 and 51).
Regarding claim 14, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 7 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 7 (Ahn ¶¶ 14 and 51).
Regarding claim 15, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 1 for the reasons discussed above. In addition, Ahn further discloses at least one processor (Ahn ¶ 14); one or more memories (Ahn ¶ 51); and a data interface (Ahn ¶ 51), wherein the one or more memories are coupled to the at least one processor and store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 1 (Ahn ¶ 51).
Regarding claim 16, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 2 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 2 (Ahn ¶¶ 14 and 51).
Regarding claim 17, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 3 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 3 (Ahn ¶¶ 14 and 51).
Regarding claim 18, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 4 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 4 (Ahn ¶¶ 14 and 51).
Regarding claim 19, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 5 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 5 (Ahn ¶¶ 14 and 51).
Regarding claim 20, Ahn in combination with Ross discloses the parallel limitations contained in parent claim 6 for the reasons discussed above. In addition, Ahn further discloses that the one or more memories store programming instructions for execution by the at least one processor to perform the operations disclosed by parent claim 6 (Ahn ¶¶ 14 and 51).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHANIE T SU whose telephone number is (571)272-5326. The examiner can normally be reached Monday to Friday, 9:30AM - 5:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANISS CHAD can be reached at (571)270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEPHANIE T SU/Patent Examiner, Art Unit 3662