Prosecution Insights
Last updated: April 19, 2026
Application No. 17/542,699

MOTION STATE ESTIMATION METHOD AND APPARATUS

Final Rejection (§101, §103)
Filed: Dec 06, 2021
Examiner: ALGEHAIM, MOHAMED A
Art Unit: 3668
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Shenzhen Yinwang Intelligent Technologies Co., Ltd.
OA Round: 4 (Final)
Grant Probability: 59% (Moderate)
Estimated OA Rounds: 5-6
Estimated Time to Grant: 3y 3m
Grant Probability with Interview: 81%

Examiner Intelligence

Career Allow Rate: 59% (122 granted / 207 resolved; +6.9% vs TC avg)
Interview Lift: +21.9% higher allowance among resolved cases with an interview (strong)
Typical Timeline: 3y 3m average prosecution; 37 applications currently pending
Career History: 244 total applications across all art units

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 15.6% (-24.4% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Tech Center averages are estimates; based on career data from 207 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-4, 6-9, 11-14, & 16-23 of U.S. Application No. 17/542,699, as filed on 10/28/2025, have been examined. This Office Action is in response to the Applicant's amendments and remarks filed 10/28/2025. Claims 1, 11, and 20 are presently amended and claims 5, 10, & 15 are cancelled. Claims 1-4, 6-9, 11-14, & 16-23 are presently pending and are presented for examination.

Response to Arguments

In regards to the previous rejections under 35 U.S.C. § 101: the amendments to the claims do not overcome the previous 35 U.S.C. § 101 rejection. Applicant argues on pages 10-11 of the Remarks: "Claim 1 is not directed to a judicial exception because it cannot practically be performed in the human mind. Claim 1, as amended, is directed to a motion state estimation method based on data gathered by multiple sensors, which are physical entities and do not qualify as a mental process. Further, the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform, which requires computational capabilities beyond what a human could perform… Further, under step 2A (prong 2), claim 1 is patent eligible because it recites elements that integrate any exception into a practical application. Namely, the practical application provides an advanced method for motion state estimation based on the measurement data of the one or more target reference objects, which is separated from the other measurement data using Hough transform. Because the unnecessary data is not considered during the motion state estimation, the accuracy and the efficiency are both improved… Applicant respectfully submits that it is inappropriate to include in this mental processing group any step that cannot in fact be performed mentally, and each claim should be evaluated as a whole in determining whether the claim integrates the alleged abstract idea into a practical application. Further, claim features that cannot realistically be executed in the human mind, that is, features that require computational capabilities beyond what a human could perform, i.e., the claimed Hough transform, shall not be included in the mental processing group."

Examiner respectfully disagrees. Applicant is reminded that claims must be given their broadest reasonable interpretation. Per MPEP 2106.05(f) (Mere Instructions to Apply an Exception), the courts have identified limitations that did not integrate a judicial exception into a practical application: merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Examiner interprets the processors, the radar sensors, and the searching of the pixel features of a target reference object as instructions applied in order to reach the end result of measuring the velocity of a vehicle using stationary objects. Using a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) after the abstract idea does not integrate a judicial exception into a practical application or provide significantly more (see at least MPEP 2106.05(f)).
Further, MPEP 2106.05(g) collects examples the courts have treated as insignificant extra-solution activity, such as "obtaining information about transactions using the Internet to verify credit card transactions" and "consulting and updating an activity log." Those decisions concerned mere data gathering and subsequent display of the data, which is similar to what the claim set describes. The claim set describes obtaining information on the velocity of a vehicle. Using equations, or, as the applicant recites, the "Hough transform," is merely an instruction with the end means of obtaining information on an object. The application appears to use generic computers for mere data gathering with insignificant extra-solution activity, with the end result being a conclusion about a parameter of a vehicle. Therefore, the previous 35 U.S.C. § 101 rejection is maintained.

In regards to the previous rejection under 35 U.S.C. § 103: Applicant's arguments with respect to the independent claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. A new ground of rejection is made in view of US 2018/0162389 A1 ("Minemura").

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 6-9, 11-14, & 16-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. A claim that recites an abstract idea, a law of nature, or a natural phenomenon is directed to a judicial exception. Abstract ideas include the following groupings of subject matter, when recited as such in a claim limitation: (a) mathematical concepts: mathematical relationships, mathematical formulas or equations, mathematical calculations; (b) certain methods of organizing human activity: fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and (c) mental processes: concepts performed in the human mind (including an observation, evaluation, judgment, opinion). See the 2019 Revised Patent Subject Matter Eligibility Guidance.

Even when a judicial exception is recited in the claim, an additional claim element(s) that integrates the judicial exception into a practical application of that exception renders the claim eligible under § 101. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception.
The following examples are indicative that an additional element or combination of elements may integrate the judicial exception into a practical application: the additional element(s) reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field; the additional element(s) applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; the additional element(s) implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim; the additional element(s) effects a transformation or reduction of a particular article to a different state or thing; and the additional element(s) applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

Examples in which the judicial exception has not been integrated into a practical application include: the additional element(s) merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; the additional element(s) adds insignificant extra-solution activity to the judicial exception; and the additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use. See the 2019 Revised Patent Subject Matter Eligibility Guidance.

Claims 1, 11, & 20 recite: obtaining a plurality of pieces of measurement data, wherein each of the plurality of pieces of measurement data comprises at least velocity measurement information; determining, from the plurality of pieces of measurement data, measurement data corresponding to one or more target reference objects, including separating the measurement data corresponding to one or more target reference objects from the plurality of pieces of measurement data, wherein each target reference object is recognized based on the measurement data of the second sensor; and obtaining a motion state of the first sensor based on measurement data in the plurality of pieces of measurement data that corresponds to a target reference object, wherein the motion state comprises at least a velocity vector. As drafted, this is a device and process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer elements. The claim is practically able to be performed in the mind.
For example, but for the "A motion state estimation method, a first sensor, a second sensor; A motion state estimation apparatus, comprising a processor, a memory, and a first sensor, wherein the memory is configured to store program instructions, and the processor is configured to invoke the program instructions; A non-transitory computer readable medium, wherein the non-transitory computer readable medium stores program instructions, and when the program instructions are executed by a processor, the processor is enabled to perform the method; wherein a pixel feature of a target reference object is prestored in the second sensor, and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature; based on the matching pixel feature being found, determining a location of the target reference object; wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform" language, the recited steps of "obtaining a plurality of pieces of measurement data, wherein each of the plurality of pieces of measurement data comprises at least velocity measurement information; determining, from the plurality of pieces of measurement data, measurement data corresponding to one or more target reference objects, wherein each target reference object is recognized based on the measurement data of the second sensor; obtaining a motion state of the first sensor based on measurement data in the plurality of pieces of measurement data that corresponds to a target reference object" in the context of this claim encompass the user discerning and calculating velocity information of a vehicle based on an object relative to their vehicle. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional elements of "A motion state estimation method, a first sensor; A motion state estimation apparatus, comprising a processor, a memory, and a first sensor, a second sensor, wherein the memory is configured to store program instructions, and the processor is configured to invoke the program instructions; A non-transitory computer readable medium, wherein the non-transitory computer readable medium stores program instructions, and when the program instructions are executed by a processor, the processor is enabled to perform the method; wherein a pixel feature of a target reference object is prestored in the second sensor, and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature; based on the matching pixel feature being found, determining a location of the target reference object; wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform."
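For context on the disputed Hough transform limitation: the transform votes each edge or detection point into a (theta, rho) parameter grid, and straight-line features such as guardrails or lane markings appear as accumulator peaks. The sketch below is an editorial illustration in numpy, not code from the application; the function name, grid sizes, and point format are assumptions.

import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=100.0):
    # Vote each (x, y) point into a (theta, rho) accumulator; the peak
    # cell corresponds to the dominant straight-line feature in the data.
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)  # one rho per theta
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], (2 * rho_max) * r / (n_rho - 1) - rho_max

# toy usage: points on the vertical line x = 5 give theta ~ 0, rho ~ 5
pts = [(5.0, float(y)) for y in range(-20, 21)]
print(hough_lines(pts))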
The devices are recited at a high level of generality (i.e., a device configured to detect velocity information of a vehicle based on object detection) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "A motion state estimation method, a first sensor; A motion state estimation apparatus, comprising a processor, a memory, and a first sensor, a second sensor, wherein the memory is configured to store program instructions, and the processor is configured to invoke the program instructions; A non-transitory computer readable medium, wherein the non-transitory computer readable medium stores program instructions, and when the program instructions are executed by a processor, the processor is enabled to perform the method; wherein a pixel feature of a target reference object is prestored in the second sensor, and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature; based on the matching pixel feature being found, determining a location of the target reference object; wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform" amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claim is not patent eligible.
Similarly for claims 2-4, 6-9, 12-14, 16-19, & 21-23, which recite: wherein the target reference object is an object that is stationary relative to a reference system; determining, from the plurality of pieces of measurement data based on a feature of the target reference object, the measurement data corresponding to the target reference object, wherein the feature of the target reference object comprises a geometric feature and/or a reflectance feature of the target reference object; mapping a measurement data of the first sensor to a space of the measurement data of the second sensor, mapping the measurement data of the second sensor to a space of the measurement data of the first sensor, or mapping the measurement data of the first sensor and the measurement data of the second sensor to a common space, and determining, by using a space and based on the target reference object determined based on the measurement data of the second sensor, the measurement data in the plurality of pieces of measurement data that corresponds to the target reference object; obtaining the motion state of the first sensor through a least squares (LS) estimation and/or sequential block filtering based on the measurement data in the plurality of pieces of measurement data that corresponds to the target reference object; performing sequential filtering based on M radial velocity vectors corresponding to the target reference object and measurement matrices corresponding to the M radial velocity vectors, to obtain a motion estimate of the first sensor, wherein M≥2, the radial velocity vector comprises K radial velocity measured values in the measurement data in the plurality of pieces of measurement data that corresponds to the target reference object, the corresponding measurement matrix comprises K directional cosine vectors, and K≥1; wherein the motion velocity vector of the first sensor is a two-dimensional vector, K=2, and the measurement matrix corresponding to the radial velocity vector is: [equation image: 2D measurement matrix], wherein θm,i is an ith piece of azimuth measurement data in an mth group of measurement data of the target reference object, and i = 1 or 2; or the motion velocity vector of the first sensor is a three-dimensional vector, K=3, and the measurement matrix corresponding to the radial velocity vector is: [equation image: 3D measurement matrix], wherein θm,i is an ith piece of azimuth measurement data in an mth group of measurement data of the target reference object, φm,i is an ith piece of pitch angle measurement data in the mth group of measurement data of the target reference object, i = 1, 2, or 3, and m = 1, 2, ..., or M. As drafted, these are a device and process that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components.
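The two measurement matrices above survive only as image references in the extracted text and are shown as placeholders. Based on the claim's own wording (K directional cosine vectors built from azimuth θm,i and pitch φm,i), they plausibly take the standard radar directional-cosine form; the LaTeX below is a hedged reconstruction from that wording, not a copy of the original images.

% Hedged reconstruction from the claim wording, not the original images.
% 2D case (K = 2): one directional cosine row per azimuth measurement.
\[
H_m = \begin{bmatrix}
\cos\theta_{m,1} & \sin\theta_{m,1} \\
\cos\theta_{m,2} & \sin\theta_{m,2}
\end{bmatrix}
\]
% 3D case (K = 3): azimuth \theta_{m,i} and pitch \varphi_{m,i} per measurement.
\[
H_m = \begin{bmatrix}
\cos\theta_{m,1}\cos\varphi_{m,1} & \sin\theta_{m,1}\cos\varphi_{m,1} & \sin\varphi_{m,1} \\
\cos\theta_{m,2}\cos\varphi_{m,2} & \sin\theta_{m,2}\cos\varphi_{m,2} & \sin\varphi_{m,2} \\
\cos\theta_{m,3}\cos\varphi_{m,3} & \sin\theta_{m,3}\cos\varphi_{m,3} & \sin\varphi_{m,3}
\end{bmatrix}
\]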
For example, the limitations recited above (the stationary target reference object, the feature-based determination and space mappings, the LS estimation and/or sequential block filtering, and the measurement matrices of K directional cosine vectors) in the context of these claims encompass the user calculating velocity through formulas to obtain a motion state. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application; in particular, the claims only recite the additional elements discussed above. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
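The recited sequential block filtering folds in each group m of K radial-velocity measurements through its measurement matrix of directional cosines, which makes the LS estimate incrementally updatable. A minimal editorial sketch in numpy, assuming the standard stationary-object Doppler model in which the measured radial speeds satisfy z ~= -H v (the function name, sign convention, and toy data are assumptions, not taken from the application):

import numpy as np

def sequential_block_estimate(blocks, dim=2):
    # Information-form accumulation: each block is (H_m, z_m), where H_m is
    # a K x dim matrix of directional cosine rows and z_m holds K measured
    # radial speeds of stationary reference objects (z_m ~= -H_m @ v).
    info = np.zeros((dim, dim))  # running sum of H^T H
    rhs = np.zeros(dim)          # running sum of H^T (-z)
    for H_m, z_m in blocks:
        info += H_m.T @ H_m
        rhs += H_m.T @ (-z_m)
    return np.linalg.solve(info, rhs)

# toy usage: two blocks of K=2 azimuth measurements, true velocity (12, -1) m/s
v_true = np.array([12.0, -1.0])
blocks = []
for az in ([-0.4, 0.2], [0.6, 1.0]):
    H = np.column_stack([np.cos(az), np.sin(az)])
    blocks.append((H, -H @ v_true))
print(sequential_block_estimate(blocks))  # ~[12., -1.]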
The devices are recited at a high level of generality (i.e., a device configured to detect velocity information of a vehicle based on object detection) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claims are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 6-7, 11-14, 16-17, & 20-23 are rejected under 35 U.S.C. 103 as being unpatentable over US 2019/0369222 A1 ("Oh"), in view of US 2018/0162389 A1 ("Minemura"), and further in view of US 2022/0036043 A1 ("Sakashita").

As per claim 1, Oh discloses a motion state estimation method, comprising: obtaining a plurality of pieces of measurement data by using a first sensor, wherein each of the plurality of pieces of measurement data comprises at least velocity measurement information (see at least Oh, para. [0020-0025]: The velocity of target 60 may be represented using a 2D vector, vr, that fulfills the following relation… wherein rRT represents a 2D position vector from radar 40 to target 60, as shown in FIG. 2, and thus ṙRT represents the Doppler speed of target 60 as measured by radar 40…. wherein r represents the radial distance of target 60 as measured by radar 40, ṙ represents the radial speed of target 60 as measured by radar 40, and θ, also shown in FIG. 2, represents the azimuth angle of target 60 as measured by radar 40.); determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, wherein each target reference object is recognized based on the measurement data of the second sensor (see at least Oh, para. [0030-0035]: For example, in addition to K objects that are stationary, the actual environment may also include P objects that are moving. Radars disposed in or on vehicle 10 or vehicle 50 would detect the (K+P) objects in the surrounding environment of vehicle 10 or vehicle 50, regardless of whether stationary or moving. That is, measurement data (e.g., radial distance, radial speed and azimuth angle of detected objects) obtained by the N radars may include data from both the K stationary objects (which is desired) and the P moving objects (which is undesired)… In an embodiment below, a random sample consensus (RANSAC) algorithm is used as an example without an intention to limit the scope of the present disclosure. RANSAC is an iterative method that is able to estimate parameters of a mathematical model from a set of observed data that contains outliers. That is, among measurement data of the (K+P) objects as detected by the N radars, if the number of detections from stationary objects is dominant, the detections from moving objects will appear as outliers…); and obtaining a motion state of the first sensor based on the measurement data corresponding to the one or more target reference objects, wherein the motion state comprises at least a velocity vector of the first sensor (see at least Oh, para. [0025]: Coefficients of equation (8) include radar measurements of target 60 by radar 40 (i.e., ṙ and θ), as well as installation parameters of radar 40 (i.e., rCR, reR and θ). For each stationary object detected by radar 40, an associated equation (8) may be written. Therefore, when multiple stationary objects are detected by radar 40, a set of equations (8), each associated with one of the stationary objects, may be generated. The set of equations (8), each being a first order linear equation of vehicle 10 linear and angular velocities, may thus form a least squares problem in the mathematical sense.).

However, Oh does not explicitly disclose determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, including separating the measurement data corresponding to one or more target reference objects from the plurality of pieces of measurement data; wherein a pixel feature of a target reference object is prestored in the second sensor, and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature; based on the matching pixel feature being found, determining a location of the target reference object; and wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform.
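Oh's RANSAC-based approach quoted above separates detections of stationary objects (inliers) from moving objects (outliers) before solving the least squares problem for ego velocity. A minimal editorial sketch in numpy, assuming the 2D Doppler model in which a stationary target at azimuth theta returns radial speed -(vx cos(theta) + vy sin(theta)); the function name, iteration count, and tolerance are illustrative assumptions:

import numpy as np

def ransac_ego_velocity(theta, r_dot, iters=200, tol=0.3, seed=0):
    # Stationary objects satisfy r_dot ~= -(vx*cos(theta) + vy*sin(theta));
    # detections of moving objects violate the model and appear as outliers.
    rng = np.random.default_rng(seed)
    H = np.column_stack([np.cos(theta), np.sin(theta)])
    best = np.zeros(len(theta), dtype=bool)
    for _ in range(iters):
        pick = rng.choice(len(theta), size=2, replace=False)
        try:
            v = np.linalg.solve(H[pick], -r_dot[pick])  # minimal 2-point fit
        except np.linalg.LinAlgError:
            continue
        inliers = np.abs(H @ v + r_dot) < tol
        if inliers.sum() > best.sum():
            best = inliers
    if best.sum() < 2:
        best[:] = True  # degenerate fallback: keep all detections
    v, *_ = np.linalg.lstsq(H[best], -r_dot[best], rcond=None)
    return v, best

# toy usage: ego velocity (8, 0) m/s; the last detection is a moving object
theta = np.array([-0.5, -0.1, 0.3, 0.8, 0.2])
r_dot = -(8.0 * np.cos(theta))   # returns from stationary objects
r_dot[-1] += 4.0                 # outlier from a moving target
print(ransac_ego_velocity(theta, r_dot))  # ~[8., 0.], last flag False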
Minemura teaches determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, including separating the measurement data corresponding to one or more target reference objects from the plurality of pieces of measurement data (see at least Minemura, para. [0027]: The in-vehicle camera device 12 extracts feature points from the captured front view image. The feature points represent an object to be detected. Specifically, the in-vehicle camera device 12 extracts edge points from the front view image on the basis of brightness information of the front view image, and performs the Hough transform of the extracted edge points to generate the feature points of the object. The in-vehicle camera device 12 captures front view images and extracts those feature points of the object to be detected at the predetermined period of time... & para. [0032-0036]: The object position acquiring part 21 combines position information (first detection information) of the object detected by the radar device 13 with position information (second detection information) of the object detected by the in-vehicle camera device 12, and obtains the position information of the object. Specifically, the object position acquiring part 21 receives the first detection information transmitted from the radar device 13 and obtains the position (as the first position) of the object… The object position acquiring part 21 performs a pattern matching process of the second detection information of the object which is in the FSN state. This pattern matching process uses predetermined patterns.); wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform (see at least Minemura, para. [0027]: The in-vehicle camera device 12 extracts feature points from the captured front view image. The feature points represent an object to be detected. Specifically, the in-vehicle camera device 12 extracts edge points from the front view image on the basis of brightness information of the front view image, and performs the Hough transform of the extracted edge points to generate the feature points of the object. The in-vehicle camera device 12 captures front view images and extracts those feature points of the object to be detected at the predetermined period of time. The in-vehicle camera device 12 transmits information of the feature points of the object as the position information of the object to be detected to the ECU 20. It is acceptable to use a predetermined period of time which is the same as, or different from, the predetermined period of time used by the in-vehicle camera device 12.).
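Minemura's pattern matching against predetermined patterns, like the claimed search for a pixel feature prestored in the second sensor, is in essence a template search over image data. A minimal editorial sketch using normalized cross-correlation on grayscale numpy arrays (the function name and threshold are assumptions for illustration, not the applicant's or Minemura's implementation):

import numpy as np

def match_template(image, template, threshold=0.9):
    # Slide the prestored template over the image and score each window by
    # normalized cross-correlation; return the best location above threshold.
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_score, best_rc = -1.0, None
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = (win - win.mean()) / (win.std() + 1e-9)
            score = float((w * t).mean())
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc if best_score >= threshold else None

# toy usage: plant a bright 3x3 patch at (4, 6) and find it again
img = np.zeros((12, 12)); img[4:7, 6:9] = 1.0
print(match_template(img, img[4:7, 6:9]))  # (4, 6)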
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oh to incorporate the teaching of determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, including separating the measurement data corresponding to one or more target reference objects from the plurality of pieces of measurement data, wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform, of Minemura, with a reasonable expectation of success, in order to suppress control of incorrect operation of the safety device with high efficiency (see at least Minemura, para. [0011]).

Sakashita teaches wherein a pixel feature of a target reference object is prestored in the second sensor (see at least Sakashita, para. [0127]: Note that data used to generate training data is collected before this processing is started. For example, in a state in which the vehicle 10 is actually traveling, the camera 201 and the millimeter-wave radar 202 provided to the vehicle 10 perform sensing with respect to a region situated ahead of the vehicle 10. Specifically, the camera 201 captures an image of the region situated ahead of the vehicle 10, and stores an obtained captured image in the storage 111. The millimeter-wave radar 202 detects an object situated ahead of the vehicle 10, and stores obtained millimeter-wave data in the storage 111. The training data is generated on the basis of the captured image and millimeter-wave data accumulated in the storage 111.), and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature (see at least Sakashita, para. [0159-0163]: In Step S105, the object recognition section 224 performs processing of recognizing a target object on the basis of the low-resolution image, and the millimeter-wave image on which the geometric transformation has been performed. Specifically, the object recognition section 224 inputs, to the object recognition model 251, input data that includes the low-resolution image, the geometrically transformed signal-intensity image, and the geometrically transformed speed image. The object recognition model 251 performs processing of recognizing a target object situated ahead of the vehicle 10 on the basis of the input data.); based on the matching pixel feature being found, determining a location of the target reference object (see at least Sakashita, para. [0141]: As described above, when a geometric transformation is performed on a millimeter-wave image (a signal-intensity image and a speed image), not only the location of an object in the lateral direction and the depth direction, but also the location of the object in the height direction is given. & para. [0176]: a geometric transformation is performed on a millimeter-wave image (a signal-intensity image and a speed image) to obtain an image (a geometrically transformed signal-intensity image and a geometrically transformed speed image) of which a coordinate system has been matched to the coordinate system of a captured image, and the object recognition model 251 is caused to perform learning using the obtained image. This results in facilitating matching of each pixel of the captured image with a reflection point in the millimeter-wave image, and in improving the accuracy in learning.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oh to incorporate the teaching of wherein a pixel feature of a target reference object is prestored in the second sensor, and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature; based on the matching pixel feature being found, determining a location of the target reference object, of Sakashita, with a reasonable expectation of success, in order to improve the accuracy in recognizing a target object (see at least Sakashita, para. [0001]).

As per claim 2, Oh discloses wherein each target reference object is an object that is stationary relative to a reference system (see at least Oh, para. [0025]: For each stationary object detected by radar 40, an associated equation (8) may be written.).

As per claim 3, Oh discloses wherein after obtaining the plurality of pieces of measurement data by using the first sensor, wherein each of the plurality of pieces of measurement data comprises the at least velocity measurement information, and before obtaining the motion state of the first sensor based on the measurement data corresponding to the one or more target reference objects (see at least Oh, para. [0020-0025]: The velocity of target 60 may be represented using a 2D vector, vr, that fulfills the following relation… wherein rRT represents a 2D position vector from radar 40 to target 60, as shown in FIG. 2, and thus ṙRT represents the Doppler speed of target 60 as measured by radar 40…. wherein r represents the radial distance of target 60 as measured by radar 40, ṙ represents the radial speed of target 60 as measured by radar 40, and θ, also shown in FIG. 2, represents the azimuth angle of target 60 as measured by radar 40.), the method further comprises: determining, from the plurality of pieces of measurement data based on a feature of each target reference object, the measurement data corresponding to the one or more target reference objects (see at least Oh, para. [0022]: wherein r represents the radial distance of target 60 as measured by radar 40, ṙ represents the radial speed of target 60 as measured by radar 40, and θ, also shown in FIG. 2, represents the azimuth angle of target 60 as measured by radar 40.).

As per claim 4, Oh discloses wherein the feature of each target reference object comprises a geometric feature and/or a reflectance feature of the target reference object (see at least Oh, para. [0022]: wherein r represents the radial distance of target 60 as measured by radar 40, ṙ represents the radial speed of target 60 as measured by radar 40, and θ, also shown in FIG. 2, represents the azimuth angle of target 60 as measured by radar 40.).

As per claim 6, Oh discloses wherein the determining, from the plurality of pieces of measurement data of the first sensor based on the measurement data of the second sensor (see at least Oh, para. [0030-0035]: For example, in addition to K objects that are stationary, the actual environment may also include P objects that are moving. Radars disposed in or on vehicle 10 or vehicle 50 would detect the (K+P) objects in the surrounding environment of vehicle 10 or vehicle 50, regardless of whether stationary or moving.
That is, measurement data (e.g., radial distance, radial speed and azimuth angle of detected objects) obtained by the N radars may include data from both the K stationary objects (which is desired) and the P moving objects (which is undesired)…), the measurement data corresponding to the one or more target reference objects comprises: mapping a measurement data of the first sensor to a common space of the measurement data of the second sensor (see at least Oh, para. [0032]: The outliers may be identified visually if measurement data of the (K+P) objects as detected by the N radars are plotted in a mathematical coordinate space. Take the least squares problem represented by equation (17) as an example. For each detection of one of the (K+P) objects by one of the N radars, a corresponding representative point may be plotted in a 3D mathematical space, with the three coordinates of the representative point having values of corresponding elements of matrices X, Y and Z', respectively. Specifically, for the k-th object of the (K+P) objects as detected or otherwise measured by the n-th radar of the N radars, a representative point may be plotted in the mathematical space to represent the measurement, with each of the coordinates of the representative point being a corresponding element of matrices X, Y and Z', i.e., cos(θt+φn), sin(θt+φn) and rCRxn tp sin(θt+φn) - rCRyn tp cos(θt+φn) + ṙt); mapping the measurement data of the second sensor to the common space of the measurement data of the first sensor (see at least Oh, para. [0032]: The outliers may be identified visually if measurement data of the (K+P) objects as detected by the N radars are plotted in a mathematical coordinate space. Take the least squares problem represented by equation (17) as an example. For each detection of one of the (K+P) objects by one of the N radars, a corresponding representative point may be plotted in a 3D mathematical space, with the three coordinates of the representative point having values of corresponding elements of matrices X, Y and Z', respectively. Specifically, for the k-th object of the (K+P) objects as detected or otherwise measured by the n-th radar of the N radars, a representative point may be plotted in the mathematical space to represent the measurement, with each of the coordinates of the representative point being a corresponding element of matrices X, Y and Z', i.e., cos(θt+φn), sin(θt+φn) and rCRxn tp sin(θt+φn) - rCRyn tp cos(θt+φn) + ṙt); or mapping the measurement data of the first sensor and the measurement data of the second sensor to the common space (see at least Oh, para. [0032]: The outliers may be identified visually if measurement data of the (K+P) objects as detected by the N radars are plotted in a mathematical coordinate space. Take the least squares problem represented by equation (17) as an example. For each detection of one of the (K+P) objects by one of the N radars, a corresponding representative point may be plotted in a 3D mathematical space, with the three coordinates of the representative point having values of corresponding elements of matrices X, Y and Z', respectively. Specifically, for the k-th object of the (K+P) objects as detected or otherwise measured by the n-th radar of the N radars, a representative point may be plotted in the mathematical space to represent the measurement, with each of the coordinates of the representative point being a corresponding element of matrices X, Y and Z', i.e., cos(θt+φn), sin(θt+φn) and rCRxn tp sin(θt+φn) - rCRyn tp cos(θt+φn) + ṙt); and determining, by using the common space and based on the one or more target reference objects determined based on the measurement data of the second sensor, the measurement data that corresponds to the one or more target reference objects (see at least Oh, para. [0033]: As shown in FIG. 3, representative points of measurement data detecting stationary objects are largely located on a 2D plane 320 (referred to as the "best-fit plane"), or near plane 320 within a predetermined vicinity, as the coordinates of those representative points fulfill equation (9) or, equivalently, equation (17). These representative points that are on or near the best-fit plane 320 are referred to as "inliers" and are deemed by the RANSAC algorithm as representing measurement data from stationary objects in the surrounding environment of vehicle 10. The inliers are used to construct or otherwise form a least squares problem as represented by equation (17), and the dynamic variables of vehicle 10 in the least squares problem, i.e., linear velocities vx and vy, may be obtained accordingly by solving equation (17).).

As per claim 7, Oh discloses wherein the obtaining the motion state of the first sensor based on the measurement data corresponding to the one or more target reference objects comprises (see at least Oh, para. [0025]: Coefficients of equation (8) include radar measurements of target 60 by radar 40 (i.e., ṙ and θ), as well as installation parameters of radar 40 (i.e., rCR, reR and θ). For each stationary object detected by radar 40, an associated equation (8) may be written. Therefore, when multiple stationary objects are detected by radar 40, a set of equations (8), each associated with one of the stationary objects, may be generated. The set of equations (8), each being a first order linear equation of vehicle 10 linear and angular velocities, may thus form a least squares problem in the mathematical sense.): obtaining the motion state of the first sensor through a least squares (LS) estimation and/or sequential block filtering based on the measurement data corresponding to the one or more target reference objects (see at least Oh, para. [0030]: That is, measurement data (e.g., radial distance, radial speed and azimuth angle of detected objects) obtained by the N radars may include data from both the K stationary objects (which is desired) and the P moving objects (which is undesired).
Whereas measurement data from stationary objects are useful in constructing a correct least squares problem (e.g., as shown in equations 9, 10 or 15) which can be solved to obtain the dynamic variables of vehicle 10 or vehicle 50, measurement data from moving objects are undesired as they would disturb or otherwise skew the least squares problem, which leads to inaccurate estimates of the dynamic variables of vehicle 10 or vehicle 50 when solving the least squares problem.).

As per claim 11, Oh discloses a motion state estimation apparatus, comprising a processor, a memory, and a first sensor, wherein the memory is configured to store program instructions, and the processor is configured to invoke the program instructions to perform the following operations (see at least Oh, para. [0067]: Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices).): obtaining a plurality of pieces of measurement data by using the first sensor, wherein each of the plurality of pieces of measurement data comprises at least velocity measurement information (see at least Oh, para. [0020-0025]: The velocity of target 60 may be represented using a 2D vector, vr, that fulfills the following relation… wherein rRT represents a 2D position vector from radar 40 to target 60, as shown in FIG. 2, and thus ṙRT represents the Doppler speed of target 60 as measured by radar 40…. wherein r represents the radial distance of target 60 as measured by radar 40, ṙ represents the radial speed of target 60 as measured by radar 40, and θ, also shown in FIG. 2, represents the azimuth angle of target 60 as measured by radar 40.); determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, wherein each target reference object is recognized based on the measurement data of the second sensor (see at least Oh, para. [0030-0035]: For example, in addition to K objects that are stationary, the actual environment may also include P objects that are moving. Radars disposed in or on vehicle 10 or vehicle 50 would detect the (K+P) objects in the surrounding environment of vehicle 10 or vehicle 50, regardless of whether stationary or moving. That is, measurement data (e.g., radial distance, radial speed and azimuth angle of detected objects) obtained by the N radars may include data from both the K stationary objects (which is desired) and the P moving objects (which is undesired)… In an embodiment below, a random sample consensus (RANSAC) algorithm is used as an example without an intention to limit the scope of the present disclosure. RANSAC is an iterative method that is able to estimate parameters of a mathematical model from a set of observed data that contains outliers. That is, among measurement data of the (K+P) objects as detected by the N radars, if the number of detections from stationary objects is dominant, the detections from moving objects will appear as outliers…); and obtaining a motion state of the first sensor based on the measurement data corresponding to the one or more target reference objects, wherein the motion state comprises at least a velocity vector of the first sensor (see at least Oh, para. [0025]: Coefficients of equation (8) include radar measurements of target 60 by radar 40 (i.e., ṙ and θ), as well as installation parameters of radar 40 (i.e., rCR, reR and θ). For each stationary object detected by radar 40, an associated equation (8) may be written. Therefore, when multiple stationary objects are detected by radar 40, a set of equations (8), each associated with one of the stationary objects, may be generated. The set of equations (8), each being a first order linear equation of vehicle 10 linear and angular velocities, may thus form a least squares problem in the mathematical sense.).

However, Oh does not explicitly disclose determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, including separating the measurement data corresponding to one or more target reference objects from the plurality of pieces of measurement data; wherein a pixel feature of a target reference object is prestored in the second sensor, and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature; based on the matching pixel feature being found, determining a location of the target reference object; and wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform.

Minemura teaches determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, including separating the measurement data corresponding to one or more target reference objects from the plurality of pieces of measurement data (see at least Minemura, para. [0027]: The in-vehicle camera device 12 extracts feature points from the captured front view image. The feature points represent an object to be detected. Specifically, the in-vehicle camera device 12 extracts edge points from the front view image on the basis of brightness information of the front view image, and performs the Hough transform of the extracted edge points to generate the feature points of the object. The in-vehicle camera device 12 captures front view images and extracts those feature points of the object to be detected at the predetermined period of time... & para. [0032-0036]: The object position acquiring part 21 combines position information (first detection information) of the object detected by the radar device 13 with position information (second detection information) of the object detected by the in-vehicle camera device 12, and obtains the position information of the object.
Specifically, the object position acquiring part 21 receives the first detection information transmitted from the radar device 13 and obtains the position (as the first position) of the object…The object position acquiring part 21 performs a pattern matching process of the second detection information of the object which is in the FSN state. This pattern matching process uses predetermined patterns.); wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform (see at least Minemura, para. [0027]: The in-vehicle camera device 12 extracts feature points from the captured front view image. The feature points represent an object to be detected. Specifically, the in-vehicle camera device 12extracts edge points from the front view image on the basis of brightness information of the front view image, and performs the Hough transform of the extracted edge points to generate the feature points of the object. The in-vehicle camera device 12 capture front view images and extracts those feature points of the object to be detected at the predetermined period of time. The in-vehicle camera device12 transmits information of the feature points of the object as the position information of the object to be detected to the ECU 20. It is acceptable to use a predetermined period of time which is the same as, or different from the predetermined period of time used by the in-vehicle camera device 12.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oh to incorporate the teaching of determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, including separating the measurement data corresponding to one or more target reference objects from the plurality of pieces of measurement data, wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform, of Minemura with a reasonable expectation of success, in order to suppress control of incorrect operation of the safety device with high efficiency (see at least Minemura, para. [0011]). Sakashita teaches wherein a pixel feature of a target reference object is prestored in the second sensor (see at least Sakashita, para. [0127]: Note that data used to generate training data is collected before this processing is started. For example, in a state in which the vehicle 10 is actually traveling, the camera 201 and the millimeter-wave radar 202 provided to the vehicle 10 perform sensing with respect to a region situated ahead of the vehicle 10. Specifically, the camera 201 captures an image of the region situated ahead of the vehicle 10, and stores an obtained captured image in the storage 111. The millimeter-wave radar 202 detects an object situated ahead of the vehicle 10, and stores obtained millimeter-wave data in the storage 111. The training data is generated on the basis of the captured image and millimeter-wave data accumulated in the storage 111.), and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature (see at least Sakashita, para. 
[0159-0163]: In Step S105, the object recognition section 224 performs processing of recognizing a target object on the basis of the low-resolution image, and the millimeter-wave image on which the geometric transformation has been performed. Specifically, the object recognition section 224 inputs, to the object recognition model 251, input data that includes the low-resolution image, the geometrically transformed signal-intensity image, and the geometrically transformed speed image. The object recognition model 251 performs processing of recognizing a target object situated ahead of the vehicle10 on the basis of the input data.): based on the matching pixel feature being found, determining a location of the target reference object (see at least Sakashita, para. [0141]: As described above, when a geometric transformation is performed on a millimeter-wave image(a signal-intensity image and a speed image), not only the location of an object in the lateral direction and the depth direction, but also the location of the object in the height direction is given. & para. [0176]: a geometric transformation is performed on a millimeter-wave image (a signal-intensity image and a speed image) to obtain an image (a geometrically transformed signal-intensity image and a geometrically transformed speed image) of which a coordinate system has been matched to the coordinate system of a captured image, and the object recognition model 251 is caused to perform learning using the obtained image. This results in facilitating matching of each pixel of the captured image with a reflection point in the millimeter-wave image, and in improving the accuracy in learning.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oh to incorporate the teaching of wherein a pixel feature of a target reference object is prestored in the second sensor, and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature: based on the matching pixel feature being found, determining a location of the target reference object of Sakashita with a reasonable expectation of success in order to improve the accuracy in recognizing a target object (see at least Sakashita, para. [0001]). As per claim 12 Oh discloses wherein each target reference object is an object that is stationary relative to a reference system (see at least Oh, para. [0025]: For each stationary object detected by radar 40, any associated equation (8) may be written.). As per claim 13 Oh discloses wherein after obtaining the plurality of pieces of measurement data by using a first sensor, wherein each of the plurality of pieces of measurement data comprises the at least velocity measurement information, and before obtaining the motion state of the first sensor based on the measurement data corresponding to the one or more target reference objects(see at least Oh, para. [0020-0025]: The velocity of target 60 may be represented using a 2D vector, v r, that fulfills the following relation…wherein rRT represents a 2D position vector from radar 40 to target 60, as shown in FIG. 2, and thus ṙ RT represents the Doppler speed of target 60 as measured by radar 40….wherein r represents the radial distance of target 60 as measured by radar 40, ṙ represents the radial speed of target 60 as measured by radar 40, and 8, also shown in FIG. 
As per claim 12, Oh discloses wherein each target reference object is an object that is stationary relative to a reference system (see at least Oh, para. [0025]: For each stationary object detected by radar 40, any associated equation (8) may be written.).

As per claim 13, Oh discloses wherein after obtaining the plurality of pieces of measurement data by using a first sensor, wherein each of the plurality of pieces of measurement data comprises the at least velocity measurement information, and before obtaining the motion state of the first sensor based on the measurement data corresponding to the one or more target reference objects (see at least Oh, para. [0020-0025]: The velocity of target 60 may be represented using a 2D vector, v_T, that fulfills the following relation…wherein r_RT represents a 2D position vector from radar 40 to target 60, as shown in FIG. 2, and thus ṙ_RT represents the Doppler speed of target 60 as measured by radar 40….wherein r represents the radial distance of target 60 as measured by radar 40, ṙ represents the radial speed of target 60 as measured by radar 40, and θ, also shown in FIG. 2, represents the azimuth angle of target 60 as measured by radar 40.), the method further comprises: determining, from the plurality of pieces of measurement data based on a feature of each target reference object, the measurement data corresponding to the one or more target reference objects (see at least Oh, para. [0022], reproduced above).

As per claim 14, Oh discloses wherein the feature of each target reference object comprises a geometric feature and/or a reflectance feature of the target reference object (see at least Oh, para. [0022], reproduced above).
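Claims 13-14 recite selecting measurement data using a geometric and/or reflectance feature of the reference object. As a rough, hypothetical illustration of reflectance-based gating (the record layout and threshold are invented for the example; a real pipeline would still need Doppler-based stationarity tests of the kind Oh describes):

```python
import numpy as np

# Hypothetical detection record: (range_m, azimuth_rad, doppler_mps, rcs_dbsm)
detections = np.array([
    [12.0, 0.10, -8.1, 14.0],   # strong, pole-like reflector
    [30.0, -0.40, -7.9, 2.0],   # weak clutter return
    [18.0, 0.25, 3.5, 15.0],    # strong but moving; needs a Doppler check too
])

def gate_by_reflectance(dets, min_rcs_dbsm=10.0):
    """Keep only detections whose reflectance feature (RCS) matches the
    expected signature of a reference object such as a sign or guardrail."""
    return dets[dets[:, 3] >= min_rcs_dbsm]

print(gate_by_reflectance(detections))
```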
As per claim 16, Oh discloses wherein the determining, from the plurality of pieces of measurement data of the first sensor based on the measurement data of the second sensor (see at least Oh, para. [0030-0035]: For example, in addition to K objects that are stationary, the actual environment may also include P objects that are moving. Radars disposed in or on vehicle 10 or vehicle 50 would detect the (K+P) objects in the surrounding environment of vehicle 10 or vehicle 50, regardless stationary or moving. That is, measurement data (e.g., radial distance, radial speed and azimuth angle of detected objects) obtained by the N radars may include data from both the K stationary objects (which is desired) and the P moving objects (which is undesired)…), the measurement data corresponding to the one or more target reference objects comprises: mapping a measurement data of the first sensor to a common space of the measurement data of the second sensor (see at least Oh, para. [0032]: The outliers may be identified visually if measurement data of the (K+P) objects as detected by the N radars are plotted in a mathematical coordinated space. Take the least squares problem represented by equation (17) as an example. For each detection of one of the (K+P) objects by one of the N radars, a corresponding representative point may be plotted in a 3D mathematical space, with the three coordinates of the representative point having values of corresponding elements of matrices X, Y and Z', respectively. Specifically, for the k-th object of the (K+P) objects as detected or otherwise measured by the n-th radar of the N radars, a representative point may be plotted in the mathematical space to represent the measurement, with each of the coordinates of the representative point being a corresponding element of matrices X, Y and Z', i.e., cos(θ_k+φ_n), sin(θ_k+φ_n) and r_CRx,n ψ̇ sin(θ_k+φ_n) − r_CRy,n ψ̇ cos(θ_k+φ_n) + ṙ_k); mapping the measurement data of the second sensor to the common space of the measurement data of the first sensor (see at least Oh, para. [0032], reproduced above); or mapping the measurement data of the first sensor and the measurement data of the second sensor to the common space (see at least Oh, para. [0032], reproduced above); and determining, by using the common space and based on the one or more target reference objects determined based on the measurement data of the second sensor, the measurement data that corresponds to the one or more target reference objects (see at least Oh, para. [0033]: As shown in FIG. 3, representative points of measurement data detecting stationary objects are largely located on a 2D plane 320 (referred as the "best-fit plane"), or near plane 320 within a predetermined vicinity, as the coordinates of those representative points fulfill equation (9) or, equivalently, equation (17). These representative points that are on or near the best-fit plane 320 are referred as "inliers" and are deemed by the RANSAC algorithm as representing measurement data from stationary objects in the surrounding environment of vehicle 10. The inliers are used to construct or otherwise form a least squares problem as represented by equation (17), and the dynamic variables of vehicle 10 in the least squares problem, i.e., linear velocities v_x and v_y, may be obtained accordingly by solving equation (17).).
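Oh's claim-16 mapping rests on plotting representative points in a common 3D space and letting RANSAC separate stationary-object inliers (on the best-fit plane) from moving-object outliers. A minimal RANSAC plane fit along those lines, with synthetic data standing in for the matrices X, Y and Z':

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """Fit z = a*x + b*y + c to 3D points with RANSAC and return the
    inlier mask; detections of moving objects fall off the plane."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_mask = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        A = np.c_[sample[:, :2], np.ones(3)]
        try:
            a, b, c = np.linalg.solve(A, sample[:, 2])
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample; draw again
        resid = np.abs(pts[:, 0] * a + pts[:, 1] * b + c - pts[:, 2])
        mask = resid < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (40, 2))
stationary = np.c_[xy, 0.3 * xy[:, 0] - 0.7 * xy[:, 1] + 0.1]  # on a plane
movers = rng.uniform(-1, 1, (6, 3))                            # off-plane outliers
pts = np.vstack([stationary, movers])
print(ransac_plane(pts).sum(), "inliers of", len(pts))
```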
As per claim 17, Oh discloses wherein the obtaining the motion state of the first sensor based on the measurement data that corresponds to the one or more target reference objects comprises (see at least Oh, para. [0025]: Coefficients of equation (8) includes radar measurements of target 60 by radar 40 (i.e., ṙ and θ), as well as installation parameters of radar 40 (i.e., r_CRx, r_CRy and φ). For each stationary object detected by radar 40, any associated equation (8) may be written. Therefore, when multiple stationary objects are detected by radar 40, a set of equations (8) each associated with one of the stationary objects may be generated. The set of equations (8), each being a first order linear equation of vehicle 10 linear and angular velocities, may thus form a least squares problem in mathematical sense.): obtaining the motion state of the first sensor through a least squares (LS) estimation and/or sequential block filtering based on the measurement data that corresponds to the target reference object (see at least Oh, para. [0030]: That is, measurement data (e.g., radial distance, radial speed and azimuth angle of detected objects) obtained by the N radars may include data from both the K stationary objects (which is desired) and the P moving objects (which is undesired). Whereas measurement data from stationary objects are useful in constructing a correct least squares problem (e.g., as shown in equations 9, 10 or 15) which can be solved to obtain the dynamic variables of vehicle 10 or vehicle 50, measurement data from moving objects are undesired as it would disturb or otherwise skew the least squares problem, which leads to inaccurate estimates of the dynamic variables of vehicle 10 or vehicle 50 when solving the least squares problem.).
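The least squares machinery Oh describes reduces, in the simplest planar case with zero yaw rate, to one linear equation per stationary detection relating radial speed to the sensor's velocity components. A compact sketch under those simplifying assumptions (the sign convention and noise model are illustrative, not Oh's exact equations):

```python
import numpy as np

def ego_velocity_ls(azimuths, radial_speeds):
    """Stack one first-order equation per stationary detection,
    -r_dot = [cos(theta), sin(theta)] . [vx, vy], and solve the
    resulting least squares problem for the sensor velocity."""
    H = np.c_[np.cos(azimuths), np.sin(azimuths)]   # measurement matrix
    v, *_ = np.linalg.lstsq(H, -np.asarray(radial_speeds), rcond=None)
    return v

# Simulate: sensor moving at (10, 2) m/s past stationary reflectors.
true_v = np.array([10.0, 2.0])
theta = np.linspace(-1.2, 1.2, 25)
r_dot = -(np.cos(theta) * true_v[0] + np.sin(theta) * true_v[1])
r_dot += np.random.default_rng(2).normal(0, 0.05, theta.size)  # Doppler noise
print(ego_velocity_ls(theta, r_dot))   # ~ [10.0, 2.0]
```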
As per claim 20, Oh discloses a non-transitory computer readable medium, wherein the non-transitory computer readable medium stores program instructions, and when the program instructions are executed by a processor (see at least Oh, para. [0067]: Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices).), the processor is enabled to perform the method of: obtaining a plurality of pieces of measurement data by using a first sensor, wherein each of the plurality of pieces of measurement data comprises at least velocity measurement information (see at least Oh, para. [0020-0025], reproduced in the discussion of claim 13 above); determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, wherein each target reference object is recognized based on the measurement data of the second sensor (see at least Oh, para. [0030-0035]: For example, in addition to K objects that are stationary, the actual environment may also include P objects that are moving. Radars disposed in or on vehicle 10 or vehicle 50 would detect the (K+P) objects in the surrounding environment of vehicle 10 or vehicle 50, regardless stationary or moving. That is, measurement data (e.g., radial distance, radial speed and azimuth angle of detected objects) obtained by the N radars may include data from both the K stationary objects (which is desired) and the P moving objects (which is undesired)… In an embodiment below, a random sample consensus (RANSAC) algorithm is used as an example without an intention to limit the scope of the present disclosure. RANSAC is an iterative method that is able to estimate parameters of a mathematical model from a set of observed data that contains outliers. That is, among measurement data of the (K+P) objects as detected by the N radars, if number of detections from stationary objects are dominant, the detections from moving objects will appear as outliers…); and obtaining a motion state of the first sensor based on the measurement data corresponding to the one or more target reference objects, wherein the motion state comprises at least a velocity vector of the first sensor (see at least Oh, para. [0025], reproduced in the discussion of claim 17 above).

However, Oh does not explicitly disclose determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, including separating the measurement data corresponding to one or more target reference objects from the plurality of pieces of measurement data; wherein a pixel feature of a target reference object is prestored in the second sensor, and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature; based on the matching pixel feature being found, determining a location of the target reference object; and wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform.

Minemura teaches determining, from the plurality of pieces of measurement data of the first sensor based on measurement data of a second sensor, measurement data corresponding to one or more target reference objects, including separating the measurement data corresponding to one or more target reference objects from the plurality of pieces of measurement data (see at least Minemura, para. [0027], reproduced above, & para. [0032-0036]: The object position acquiring part 21 combines position information (first detection information) of the object detected by the radar device 13 with position information (second detection information) of the object detected by the in-vehicle camera device 12, and obtains the position information of the object. Specifically, the object position acquiring part 21 receives the first detection information transmitted from the radar device 13 and obtains the position (as the first position) of the object…The object position acquiring part 21 performs a pattern matching process of the second detection information of the object which is in the FSN state. This pattern matching process uses predetermined patterns.); wherein the measurement data corresponding to the one or more target reference objects is separated from the plurality of pieces of measurement data by feature recognition via Hough transform (see at least Minemura, para. [0027], reproduced above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oh to incorporate these teachings of Minemura, with a reasonable expectation of success, in order to suppress control of incorrect operation of the safety device with high efficiency (see at least Minemura, para. [0011]).

Sakashita teaches wherein a pixel feature of a target reference object is prestored in the second sensor (see at least Sakashita, para. [0127], reproduced above), and the method further comprises: searching the measurement data of the second sensor for a matching pixel feature that matches the prestored pixel feature (see at least Sakashita, para. [0159-0163], reproduced above); based on the matching pixel feature being found, determining a location of the target reference object (see at least Sakashita, para. [0141] & para. [0176], reproduced above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oh to incorporate these teachings of Sakashita, with a reasonable expectation of success, in order to improve the accuracy in recognizing a target object (see at least Sakashita, para. [0001]).

As per claim 21, Oh discloses wherein each target reference object is an object that is stationary relative to a reference system (see at least Oh, para. [0025]: For each stationary object detected by radar 40, any associated equation (8) may be written.).
As per claim 22, Oh discloses wherein after obtaining the plurality of pieces of measurement data by using a first sensor, wherein each of the plurality of pieces of measurement data comprises the at least velocity measurement information, and before obtaining the motion state of the first sensor based on the measurement data corresponding to the one or more target reference objects (see at least Oh, para. [0020-0025], reproduced in the discussion of claim 13 above), the method further comprises: determining, from the plurality of pieces of measurement data based on a feature of each target reference object, the measurement data corresponding to the one or more target reference objects (see at least Oh, para. [0022], reproduced above).

As per claim 23, Oh discloses wherein the feature of each target reference object comprises a geometric feature and/or a reflectance feature of the target reference object (see at least Oh, para. [0022], reproduced above).

Claims 8-9 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Oh, in view of Minemura, in view of Sakashita, and further in view of US 2012/0086596 A1 ("Insanic").

As per claim 8, Oh discloses wherein the obtaining the motion state of the first sensor through the least squares (LS) estimation and/or sequential block filtering based on the measurement data corresponding to the one or more target reference objects (see at least Oh, para. [0025], reproduced in the discussion of claim 17 above). However, Oh does not explicitly disclose performing the sequential block filtering based on M radial velocity vectors corresponding to the one or more target reference object and measurement matrices corresponding to the M radial velocity vectors, to obtain a motion estimate of the first sensor, wherein M>2, the radial velocity vector comprises K radial velocity measured values in the measurement data corresponding to the one or more target reference objects, the corresponding measurement matrix comprises K directional cosine vectors, and K>1.

Insanic teaches performing the sequential block filtering based on M radial velocity vectors corresponding to the one or more target reference object and measurement matrices corresponding to the M radial velocity vectors, to obtain a motion estimate of the first sensor, wherein M>2, the radial velocity vector comprises K radial velocity measured values in the measurement data corresponding to the one or more target reference objects, the corresponding measurement matrix comprises K directional cosine vectors, and K>1 (see at least Insanic, para. [0105]: To clearly demonstrate dual-Doppler velocity retrieval, FIG. 14 is presented, where two observation points are employed to detect a point target moving at velocity v_T = v_x x̂ + v_y ŷ, where the velocity in the ẑ dimension is discarded for the purpose of illustration. By use of basic trigonometric transformations, the following equation for the observed radial velocity for two nodes in a network can be written as [v_r1; v_r2] = [cos(φ1), sin(φ1); cos(φ2), sin(φ2)] [v_x; v_y] (16), or, more concisely, v_R = A v_T (17), yielding a measured radial to target vector velocity relationship v_T = A^-1 v_R (18). Note in Eq. 16 that if φ1 = ±φ2, which is the case when the target lies in the line of sight between the observation points, the matrix A becomes singular and A^-1 cannot be determined. This degenerate case reflects the physical situation where the target and radars are in line with one another, resulting in a one-dimensional geometry instead of the two required for vector velocity retrieval.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oh to incorporate this teaching of Insanic, with a reasonable expectation of success, in order to compute 3D velocity data from a plurality of three or more radar nodes scanning the same event within the required scan interval (see at least Insanic, para. [0011]).

As per claim 9, Oh does not explicitly disclose wherein the motion velocity vector of the first sensor is a two-dimensional vector, K=2, and the measurement matrix corresponding to the radial velocity vector is: [claimed 2-column measurement matrix (image not reproduced)], wherein θ_m,i is an ith piece of azimuth measurement data in an mth group of measurement data of the target reference object, and i = 1 or 2; or the motion velocity vector of the first sensor is a three-dimensional vector, K=3, and the measurement matrix corresponding to the radial velocity vector is: [claimed 3-column measurement matrix (image not reproduced)], wherein θ_m,i is an ith piece of azimuth measurement data in an mth group of measurement data of the target reference object, φ_m,i is an ith piece of pitch angle measurement data in the mth group of measurement data of the target reference object, i = 1, 2, or 3, and m = 1, 2, ..., or M.

Insanic teaches these limitations (see at least Insanic, para. [0105-0106]: the dual-Doppler relationship of equations (16)-(18), reproduced above, and: FIG. 15 extends the 2D case from the previous example to illustrate the geometry for multiple radar nodes observing a point target moving at the velocity v_T in a three-dimensional environment. Similar to the 2D case, this can be extended to accommodate any number of observing points. Additionally, one can include an arbitrary error, ε_n, per measurement, per node, as in [v_r1; v_r2; …; v_rN] = [cos(φ1)cos(θ1), sin(φ1)cos(θ1), sin(θ1); cos(φ2)cos(θ2), sin(φ2)cos(θ2), sin(θ2); …; cos(φN)cos(θN), sin(φN)cos(θN), sin(θN)] v_T + ε (20)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oh to incorporate the teaching of Insanic with a reasonable expectation of success, in order to compute 3D velocity data from a plurality of three or more radar nodes scanning the same event within the required scan interval (see at least Insanic, para. [0011]).
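Insanic's equations (16)-(18) are directly executable: build the 2x2 direction-cosine matrix from the two azimuths and invert it. A short sketch (the angle and velocity values are arbitrary test inputs):

```python
import numpy as np

def dual_doppler(phi1, phi2, vr1, vr2):
    """Equations (16)-(18): invert the 2x2 direction-cosine matrix A to
    recover the 2D target velocity from two radial-speed observations.
    Singular when phi1 = +/-phi2 (target in line with both nodes)."""
    A = np.array([[np.cos(phi1), np.sin(phi1)],
                  [np.cos(phi2), np.sin(phi2)]])
    return np.linalg.solve(A, [vr1, vr2])   # v_T = A^-1 v_R

v_t = np.array([3.0, -1.5])                 # true target velocity
phi1, phi2 = np.radians(20), np.radians(75)
vr = np.array([[np.cos(phi1), np.sin(phi1)],
               [np.cos(phi2), np.sin(phi2)]]) @ v_t
print(dual_doppler(phi1, phi2, *vr))        # -> [ 3.  -1.5]
```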
As per claim 18, Oh discloses wherein the obtaining the motion state of the first sensor through the least squares (LS) estimation and/or sequential block filtering based on the measurement data in the plurality of pieces of measurement data that corresponds to the one or more target reference objects (see at least Oh, para. [0025], reproduced in the discussion of claim 17 above). However, Oh does not explicitly disclose performing the sequential block filtering based on M radial velocity vectors corresponding to the one or more target reference object and measurement matrices corresponding to the M radial velocity vectors, to obtain a motion estimate of the first sensor, wherein M>2, the radial velocity vector comprises K radial velocity measured values in the measurement data corresponding to the one or more target reference objects, the corresponding measurement matrix comprises K directional cosine vectors, and K>1. Insanic teaches these limitations (see at least Insanic, para. [0105], reproduced in the discussion of claim 8 above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oh to incorporate this teaching of Insanic, with a reasonable expectation of success, in order to compute 3D velocity data from a plurality of three or more radar nodes scanning the same event within the required scan interval (see at least Insanic, para. [0011]).

As per claim 19, Oh does not explicitly disclose the two-dimensional (K=2) and three-dimensional (K=3) measurement matrices recited in the claim, as discussed for claim 9 above. Insanic teaches these limitations (see at least Insanic, para. [0105-0106], reproduced in the discussion of claim 9 above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oh to incorporate the teaching of Insanic with a reasonable expectation of success, in order to compute 3D velocity data from a plurality of three or more radar nodes scanning the same event within the required scan interval (see at least Insanic, para. [0011]).
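Equation (20) generalizes the two-node case to N nodes and three velocity components; with more rows than unknowns the system is solved in the least squares sense, which also absorbs the per-node errors ε_n. A sketch with synthetic geometry (node count, angles, and noise level are arbitrary choices for the example):

```python
import numpy as np

def velocity_3d_ls(phis, thetas, v_r):
    """Overdetermined form of equation (20): each node n contributes a
    direction-cosine row [cos(phi)cos(theta), sin(phi)cos(theta),
    sin(theta)]; solve for the 3D velocity by least squares."""
    H = np.column_stack([np.cos(phis) * np.cos(thetas),
                         np.sin(phis) * np.cos(thetas),
                         np.sin(thetas)])
    v, *_ = np.linalg.lstsq(H, v_r, rcond=None)
    return v

rng = np.random.default_rng(3)
v_true = np.array([4.0, -2.0, 0.5])
phis = rng.uniform(0, 2 * np.pi, 8)         # azimuths of 8 radar nodes
thetas = rng.uniform(-0.4, 0.4, 8)          # elevations
H = np.column_stack([np.cos(phis) * np.cos(thetas),
                     np.sin(phis) * np.cos(thetas),
                     np.sin(thetas)])
v_r = H @ v_true + rng.normal(0, 0.02, 8)   # radial speeds + per-node error
print(velocity_3d_ls(phis, thetas, v_r))    # ~ [ 4.  -2.   0.5]
```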
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED ABDO ALGEHAIM whose telephone number is (571) 272-3628. The examiner can normally be reached Monday-Friday, 8-5 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fadey Jabr, can be reached at 571-272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMED ABDO ALGEHAIM/
Primary Examiner, Art Unit 3668

Prosecution Timeline

Dec 06, 2021
Application Filed
Sep 28, 2024
Non-Final Rejection — §101, §103
Dec 26, 2024
Response Filed
Apr 19, 2025
Final Rejection — §101, §103
Jun 11, 2025
Response after Non-Final Action
Jun 30, 2025
Request for Continued Examination
Jul 03, 2025
Response after Non-Final Action
Aug 09, 2025
Non-Final Rejection — §101, §103
Oct 28, 2025
Response Filed
Jan 24, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594963
DETECTING AN UNKNOWN OBJECT BY A LEAD AUTONOMOUS VEHICLE (AV) AND UPDATING ROUTING PLANS FOR FOLLOWING AVs
2y 5m to grant Granted Apr 07, 2026
Patent 12597865
INVERTER
2y 5m to grant Granted Apr 07, 2026
Patent 12589978
TRUCK-TABLET INTERFACE
2y 5m to grant Granted Mar 31, 2026
Patent 12565235
DETECTING A CONSTRUCTION ZONE BY A LEAD AUTONOMOUS VEHICLE (AV) AND UPDATING ROUTING PLANS FOR FOLLOWING AVs
2y 5m to grant Granted Mar 03, 2026
Patent 12559228
THERMAL MANAGEMENT SYSTEM FOR AN AIRCRAFT INCLUDING AN ELECTRIC PROPULSION ENGINE
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
59%
Grant Probability
81%
With Interview (+21.9%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 207 resolved cases by this examiner. Grant probability derived from career allow rate.
