Prosecution Insights
Last updated: April 19, 2026
Application No. 18/334,115

ERROR MITIGATION TECHNIQUES FOR DEPENDENT SENSOR SIGNALS

Non-Final Office Action: §101, §103, §112
Filed: Jun 13, 2023
Examiner: AIELLO, JEFFREY P
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Torc Robotics, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (461 granted / 599 resolved; +9.0% vs TC avg)
Interview Lift: +24.1% (strong), measured on resolved cases with interview
Typical Timeline: 3y 1m avg prosecution; 18 currently pending
Career History: 617 total applications across all art units

Statute-Specific Performance

§101: 35.7% (-4.3% vs TC avg)
§103: 34.5% (-5.5% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 599 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings filed on 06/13/2023 are accepted.

Claim Rejections - 35 USC § 112(b)

35 U.S.C. 112 reads as follows: (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-13 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.

Regarding claim 1: line 8 of claim 1 recites “…LiDAR data of the sensor data.” This is the first recitation of “sensor data” in the claim, so there is insufficient antecedent basis for this limitation. Appropriate correction is required.

Additionally regarding claim 1: line 9 of claim 1 recites “…a LiDAR sensor of the plurality of sensors.” This is the first recitation of “plurality of sensors” in the claim, so there is insufficient antecedent basis for this limitation. Appropriate correction is required.

Claims 2-13 are rejected under 35 U.S.C. 112(b) due to their dependency from a rejected base claim. Appropriate correction is required.

Claim Rejections - 35 USC § 101 (Non-Statutory)

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Specifically, claim 1 recites:

A method for geo-denied localization for an automated vehicle, the method comprising: detecting, by a processor of an automated vehicle, a geo-denied state of the automated vehicle based upon geo-location data from a geo-location device of the automated vehicle; invoking, by the processor, programming of a localization loop of a map localizer and a motion estimator in response to the processor detecting the geo-denied state; during execution of a first iteration of the localization loop in the geo-denied state: generating, by the processor, an estimated location of the automated vehicle by applying the map localizer on LiDAR data of the sensor data obtained for the first iteration from a LiDAR sensor of the plurality of sensors; and generating, by the processor, an estimated motion of the automated vehicle by applying the motion estimator on the estimated location from the map localizer and on the sensor data obtained for the first iteration.

The claim limitations in the abstract idea have been highlighted in bold; the remaining limitations are “additional elements.” Similar limitations comprise the abstract ideas of claim 14.

Under Step 1 of the analysis, claim 1 does belong to a statutory category, namely it is a process claim. Likewise, claim 14 is an apparatus claim.

Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Under Step 2A, Prong One, the broadest reasonable interpretation of the steps recited in claim 1 includes at least one judicial exception, that being a mathematical process. This can be seen in the claimed process steps of “invoking…programming of a localization loop of a map localizer and a motion estimator in response to the processor detecting the geo-denied state…” (see, for example, FIGS. 3-4; ¶¶60-63, ¶¶84-85 of the instant specification), “generating…an estimated location of the automated vehicle…” (see, for example, FIGS. 3-4; ¶86 of the instant specification), and “generating…an estimated motion of the automated vehicle…” (see, for example, FIGS. 3-4; ¶87 of the instant specification), each of which encompasses mathematical concepts requiring specific mathematical calculations (the “Kalman filter” described in ¶68, ¶75, and ¶87 of the instant specification) to perform the method for geo-denied localization for an automated vehicle. For example, when given the broadest reasonable interpretation in light of the specification, the steps of “invoking,” “generating,” and “generating” are performed using one or more algorithms (model(s)). Claim 14 recites analogous judicial exceptions.

In claim 1, the steps of “invoking,” “generating,” and “generating” each fall within the mathematical concepts grouping of abstract ideas. The recited process steps are considered together as a single abstract idea for further analysis. Claim 14 recites similar abstract ideas. (Step 2A, Prong One: YES.)

Step 2A, Prong Two of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception(s) into a practical application of the exception. This evaluation is performed by (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (b) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. 2019 PEG Section III(A)(2), 84 Fed. Reg. at 54-55.
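The rejection's mathematical-concept characterization rests on the specification's Kalman filter (¶68, ¶75, ¶87) as the "specific mathematical calculation" behind the generating steps. Purely for orientation, a minimal one-dimensional Kalman filter looks like the following; the constant-state model, noise values, and measurements are invented for illustration and are not taken from the application.

```python
# Minimal 1-D Kalman filter: an illustrative example of the kind of
# calculation the rejection points to. All values are hypothetical;
# nothing here comes from the application's actual implementation.

def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle for a scalar state.

    x, p : prior state estimate and its variance
    z    : new measurement (e.g., a barometric height reading)
    q, r : process and measurement noise variances (assumed values)
    """
    # Predict: a constant-state model, so only the variance grows.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)          # gain in [0, 1]
    x = x + k * (z - x)      # corrected estimate
    p = (1 - k) * p          # reduced uncertainty
    return x, p

# Feed a few noisy readings near 1.0; the estimate converges toward 1.0
# while the variance shrinks.
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, p = kalman_step(x, p, z)
```

Each call trades off trust in the prediction against trust in the new measurement, which is why the filter is routinely used to fuse dependent sensor signals.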
Each of the process steps “invoking,” “generating,” and “generating” is recited as being performed by a computer (“The autonomy system 250 may include hardware and software components for a perception system, including a camera system 220, a LiDAR system 222, a radar system 232, a GNSS receiver 208, an IMU 224, and/or a perception module 202. The autonomy system 250 may further include a transceiver 226, a processor 210, a memory 214, a map localizer 204 (sometimes referred to as a "mapping/localization module"), and a vehicle control module 206 (sometimes referred to as an "operating module").” FIG. 2; ¶36 of the instant specification). The computer is recited at a high level of generality (“processor”) and is used as a tool to perform the generic computer functions of collecting data and performing the recited process steps. Because the computer is used to perform an abstract idea, as discussed above in Step 2A, Prong One, it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). The recited process steps comprise “insignificant extra-solution” activities. See MPEP 2106.05(g) “Insignificant Extra-Solution Activity”; Parker v. Flook, 437 U.S. 584, 588-89, 198 USPQ 193, 196 (1978). It should be noted that because the courts have made clear that mere physicality or tangibility of an additional element is not a relevant consideration in the eligibility analysis, the physical nature of the controller does not affect this analysis. Id.

Claim 1 also recites the limitation of “detecting…a geo-denied state of the automated vehicle based upon geo-location data from a geo-location device of the automated vehicle” (see, for example, FIG. 3; ¶62 of the instant specification). Likewise, claim 14 recites the limitation of “…generating sensor data…” (see, for example, FIG. 3; ¶62 of the instant specification). However, each of these limitations is a data-gathering step which merely comprises an “insignificant extra-solution” (post-solution) activity, and the physical nature of the controller again does not affect the analysis. See MPEP 2106.05(g); Parker v. Flook, 437 U.S. 584, 588-89 (1978).

Claim 1 also recites the additional elements (equipment) of “an automated vehicle” (see, for example, FIG. 1; ¶28 of the instant specification), “a geo-location device” (see, for example, FIG. 1; ¶32), “a processor” (see, for example, FIG. 2; ¶36), “a LiDAR sensor,” and “plurality of sensors” (see, for example, FIG. 2; ¶32). Claim 1 additionally recites data comprising “a geo-denied state” (see, for example, FIG. 3; ¶60), “geo-location data” (see, for example, FIG. 2; ¶32, ¶37), and “LiDAR data of the sensor data” (see, for example, FIG. 2; ¶32; FIGS. 3-4; ¶86). However, these additional elements merely comprise generic, conventional, non-specific equipment, computer hardware and software elements, and data/information, each set forth at a highly generic level and each comprising an “insignificant extra-solution” activity. See MPEP 2106.05(g); Parker v. Flook, 437 U.S. 584, 588-89 (1978).
Claim 14 recites analogous additional elements. The recited additional elements can also be viewed as nothing more than an attempt to generally link the use of the judicial exceptions to the technological environment of a computer. Noting MPEP 2106.04(d)(I): “It is notable that mere physicality or tangibility of an additional element or elements is not a relevant consideration in Step 2A Prong Two. As the Supreme Court explained in Alice Corp., mere physical or tangible implementation of an exception does not guarantee eligibility. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 224, 110 USPQ2d 1976, 1983-84 (2014) ("The fact that a computer ‘necessarily exist[s] in the physical, rather than purely conceptual, realm,’ is beside the point").”

Thus, under Step 2A, Prong Two of the analysis, even when viewed in combination, the additional elements recited in claim 1, as well as claim 14, do not integrate the recited judicial exception into a practical application, and the claim is directed to the judicial exception. No specific practical application is associated with the claimed method; for instance, nothing is done once either the estimated location or estimated motion of the automated vehicle is generated.

Under Step 2B, the claims do not include additional elements sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A, Prong Two, merely amount to a general-purpose computer system that attempts to apply the abstract idea in a technological environment, limiting the abstract idea to a particular field of use, and/or merely insignificant extra-solution activity (claims 1, 14). Such insignificant extra-solution activity, e.g., data gathering and output, when re-evaluated under Step 2B is further found to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, and electronically scanning or extracting data from a physical document). The combination and arrangement of the above-identified additional elements, when analyzed under Step 2B, likewise fails to necessitate a conclusion that claim 1, as well as claim 14, amounts to significantly more than the abstract idea. Therefore, claim 1, as well as claim 14, is not patent eligible under § 101.

With regard to the dependent claims, claims 2-13 and 15-20 provide additional features/steps which are part of an expanded algorithm, so these limitations are considered part of an expanded abstract idea of the independent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of the appropriate paragraphs of AIA 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9, 11-12, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Thomas (U.S. Patent Publication 2023/0236022 A1) in view of Ben-Moshe (WIPO/PCT Patent Publication WO 2020/003319 A1, provided with this action).
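As a structural aid, the localization loop recited in claim 1 (a map localizer producing a location from LiDAR data, then a motion estimator fusing that location with the remaining sensor data, once per iteration, invoked only in the geo-denied state) can be sketched as follows. Every function body, name, and data shape here is a hypothetical placeholder, not the application's actual implementation.

```python
# Illustrative-only reading of the claimed localization loop. The stubs
# stand in for real components: a scan-matching map localizer and a
# sensor-fusion motion estimator. All names and shapes are assumptions.

def map_localizer(lidar_scan, hd_map):
    # Placeholder: a real system would match the scan against the map
    # (e.g., via scan matching such as ICP) to estimate a location.
    ox, oy = hd_map["origin"]
    return ox + lidar_scan[0], oy + lidar_scan[1]

def motion_estimator(location, sensor_data):
    # Placeholder: a real system would fuse IMU/odometry with the
    # location estimate, e.g., via a Kalman filter.
    return {"x": location[0], "y": location[1], "v": sensor_data["wheel_speed"]}

def localization_loop(geo_denied, frames, hd_map):
    """Run one map-localize + motion-estimate pass per sensor frame."""
    estimates = []
    if not geo_denied:          # the loop is invoked only in the geo-denied state
        return estimates
    for frame in frames:        # each frame = one iteration of the loop
        loc = map_localizer(frame["lidar"], hd_map)
        estimates.append(motion_estimator(loc, frame))
    return estimates
```

The point of the sketch is the data flow the claim recites: within each iteration the map localizer's output feeds the motion estimator, rather than the two running independently.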
Regarding claim 1, Thomas teaches a method of localization for an automated vehicle (Thomas: FIGS. 1-2, 6A-6B, 7; Abstract; ¶2 [“Localization refers to a process of an autonomous vehicle determining its location, including a position and orientation.”]), the method comprising:

detecting, by a processor of an automated vehicle, a state of the automated vehicle based upon geo-location data from a geo-location device of the automated vehicle (Thomas: FIGS. 1-2; ¶46 [“…vehicle 200 includes at least one platform sensor (not explicitly illustrated) that measures or infers properties of a state or a condition of vehicle 200. In some examples, vehicle 200 includes platform sensors such as a global positioning system (GPS) receiver, an inertial measurement unit (IMU), a wheel speed sensor, a wheel brake pressure sensor…”]);

invoking, by the processor, programming of a localization loop of a map localizer in response to the processor detecting the state of the automated vehicle (Thomas: ¶19 [“…embodiments, systems, methods, and computer program products described herein include and/or implement localization functional safety. A vehicle (such as an autonomous vehicle) establishes its position and orientation through localization. Localization is based on a representation of the environment, wherein the vehicle interprets the representation of the environment and other data to determine its position and orientation. Source point cloud data is iteratively processed to calculate a transformation between the point cloud and a map. A rotation matrix and transformation vector are used to calculate a pose of the vehicle based on a transform calculated from the iterative localization. 
Localization also computes a first metric associated with the calculated pose.”] {Each “localization” necessarily comprises a “loop,” as recited in claim 1, with a first iteration of the localization taught by Thomas analogous to the “first iteration of the localization loop” recited in the claim.}; FIG. 4; ¶¶57-61 [“…autonomous vehicle compute 400 includes perception system 402 (sometimes referred to as a perception module), planning system 404 (sometimes referred to as a planning module), localization system 406…planning system 404 periodically or continuously receives data from perception system 402 (e.g., data associated with the classification of physical objects, described above) and planning system 404 updates the at least one trajectory or generates at least one different trajectory based on the data generated by perception system 402. In some embodiments, planning system 404 receives data associated with an updated position of a vehicle (e.g., vehicles 102) from localization system 406 and planning system 404 updates the at least one trajectory or generates at least one different trajectory based on the data generated by localization system 406.”]; FIG. 6A; ¶¶74-75 [“The LiDAR map prior 616 is stored in a database, such as the database 410 (FIG. 4). In examples, the LiDAR localization 612 determines the position of the vehicle in the environment based on a comparison between a point cloud (e.g., LiDAR point cloud 614) and a map (e.g., LiDAR map prior 616). In some embodiments, the LiDAR map prior 616 includes a combined point cloud of the environment, where the combined point cloud is a combination of multiple point clouds associated with the environment.…the localization function 612 outputs a LiDAR pose 618. In the example of FIG. 6A, the localization function 612 operates according to iterative closest point (ICP) localization at an ASIL-B level. 
An iterative closest point process uses LiDAR sensor data (e.g., LiDAR point cloud 614) and a map (e.g., LiDAR map prior 616) to calculate the pose that minimizes the squared error between point clouds. ICP localization minimizes a difference between two point clouds. The iterative closest point process is used to reconstruct a 2D or 3D environment in which to localize the AV and achieve optimal path planning. In some embodiments, a transformation is calculated that aligns two point clouds. At each iteration, a correspondence between the source and target point clouds is updated, and the transformation that best aligns them is iteratively determined until convergence is attained.”]); during execution of a first iteration of the localization loop (Thomas: FIG. 4; ¶¶57-61, FIG. 6A; ¶¶74-75, ¶19 [“…embodiments, systems, methods, and computer program products described herein include and/or implement localization functional safety. A vehicle (such as an autonomous vehicle) establishes its position and orientation through localization. Localization is based on a representation of the environment, wherein the vehicle interprets the representation of the environment and other data to determine its position and orientation. Source point cloud data is iteratively processed to calculate a transformation between the point cloud and a map. A rotation matrix and transformation vector are used to calculate a pose of the vehicle based on a transform calculated from the iterative localization. 
Localization also computes a first metric associated with the calculated pose.”] {Each “localization” necessarily comprises a “loop,” as recited in claim 1, with a first iteration of the localization taught by Thomas analogous to the “first iteration of the localization loop” recited in the claim.}): generating, by the processor, an estimated location of the automated vehicle by applying the map localizer on LiDAR data of the sensor data obtained for the first iteration from a LiDAR sensor of the plurality of sensors (Thomas: FIG. 4; ¶¶57-61, FIG. 6A; ¶¶74-75 {See above.}); and generating, by the processor, an estimated motion of the automated vehicle by applying the estimated location from the map localizer and on the sensor data obtained for the first iteration (Thomas: FIGS. 1-2; ¶24 [“…collectively as routes 106) are each associated with (e.g., prescribe) a sequence of actions (also known as a trajectory) connecting states along which an AV can navigate. Each route 106 starts at an initial state (e.g., a state that corresponds to a first spatiotemporal location, velocity, and/or the like) and a final goal state (e.g., a state that corresponds to a second spatiotemporal location that is different from the first spatiotemporal location) or goal region (e.g. a subspace of acceptable states (e.g., terminal states))…routes 106 include a plurality of acceptable state sequences (e.g., a plurality of spatiotemporal location sequences), the plurality of state sequences associated with (e.g., defining) a plurality of trajectories.”]; FIG. 6A; ¶75 [“…the localization function 612 outputs a LiDAR pose 618. In the example of FIG. 6A, the localization function 612 operates according to iterative closest point (ICP) localization at an ASIL-B level. 
An iterative closest point process uses LiDAR sensor data (e.g., LiDAR point cloud 614) and a map (e.g., LiDAR map prior 616) to calculate the pose that minimizes the squared error between point clouds.”] {As disclosed in ¶¶70-72 of the instant specification.} {See above.}).

However, Thomas fails to explicitly teach a geo-denied state of the automated vehicle, or a motion estimator. Ben-Moshe, in an analogous art, discloses a method for fusing image-based navigation with additional location inputs to obtain a more accurate location, and a particle filter method for converging candidate locations (Ben-Moshe: pg. 1, ln 10-15). Therein, Ben-Moshe teaches detecting a geo-denied state of an automated vehicle (Ben-Moshe: pg. 55, ln 12-33 [“…automobile 2,5D navigation in GNSS denied environments. However, additional devices can benefit from such navigation. In the exemplary parking embodiments, a complementary sensor-based mechanism for accurate automobile navigation in underground parking-lots and freeway tunnels is described. The system and methods described harnesses, amongst other inputs, road-based events (e.g., crossing a speed-bump) as detected by a phone's sensors in order to estimate the vehicle's ego-location. The novelty of the system and methods described includes characterization and detection of the road based events. The implementation offers a GNSS-level of accuracy in GNSS-denied environments, a feature unavailable today.”]), and a motion estimator for estimating motion of the automated vehicle (Ben-Moshe: FIGS. 5-7, 11B, pg. 40, ln 6-9 [“The current air pressure can be used in order to detect height changes at sub meter accuracy. Using a Kalman filter we were able to detect changes in height (i.e., detecting movement from one floor to another).”]; FIGS. 5-7, 11B, pg. 47, ln 30-31 [“…the improved algorithm may optionally fuse the 3D optical-flow sensor reading with the barometer sensor, in some cases using a Kalman filter.”]).
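Thomas's quoted ¶75 describes iterative closest point (ICP) localization: update the correspondence between scan and map points, solve for the rigid transform that best aligns them, and repeat until convergence. A toy 2-D sketch of that procedure, assuming NumPy and omitting the outlier rejection, kd-tree search, and 3-D poses a production LiDAR localizer would need:

```python
import numpy as np

# Toy point-to-point ICP in 2-D. Each iteration: (1) match every scan
# point to its nearest map point, (2) fit the rigid transform minimizing
# the squared error between the matched sets, (3) apply it, repeat.

def best_rigid_transform(src, dst):
    """SVD (Kabsch) solution for the rotation R and translation t that
    minimize the squared error between matched point sets."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Align src (scan) to dst (map) by iterating correspondence + fit."""
    cur = src.copy()
    for _ in range(iters):
        # Correspondence step: nearest map point for each scan point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t              # apply x -> R x + t row-wise
    return cur
```

For a small initial offset the nearest-neighbor correspondences are already correct, so the fit recovers the misalignment almost immediately; with a poor initial guess, real systems seed ICP from odometry or a prior pose.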
It would have been obvious to one of ordinary skill in the art before the effective filing date to implement the features of detecting a geo-denied state of an automated vehicle and providing a motion estimator for estimating motion of the automated vehicle, as disclosed by Ben-Moshe, into Thomas, with the motivation and expected benefit of facilitating navigation of an automated vehicle through an environment without geolocation data. This method for improving Thomas was within the ordinary ability of one of ordinary skill in the art based on the teachings of Ben-Moshe. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Thomas and Ben-Moshe to obtain the invention as specified in claim 1.

Regarding claim 14, the claim recites limitations found within claim 1, and is rejected under the same rationale applied to the rejection of claim 1.

Regarding claim 2, Thomas, in view of Ben-Moshe, teaches all the limitations of the parent claim 1 as shown above. Thomas discloses generating an estimated motion of the automated vehicle generated for the first iteration (Thomas: FIGS. 1-2; ¶24 [“…collectively as routes 106) are each associated with (e.g., prescribe) a sequence of actions (also known as a trajectory) connecting states along which an AV can navigate. Each route 106 starts at an initial state (e.g., a state that corresponds to a first spatiotemporal location, velocity, and/or the like) and a final goal state (e.g., a state that corresponds to a second spatiotemporal location that is different from the first spatiotemporal location) or goal region (e.g. 
a subspace of acceptable states (e.g., terminal states))…routes 106 include a plurality of acceptable state sequences (e.g., a plurality of spatiotemporal location sequences), the plurality of state sequences associated with (e.g., defining) a plurality of trajectories.”]) and generating an operating instruction for the automated vehicle using the estimated location and the estimated motion generated for the first iteration (Thomas: FIGS. 3-4; ¶¶52-57 [“…device 300 performs one or more processes described herein. Device 300 performs these processes based on processor 304 executing software instructions stored by a computer-readable medium…Referring now to FIG. 4, illustrated is an example block diagram of an autonomous vehicle compute 400 (sometimes referred to as an "AV stack"…perception system 402, planning system 404, localization system 406, control system 408, and database 410 are included in one or more standalone systems (e.g., one or more systems that are the same as or similar to autonomous vehicle compute 400 and/or the like). In some examples, perception system 402, planning system 404, localization system 406, control system 408, and database 410 are included in one or more standalone systems that are located in a vehicle and/or at least one remote system as described herein. In some embodiments, any and/or all of the systems included in autonomous vehicle compute 400 are implemented in software (e.g., in software instructions stored in memory)…”]).

Regarding claim 15, the claim recites limitations found within claim 2, and is rejected under the same rationale applied to the rejection of claim 2.

Regarding claim 3, Thomas, in view of Ben-Moshe, teaches all the limitations of the parent claim 1 as shown above. 
Thomas additionally discloses during execution of a later iteration of the localization loop, updating, by the processor, the estimated location of the automated vehicle for the later iteration by applying the map localizer on the estimated motion for an earlier iteration and the LiDAR data of the sensor data obtained for the later iteration (Thomas: [“…embodiments, systems, methods, and computer program products described herein include and/or implement localization functional safety. A vehicle (such as an autonomous vehicle) establishes its position and orientation through localization. Localization is based on a representation of the environment, wherein the vehicle interprets the representation of the environment and other data to determine its position and orientation. Source point cloud data is iteratively processed to calculate a transformation between the point cloud and a map. A rotation matrix and transformation vector are used to calculate a pose of the vehicle based on a transform calculated from the iterative localization. Localization also computes a first metric associated with the calculated pose.”] {Each “localization” necessarily comprises a “loop,” as recited in claim 1, with a first iteration of the localization taught by Thomas analogous to the “first iteration of the localization loop” recited in the claim.}; FIG. 6A; ¶¶74-75 [“The LiDAR map prior 616 is stored in a database, such as the database 410 (FIG. 4). In examples, the LiDAR localization 612 determines the position of the vehicle in the environment based on a comparison between a point cloud (e.g., LiDAR point cloud 614) and a map (e.g., LiDAR map prior 616). In some embodiments, the LiDAR map prior 616 includes a combined point cloud of the environment, where the combined point cloud is a combination of multiple point clouds associated with the environment.…the localization function 612 outputs a LiDAR pose 618. In the example of FIG. 
6A, the localization function 612 operates according to iterative closest point (ICP) localization at an ASIL-B level. An iterative closest point process uses LiDAR sensor data (e.g., LiDAR point cloud 614) and a map (e.g., LiDAR map prior 616) to calculate the pose that minimizes the squared error between point clouds. ICP localization minimizes a difference between two point clouds. The iterative closest point process is used to reconstruct a 2D or 3D environment in which to localize the AV and achieve optimal path planning. In some embodiments, a transformation is calculated that aligns two point clouds. At each iteration, a correspondence between the source and target point clouds is updated, and the transformation that best aligns them is iteratively determined until convergence is attained.”]).

Regarding claim 4, Thomas, in view of Ben-Moshe, teaches all the limitations of the parent claim 3 as shown above. Thomas additionally discloses identifying, by the processor, an outlier measurement based on the sensor data of the later iteration exceeding an error measurement threshold, thereby detecting an error in an output of the map localizer of the localization loop (Thomas: FIG. 7; ¶¶103-104 [“…a deviation between the first metric and the second metric is determined. In some embodiments, the vehicle localization is validated when the deviation is less than a predetermined threshold. In embodiments, the deviation between the first metric and the second metric is monitored, and a deviation that exceeds the predetermined threshold is indicative of a malfunction. In embodiments, a malfunction is a hardware failure (e.g., system, sensors or devices), fault, or software failure…a plausibility of vehicle localization is determined prior to iteratively processing the point cloud data to determine the localization transformation. The plausibility refers to a series of plausibility checks wherein a range, rate, and time duration are evaluated in view of predetermined thresholds. In some examples, the checker provides thresholds to conduct a plausibility check of the range of each localization output for each localization cycle. The following thresholds can be evaluated independently or in combination to determine the plausibility of the vehicle localization…a first threshold provides that a translational distance between poses shall not exceed a distance that is a function of a first predetermined value, such as velocity with a calibration buffer according to a checker sample time…”]).

Regarding claim 16, the claim recites limitations found within claims 3 and 4, and is rejected under the same rationale applied to the rejection of claims 3 and 4.

Regarding claim 5, Thomas, in view of Ben-Moshe, teaches all the limitations of the parent claim 3 as shown above. Thomas additionally discloses that detecting the outlier measurement includes determining, by the processor, the outlier measurement based upon a time threshold (Thomas: FIG. 7; ¶¶103-104 {See above.}).

Regarding claim 6, Thomas, in view of Ben-Moshe, teaches all the limitations of the parent claim 1 as shown above. Ben-Moshe discloses a motion estimator for estimating motion of the automated vehicle (Ben-Moshe: FIGS. 5-7, 11B, pg. 40, ln 6-9; FIGS. 5-7, 11B, pg. 47, ln 30-31 {See above.}). Thomas additionally discloses, during execution of a later iteration of the localization loop, updating, by the processor, the estimated motion of the automated vehicle for the later iteration by applying the map estimator on the estimated location generated by the motion estimator for the later iteration and the sensor data obtained for the later iteration (Thomas: FIGS. 1-2, 6A-6B, 7; ¶19, FIG. 4; ¶¶57-61, FIG. 6A; ¶¶74-75 {See above.}). 
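The first threshold quoted from Thomas's ¶104 (the translational distance between consecutive poses must not exceed a velocity-dependent bound with a calibration buffer) reduces to a one-line check. The interface and buffer value below are assumptions for illustration, not taken from either reference:

```python
import math

# Sketch of the pose-plausibility check described in the quoted ¶104:
# flag a localization output whose jump from the previous pose exceeds
# what the measured velocity could explain over one checker cycle.

def pose_jump_plausible(prev_xy, cur_xy, velocity, dt, buffer=0.5):
    """Return True if the translational jump between poses is
    physically explainable by the vehicle's velocity over dt seconds,
    allowing a small calibration buffer (assumed 0.5 m here)."""
    jump = math.dist(prev_xy, cur_xy)
    return jump <= velocity * dt + buffer
```

For example, at 10 m/s over a 0.1 s checker cycle, a 0.6 m jump falls within the 1.0 m + buffer bound and passes, while a 25 m jump fails and would be flagged as an outlier measurement.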
It would have been obvious to one of ordinary skill in the art before the effective filing date to implement the features of during execution of a later iteration of the localization loop, updating the estimated motion of the automated vehicle for the later iteration by applying the map estimator on the estimated location generated by the motion estimator for the later iteration and the sensor data obtained for the later iteration, disclosed by Thomas and Ben-Moshe, into Thomas, as modified by Ben-Moshe, with the motivation and expected benefit of facilitating navigation of an automated vehicle through an environment without the geolocation data. This method for improving Thomas, as modified by Ben-Moshe, was within the ordinary ability of one of ordinary skill in the art based on the teachings of Thomas and Ben-Moshe. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Thomas and Ben-Moshe to obtain the invention as specified in claim 6.

Regarding claim 7, Thomas, in view of Ben-Moshe, teaches all the limitations of the parent claim 6 as shown above. Thomas discloses identifying an outlier measurement based on the sensor data for the later iteration exceeding an error measurement threshold, thereby detecting an error in an output of the motion estimator of the localization loop (Thomas: FIG. 7; ¶¶103-104 {See above.}).

Regarding claim 17, the claim recites limitations found within claims 6 and 7, and is rejected under the same rationale applied to the rejection of claims 6 and 7.

Regarding claim 8, Thomas, in view of Ben-Moshe, teaches all the limitations of the parent claim 1 as shown above. Thomas discloses responsive to detecting an error in at least one of the input or the output of the map localizer, applying, by the processor, a covariance boosting value on the output of the map localizer (Thomas: FIG. 7; ¶¶103-104 {See above.}; FIG. 6A; ¶82 [“…the first error metric and the second error metric are a transformation or pose sum of squares. In embodiments, the first metric is a covariance of the transform as applied to the iteratively processed point cloud and map, and the second metric is a covariance of the transform as applied to the source point cloud and the map point cloud. As used herein, the source point cloud is the point cloud provided as input to the localization function and checker. In some embodiments, the checker determines if the first error metric as calculated by the localization function 612 matches the second error metric as calculated by the LIDAR checker 630. The results should match precisely and be below a predefined accuracy threshold.”]).

Regarding claim 18, the claim recites limitations found within claim 8, and is rejected under the same rationale applied to the rejection of claim 8.

Regarding claim 9, Thomas, in view of Ben-Moshe, teaches all the limitations of the parent claim 1 as shown above. Thomas discloses responsive to detecting an error in the map localizer, obtaining, by the processor, stored sensor data stored in a buffer memory (Thomas: FIGS. 6A-6B; ¶¶79-85 [“…the checker (e.g., checker 630 of FIG. 6A) determines one or more thresholds to check the error between the scan matching sum of squared error metric from the iterative closest point localization (e.g., localization 612 of FIG. 6A) and a sum of squared error metric from the checker. A flag is set by the checker when the threshold is exceeded. In some embodiments, the flag is transmitted to an AV monitoring system for further processing. In examples, an AV monitoring system is enabled by the autonomous system 202 (FIG. 2).
The AV monitoring system observes the status of the AV and can issue alerts or messages associated with the AV status…A LiDAR checker 630 verifies the accuracy of localization output from the localization function 612 by comparing it with a second error metric calculated by the LiDAR checker 630. In some embodiments, the LiDAR checker 630 enables a doer-checker process that is designed to satisfy ISO 26262-related functional safety requirements by performing software/hardware fault checking via the sampling of a set of outputs from the "doer" process (e.g., localization 612, 622) and evaluating that the checker 630 can reproduce key metrics/components of the "doer." The LiDAR checker 630 receives the same input as the localization function 612…the localization function 612 receives as input the LiDAR point cloud 614 and the LiDAR map prior 616. The LiDAR checker 630 also receives as input the transformation or pose sum of squares output by the localization function 612. The LiDAR checker 630 outputs a pass/fail evaluation 635 of the localization function output…iterative closest point localization (e.g., localization 612) outputs a pose that is validated by LiDAR checker 630 to establish ASIL B(D) quality. The LiDAR checker 630 executes a scan matching checker on hardware of ASIL B(D) quality. The LiDAR checker 630 takes as input a LiDAR map prior 616, a LiDAR point cloud 614, as well as a pose output by the localization 612. The residual sum of squares error, covariance, or any combinations thereof are calculated by the LiDAR checker using a modified iterative closest point algorithm and then compared to the output of an original iterative closest point algorithm executing on hardware at a QM(D) quality.”] {Including Eqns. 1-3.}).

Regarding claim 11, Thomas, in view of Ben-Moshe, teaches all the limitations of the parent claim 1 as shown above. Thomas discloses responsive to detecting an error in at least one of the motion estimator or the map localizer, halting, by the processor, execution of the localization loop (Thomas: FIG. 7; ¶¶100-103 [“…a localization transformation is generated. In some embodiments, a first metric associated with vehicle localization based on the transform is calculated by the localization function and used for comparison by the corresponding checker function…the input to the checker is a secondary map, secondary point cloud, as well as the secondary pose from the original ICP…a deviation between the first metric and the second metric is determined. In some embodiments, the vehicle localization is validated when the deviation is less than a predetermined threshold. In embodiments, the deviation between the first metric and the second metric is monitored, and a deviation that exceeds the predetermined threshold is indicative of a malfunction. In embodiments, a malfunction is a hardware failure (e.g., system, sensors or devices), fault, or software failure (e.g., systematic software failure). In some embodiments, actions are taken in response to the malfunction. For example, the vehicle safely halts navigation, such as by navigating to a location and preventing further navigation…”] {Including Eqns. 1-3.}).

Regarding claim 19, the claim recites limitations found within claim 11, and is rejected under the same rationale applied to the rejection of claim 11.

Regarding claim 12, Thomas, in view of Ben-Moshe, teaches all the limitations of the parent claim 1 as shown above. Thomas additionally discloses detecting the geo-denied state includes determining, by the processor, that a time-expiration threshold elapsed for receiving updated geolocation data from a geolocation service (Thomas: FIG. 6A; ¶82, FIG. 7; ¶¶103-104 {See above.}).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S.
Patent Publication 2023/0204364 A1, to Nachstedt et al., discloses ascertaining a starting position of a vehicle for a localization of the vehicle using a control unit. U.S. Patent Publication 2022/0163616 A1, to Rappaport et al., is directed to real-time imaging of an environment and position location of a mobile or portable (e.g., moveable or attachable or handheld) device, with the assistance of one or more additional wireless devices, which may include one or more portable devices, base stations (BS), or Wi-Fi hotspots. U.S. Patent Publication 2016/0227366 A1, to Georgy et al., is directed to integrated navigation integrating wireless measurements including at least angle of arrival (AOA) measurements with a navigation solution.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEFFREY P AIELLO whose telephone number is (303) 297-4216. The examiner can normally be reached 8 AM - 4:30 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shelby Turner, can be reached at (571) 272-6334. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEFFREY P AIELLO/
Primary Examiner, Art Unit 2857
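For readers unfamiliar with the iterative closest point technique that both the claims and the quoted Thomas passages rely on, a minimal translation-only 2D sketch follows. This is an illustrative toy under stated simplifications, not code from either reference; practical ICP implementations also estimate rotation and use accelerated nearest-neighbor search.

```python
def icp_translation(source, target, iterations=20, tol=1e-9):
    """Toy 2D iterative closest point, translation only: pair each source
    point with its nearest target point, shift by the mean correspondence
    offset, and repeat until the update falls below tol."""
    dx, dy = 0.0, 0.0
    for _ in range(iterations):
        offsets = []
        for sx, sy in source:
            px, py = sx + dx, sy + dy  # apply current transform estimate
            mx, my = min(target, key=lambda t: (t[0] - px) ** 2 + (t[1] - py) ** 2)
            offsets.append((mx - px, my - py))
        step_x = sum(o[0] for o in offsets) / len(offsets)
        step_y = sum(o[1] for o in offsets) / len(offsets)
        dx, dy = dx + step_x, dy + step_y
        if abs(step_x) < tol and abs(step_y) < tol:
            break  # converged
    return dx, dy
```

With a source cloud that is a pure translation of the target, the loop recovers that translation; the residual sum of squared distances after alignment is the kind of error metric that the quoted doer-checker scheme recomputes independently and compares against a predetermined threshold.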

Prosecution Timeline

Jun 13, 2023
Application Filed
Feb 10, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578716
CONFIGURABLE FAULT TREE STRUCTURE FOR ELECTRICAL EQUIPMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12578265
PREPARATION METHOD FOR PREPARING SPECTROMETRIC DETERMINATIONS OF AT LEAST ONE MEASURAND IN A TARGET APPLICATION
2y 5m to grant Granted Mar 17, 2026
Patent 12571301
DAS Data Processing to Identify Fluid Inflow Locations and Fluid Type
2y 5m to grant Granted Mar 10, 2026
Patent 12571302
Surveillance Using Particulate Tracers
2y 5m to grant Granted Mar 10, 2026
Patent 12564975
DETERMINING A DEVICE LOCATION A BODY PART
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+24.1%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 599 resolved cases by this examiner. Grant probability derived from career allow rate.
