Prosecution Insights
Last updated: April 19, 2026
Application No. 18/191,656

ENHANCED POSITIONING SYSTEM AND METHODS THEREOF

Final Rejection (§103, §112)

Filed: Mar 28, 2023
Examiner: KOSSEK, MAGDALENA IZABELLA
Art Unit: 2117
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Advanced Theodolite Technology Inc. d/b/a ATT Metrology Solutions
OA Round: 2 (Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (5 granted / 7 resolved), +16.4% vs TC avg (above average)
Interview Lift: +40.0% across resolved cases with interview (strong)
Typical Timeline: 3y 5m avg prosecution; 27 applications currently pending
Career History: 34 total applications across all art units

Statute-Specific Performance

§101: 13.5% (-26.5% vs TC avg)
§103: 37.5% (-2.5% vs TC avg)
§102: 24.0% (-16.0% vs TC avg)
§112: 19.8% (-20.2% vs TC avg)

Tech Center averages are estimates; based on career data from 7 resolved cases.
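The figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch recomputing the headline numbers from the stated counts (the 55.0% Tech Center average is inferred from the stated +16.4% delta, and the function names are illustrative, not any real analytics API):

```python
# Illustrative recomputation of the dashboard's examiner statistics.
# Counts (5 granted of 7 resolved) come from the page; the 55.0% Tech Center
# average is inferred from the stated +16.4% delta, not an official figure.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

def delta_vs_avg(rate: float, tc_avg: float) -> float:
    """Signed difference between the examiner's rate and the TC average."""
    return rate - tc_avg

rate = allow_rate(5, 7)            # about 71.4%
delta = delta_vs_avg(rate, 55.0)   # about +16.4 points vs the inferred TC avg
```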

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is made final.

Claims 1-15 filed on 01/20/2026 have been reviewed and considered in this Office action. Claims 1, 2, 4, 6, 8, 9, and 11-13 have been amended.

Information Disclosure Statement

The information disclosure statement filed on 01/20/2026 has been reviewed and considered in this Office action.

Drawings

The drawings filed on 01/20/2026 have been reviewed and are considered acceptable.

Claim Objections

Claim 6 is objected to because of the following informalities: Amended claim 6 recites “projecting the object positional data stream and the single integrated data into a future time.” Examiner believes this was meant to read “projecting the object positional data stream and the single integrated data stream into a future time.” Appropriate correction is required.

Response to Arguments

Applicant’s amended claims, filed 01/20/2026, have overcome the rejections under 35 U.S.C. § 101. Applicant’s arguments regarding the rejection of claims 1 and 4 under 35 U.S.C. § 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Duan.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 6 recites the limitation “determining an expected offset caused by at least one of calculation latency or machine-implementation latency by projecting the object positional data stream and the single integrated data into a future time,” which is now also included in amended claim 4. It is unclear whether this refers to a different offset or the same offsets determined in claim 4. Examiner interprets this to mean one of the same offsets is determined.

Claims 7-10 are rejected due to their dependency upon rejected claims, for the same reasons as outlined above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Woodside et al. (US 2023/0075352 A1), in view of Duan et al. (Duan, P., Duan, Z., Li, S., & Chen, Y. (2018). “Motion prediction and delay compensation for improved teleoperation of an exoskeletal robot.” Advances in Mechanical Engineering, 10(2), 1687814018760353), hereinafter Duan.

Regarding claim 1, Woodside teaches a system, comprising:

a plurality of sensors, each configured to generate a data stream to produce a plurality of data streams ([0003]: The present disclosure relates to dynamic compensation for errors in the position and orientation of a robot end effector, and more particularly dynamically compensating for errors in the position and orientation of a robot end effector utilizing a kinematic error observer algorithm. Even more specifically, this disclosure relates to using an external high-precision metrology tracking system, such as a laser tracker system, to directly measure robot kinematic errors such that corrections are implemented; [0033]: “the metrology tracking system has two components, the 6 DoF sensor and the laser tracker”);

one or more processors in communication with one or more memory having machine readable instructions stored thereon ([0027]: “The apparatuses/systems and methods described herein can be implemented at least in part by one or more computer program products comprising one or more non-transitory, tangible, computer-readable mediums storing computer programs with instructions that may be performed by one or more processors. The computer programs may include processor executable instructions and/or instructions that may be translated or otherwise interpreted by a processor such that the processor may perform the instructions”) that when executed by the one or more processors are configured to:

integrate the plurality of data streams into a single integrated data stream providing an actual positional data stream for an object under observation ([0033]: “Position and orientation measurements collected by the laser tracker and 6 DoF sensor, respectively, are combined through a proprietary method to create a single measurement of the position and orientation of the 6 DoF sensor, and hence the actual position and orientation of the end effector”);

receive an object positional data stream from the object under observation ([0007]: “The computer is configured to receive the robot measurement signal corresponding to the kinematic position and orientation of the end effector from the robot control system”);

compare the object positional data stream to the single integrated data stream to determine a kinematic offset between an actual position of the object under observation and a programmed position of the object under observation ([0006]: “kinematic error is the difference between the location of the robot's end effector measured by the robot controller referred to as the kinematic location, and the actual location measured by the metrology tracking system. The term 'location', as used in this disclosure, means both position and orientation. The kinematic location is computed from the robot's encoder measurements mapped through the robot's forward kinematic model… When the kinematic location is compared to that of the actual location, provided by the metrology tracking system, these errors can be identified and corrected”);

provide control instructions to the object under observation to account for the kinematic offset and the expected offsets and reposition the object under observation from the actual position to an intended position ([0034]: “At runtime, the robot measurement is matched to the tracker measurement, the matched set of measurements are used to compute a kinematic error measurement, a kinematic error estimate is computed from the kinematic error measurement, and a rounded incremental correction of the end effectors position and orientation are computed from the kinematic error estimate. The incremental correction command is then transmitted to the robot controller where it is used to correct the position and orientation of the robot's end effector,” where the feedback loop implementing the incremental correction is shown in Fig. 1).

Woodside does not explicitly teach wherein at least one of the plurality of sensors comprises an inertial measurement unit (IMU).
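The limitation mapped to Woodside amounts to a measurement-feedback loop: compare the externally measured (integrated) position against the position the object reports, then command an incremental correction. A minimal one-dimensional sketch, where the gain, names, and values are illustrative assumptions rather than anything from either reference:

```python
# Minimal 1-D sketch of kinematic-error feedback correction: the offset between
# the metrology measurement ("actual position") and the robot's reported
# position ("programmed position") drives an incremental correction command.
# The gain and rounding precision are illustrative assumptions.

def kinematic_offset(measured: float, reported: float) -> float:
    """Offset between the actual (metrology) and programmed (encoder) position."""
    return measured - reported

def incremental_correction(offset: float, gain: float = 0.5) -> float:
    """Rounded incremental correction applied during one control iteration."""
    return round(gain * offset, 4)

# One control iteration: the robot reports 10.000 mm; metrology measures 10.050 mm.
offset = kinematic_offset(10.050, 10.000)   # 0.05 mm kinematic offset
command = incremental_correction(offset)    # partial correction this cycle
```

Applying only a gain-scaled fraction of the offset per cycle mirrors the incremental, iterative character of the correction loop rather than a one-shot jump.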
Also, while Woodside teaches determining an expected temporal offset ([0045]: “Find the average relative delay, E(δ_r), by measuring the average temporal offset from the plot”), Woodside does not explicitly teach “determine expected offsets based on calculation and machine implementation latency wherein determining the expected offsets comprises projecting at least one of the object positional data stream and the single integrated data stream into a future time corresponding to implementation of control instructions.”

Duan further teaches wherein at least one of the plurality of sensors comprises an inertial measurement unit (IMU) (Page 2, The teleoperation system: “The developed motion capture system based on IMUs is shown in Figure 2”), and determine expected offsets based on calculation and machine implementation latency wherein determining the expected offsets comprises projecting at least one of the object positional data stream and the single integrated data stream into a future time corresponding to implementation of control instructions (Pages 5-6, Delay compensation: “As the delay time kΔt exists, the predicted human joint angle position ŷ(t + kΔt) at the time of kΔt ahead is sent to the actuator to achieve compensation in each controlling cycle as demonstrated in Figure 8”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adapt the system of Woodside to incorporate the teachings of Duan so as to include at least one of the plurality of sensors comprising an inertial measurement unit (IMU) and determining expected offsets based on calculation and machine implementation latency wherein determining the expected offsets comprises projecting at least one of the object positional data stream and the single integrated data stream into a future time corresponding to implementation of control instructions. Doing so would allow compensation for latency with the aim of minimizing tracking delay (Abstract: “To teleoperate a robot in real-time with specific human motion such as walking, prediction and compensation algorithms are needed to minimize the tracking delay”).

Regarding claim 3, Woodside in view of Duan teaches the system of claim 1. Woodside further teaches wherein the one or more processors in communication with one or more memory having machine readable instructions stored thereon that when executed by the one or more processors are further configured to: provide a kinematic feedback loop to the object under observation to correct the actual position of the object under observation to provide for positional precision and accuracy of the object under observation in real time ([0034]: “At runtime, the robot measurement is matched to the tracker measurement, the matched set of measurements are used to compute a kinematic error measurement, a kinematic error estimate is computed from the kinematic error measurement, and a rounded incremental correction of the end effectors position and orientation are computed from the kinematic error estimate. The incremental correction command is then transmitted to the robot controller where it is used to correct the position and orientation of the robot's end effector,” where the feedback loop is shown in Fig. 1).

Regarding claim 4, Woodside in view of Duan teaches a method, comprising:

providing a plurality of sensors configured to observe an object under observation ([0003]: The present disclosure relates to dynamic compensation for errors in the position and orientation of a robot end effector, and more particularly dynamically compensating for errors in the position and orientation of a robot end effector utilizing a kinematic error observer algorithm. Even more specifically, this disclosure relates to using an external high-precision metrology tracking system, such as a laser tracker system, to directly measure robot kinematic errors such that corrections are implemented; [0033]: “the metrology tracking system has two components, the 6 DoF sensor and the laser tracker”);

generating a plurality of data streams from each of the plurality of sensors ([0007]: “The tracker generates a tracker measurement signal corresponding to the actual position and orientation of the end effector as the end effector moves toward its desired position and orientation and supplies the tracker measurement signal to a computer”);

integrating the plurality of data streams into a single integrated data stream providing an actual positional data stream for the object under observation ([0033]: “Position and orientation measurements collected by the laser tracker and 6 DoF sensor, respectively, are combined through a proprietary method to create a single measurement of the position and orientation of the 6 DoF sensor, and hence the actual position and orientation of the end effector”);

receiving an object positional data stream from the object under observation ([0007]: “The computer is configured to receive the robot measurement signal corresponding to the kinematic position and orientation of the end effector from the robot control system”);

comparing the object positional data stream to the single integrated data stream to determine a kinematic offset between an actual position of the object under observation and a programmed position of the object under observation ([0006]: “kinematic error is the difference between the location of the robot's end effector measured by the robot controller referred to as the kinematic location, and the actual location measured by the metrology tracking system. The term 'location', as used in this disclosure, means both position and orientation. The kinematic location is computed from the robot's encoder measurements mapped through the robot's forward kinematic model… When the kinematic location is compared to that of the actual location, provided by the metrology tracking system, these errors can be identified and corrected”);

providing control instructions to the object under observation to account for the kinematic offset and the expected offsets and reposition the object under observation from the actual position to an intended position ([0034]: “At runtime, the robot measurement is matched to the tracker measurement, the matched set of measurements are used to compute a kinematic error measurement, a kinematic error estimate is computed from the kinematic error measurement, and a rounded incremental correction of the end effectors position and orientation are computed from the kinematic error estimate. The incremental correction command is then transmitted to the robot controller where it is used to correct the position and orientation of the robot's end effector,” where the feedback loop implementing the incremental correction is shown in Fig. 1).
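The claimed “expected offsets” account for calculation and machine-implementation latency by projecting a positional data stream into a future time, so that a correction aligns with when it is actually implemented. A minimal sketch of such a projection; the linear two-sample predictor and all numbers are illustrative assumptions, not the predictor used by either reference:

```python
# Sketch of projecting a positional data stream into a future time to
# pre-compensate calculation and machine-implementation latency. Linear
# extrapolation from the two most recent samples is an illustrative choice.

def project(samples: list[tuple[float, float]], latency: float) -> float:
    """Predict the position at (latest timestamp + latency) from (t, pos) samples."""
    (t0, p0), (t1, p1) = samples[-2], samples[-1]
    velocity = (p1 - p0) / (t1 - t0)   # finite-difference velocity estimate
    return p1 + velocity * latency     # position expected when the command lands

stream = [(0.00, 1.0), (0.01, 1.2)]    # (time in s, position) at 10 ms spacing
predicted = project(stream, 0.02)      # position expected 20 ms ahead
```

Commanding against the predicted position rather than the last measured one is what lets the correction align in time with its implementation.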
While Woodside teaches determining an expected temporal offset ([0045]: “Find the average relative delay, E(δ_r), by measuring the average temporal offset from the plot”), Woodside does not explicitly teach “determining expected offsets based on calculation and machine implementation latency wherein determining the expected offsets comprises projecting at least one of the object positional data stream and the single integrated data stream into a future time corresponding to implementation of control instructions.”

Duan further teaches determining expected offsets based on calculation and machine implementation latency wherein determining the expected offsets comprises projecting at least one of the object positional data stream and the single integrated data stream into a future time corresponding to implementation of control instructions (Pages 5-6, Delay compensation: “As the delay time kΔt exists, the predicted human joint angle position ŷ(t + kΔt) at the time of kΔt ahead is sent to the actuator to achieve compensation in each controlling cycle as demonstrated in Figure 8”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adapt the system of Woodside to incorporate the teachings of Duan so as to include determining expected offsets based on calculation and machine implementation latency wherein determining the expected offsets comprises projecting at least one of the object positional data stream and the single integrated data stream into a future time corresponding to implementation of control instructions. Doing so would allow compensation for latency with the aim of minimizing tracking delay (Abstract: “To teleoperate a robot in real-time with specific human motion such as walking, prediction and compensation algorithms are needed to minimize the tracking delay”).

Regarding claim 5, Woodside in view of Duan teaches the method of claim 4. Woodside further teaches further comprising providing a kinematic feedback loop to the object under observation to correct the actual position of the object under observation to be the programmed position of the object under observation to provide for positional precision and accuracy of the object under observation ([0034]: “At runtime, the robot measurement is matched to the tracker measurement, the matched set of measurements are used to compute a kinematic error measurement, a kinematic error estimate is computed from the kinematic error measurement, and a rounded incremental correction of the end effectors position and orientation are computed from the kinematic error estimate. The incremental correction command is then transmitted to the robot controller where it is used to correct the position and orientation of the robot's end effector,” where the feedback loop implementing the incremental correction is shown in Fig. 1).

Regarding claim 13, Woodside in view of Duan teaches the method of claim 4. Woodside further teaches wherein the plurality of sensors comprises at least one sensor selected from the group consisting of inertial measurement units (IMUs), laser trackers, laser scanners, cameras, distance systems, probing sensors, accelerometers, and robot encoders ([0033]: “the metrology tracking system has two components, the 6 DoF sensor and the laser tracker… The azimuth and elevation of the beam, determined by the laser tracker's encoders, and the distance of the beam are used to determine the 6 DoF sensor's position,” which corresponds to a distance system; [0032]: “The proprietary trajectory controller utilizes the forward kinematic model of the robot to convert the encoder (joint) measurements into a kinematic position and orientation of its tool flange for use in its control algorithm”).

Regarding claim 14, Woodside in view of Duan teaches the method of claim 4. Woodside further teaches further comprising using forward path projections based on the comparison to provide control instructions to one or more objects within an environment for positional correction ([0006]: “The kinematic location is computed from the robot's encoder measurements mapped through the robot's forward kinematic model, that latter being an idealized nonlinear set of equations relating the position of the robot's joints to the location of its tool flange in Euclidian space”).

Regarding claim 15, Woodside in view of Duan teaches the method of claim 4. Woodside further teaches further comprising rolling calibration of machines by post processing of the comparison to feed into a database for course correction based on a prior travel path of the object ([0048]: “After conversion, the leading measurements, identified by (3) from the steps in [0040] are stored in a lookup table of sufficient size (constructed using a Last in First Out (LIFO) buffer). Now, the effects of the relative time delay, discussed in [0039], are compensated by matching (Step 2.2) the robot measurements to the tracker measurements producing the set of (matched) measurements, (T_r^b[k], T_s^b[k], t_k[k]), for the kth iteration of the Kinematic Error Control System,” where the lookup table corresponds to a database that is used for course correction).

Claims 2 and 6-11 are rejected under 35 U.S.C. 103 as being unpatentable over Woodside et al. (US 2023/0075352 A1), in view of Duan, and in view of Alt et al. (US 2021/0023719 A1).

Regarding claim 2, Woodside in view of Duan teaches the system of claim 1.
Woodside further teaches wherein the one or more processors in communication with one or more memory having machine readable instructions stored thereon that when executed by the one or more processors are further configured to: synchronize the data streams, prior to integrating the plurality of data streams into the single integrated data stream ([0040]: “As mentioned in [0034] the robot and tracker measurements may be unsynchronized. Lack of synchronicity of the measurements will result in both a relative time delay between the two clock signals and jitter in each clock signal's timing. Each of these issues are addressed independently in the algorithmic procedure discussed below”; the positional data stream may be synchronized by matching the lagging and interpolated leading measurements for the kth control iteration by (T_r^b[k], T_s^b[k], t_k[k]) = (T_r^b, T̃, t_r) if τ = 1, or (T̃, T_s^b, t_s) if τ = 0, as supported by [0052]).

While Woodside teaches filtering the kinematic error estimate ([0093]: “To provide a single metric for each increase in the robot's corrected kinematic error, the spatial components of the corrected positional kinematic error were filtered independently using a zero-phase 6th order Butterworth filter with cutoff frequencies ranging between 0.1 Hz and 0.5 Hz”), Woodside and Duan do not explicitly teach “filter at least one of the plurality of data streams.”

Additionally, while Woodside teaches interpolating the data points ([0051]: “Interpolate a leading measurement, T̃, at t̃ from the leading measurement data, T_1 and T_2, corresponding to the timestamps, t_1 and t_2, by T̃ = f_int(T_1, t_1, T_2, t_2, t̃), where f_int: ℝ^(4×4) → ℝ^(4×4) is the homogenous transformation interpolation function defined in the appendix”), Woodside and Duan do not explicitly teach “extrapolate information from at least one of the plurality of data streams.”

Alt further teaches filter at least one of the plurality of data streams ([0132]: “Filtering of vision-based signals: The noise of signals obtained by the computing unit varies, depending on viewing conditions and the conditioning of the IK process. Noise is of special relevance for velocity signals derived from visual measurements. A Kalman filter, or similar, is employed for de-noising and estimation of the correct signal value”; [0135]: “Signals from multiple sources, such as signals from optional velocity encoders, see section 7, are combined using rate adoption, extrapolation and filtering”), and extrapolate information from at least one of the plurality of data streams ([0133]: “The post-processor adapts the rates accordingly using common signal processing techniques like 'extrapolation' or 'hold last sample.' Position signals may be extrapolated linearly, based on the velocity readings”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adapt the system of Woodside in view of Duan to incorporate the teachings of Alt so as to include filtering at least one of the plurality of data streams and extrapolating information from at least one of the plurality of data streams. Doing so would allow robustness against measurement failure with the goal of acquiring accurate data ([0111]: “The described methods according to embodiments are designed to acquire visual data in a redundant fashion. This approach allows for noise reduction by fusion of multiple measurements and for robustness against failure of some measurements. Measurement failures of individual features may occur due to a number of reasons”).

Regarding claim 6, Woodside in view of Duan teaches the method of claim 4.

Woodside further teaches further comprising synchronizing the object positional data stream to the single integrated data stream ([0040]: “As mentioned in [0034] the robot and tracker measurements may be unsynchronized. Lack of synchronicity of the measurements will result in both a relative time delay between the two clock signals and jitter in each clock signal's timing. Each of these issues are addressed independently in the algorithmic procedure discussed below”; the positional data stream may be synchronized by matching the lagging and interpolated leading measurements for the kth control iteration by (T_r^b[k], T_s^b[k], t_k[k]) = (T_r^b, T̃, t_r) if τ = 1, or (T̃, T_s^b, t_s) if τ = 0, as supported by [0052]).

While Woodside teaches determining an expected temporal offset ([0045]: “Find the average relative delay, E(δ_r), by measuring the average temporal offset from the plot”), Woodside does not explicitly teach “determining an expected offset caused by at least one of calculation latency or machine-implementation latency by projecting the object positional data stream and the single integrated data into a future time.”

Additionally, while Woodside teaches using the temporal offset to compute a kinematic error estimate, which is then used to implement a control command (FIG. 5 and [0057]: “The KEC algorithm computes a rounded incremental correction (Step 5) from the kinematic error estimate to be applied to the robot during the timestep of the control iteration”), Woodside does not explicitly teach “using the expected offset in a kinematic feedback loop to correct the actual position of the object under observation so the expected offset aligns in time with an implementation of a control command to the object under observation using the expected offset.”

Also, while Woodside teaches interpolating the data points ([0051]: “Interpolate a leading measurement, T̃, at t̃ from the leading measurement data, T_1 and T_2, corresponding to the timestamps, t_1 and t_2, by T̃ = f_int(T_1, t_1, T_2, t_2, t̃), where f_int: ℝ^(4×4) → ℝ^(4×4) is the homogenous transformation interpolation function defined in the appendix”), Woodside does not explicitly teach “extrapolating data points within at least one of the plurality of data streams to provide additional data points for synchronization and comparison.”

Finally, while Woodside teaches filtering the kinematic error estimate ([0093]: “To provide a single metric for each increase in the robot's corrected kinematic error, the spatial components of the corrected positional kinematic error were filtered independently using a zero-phase 6th order Butterworth filter with cutoff frequencies ranging between 0.1 Hz and 0.5 Hz”), Woodside does not explicitly teach “filtering at least one of the plurality of data streams.”

Duan further teaches determining an expected offset caused by at least one of calculation latency or machine-implementation latency by projecting the object positional data stream and the single integrated data into a future time (Pages 5-6, Delay compensation: “As the delay time kΔt exists, the predicted human joint angle position ŷ(t + kΔt) at the time of kΔt ahead is sent to the actuator to achieve compensation in each controlling cycle as demonstrated in Figure 8”), and using the expected offset in a kinematic feedback loop to correct the actual position of the object under observation so the expected offset aligns in time with an implementation of a control command to the object under observation using the expected offset (Pages 5-6, Delay compensation: “The actual output of the corresponding robot joint angle position y_r(t) is the compensated result”).

Duan does not explicitly teach “extrapolating data points within at least one of the plurality of data streams to provide additional data points for synchronization and comparison, and filtering at least one of the plurality of data streams.”

Alt further teaches extrapolating data points within at least one of the plurality of data streams to provide additional data points for synchronization and comparison ([0133]: “The post-processor adapts the rates accordingly using common signal processing techniques like 'extrapolation' or 'hold last sample.' Position signals may be extrapolated linearly, based on the velocity readings”), and filtering at least one of the plurality of data streams ([0132]: “Filtering of vision-based signals: The noise of signals obtained by the computing unit varies, depending on viewing conditions and the conditioning of the IK process. Noise is of special relevance for velocity signals derived from visual measurements. A Kalman filter, or similar, is employed for de-noising and estimation of the correct signal value”).
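The two Alt-mapped operations, de-noising a data stream with a Kalman filter and linearly extrapolating position from a velocity reading, can be sketched minimally. The 1-D constant-position model and all tuning values below are illustrative assumptions, not anything disclosed in Alt:

```python
# Sketch of the Alt-mapped operations: (1) de-noising a scalar position stream
# with a simple Kalman filter, and (2) linear extrapolation of position from a
# velocity reading. The 1-D model and noise tunings are illustrative only.

def kalman_1d(measurements, q=1e-4, r=1e-2):
    """Filter a noisy scalar stream; returns the smoothed estimates.

    q: process-noise variance; r: measurement-noise variance (assumed values).
    """
    x, p = measurements[0], 1.0        # initial state estimate and covariance
    out = []
    for z in measurements:
        p += q                         # predict: inflate covariance by process noise
        k = p / (p + r)                # Kalman gain
        x += k * (z - x)               # update toward the measurement residual
        p *= (1 - k)                   # shrink covariance after the update
        out.append(x)
    return out

def extrapolate(position, velocity, dt):
    """Extrapolate position linearly from a velocity reading."""
    return position + velocity * dt

smoothed = kalman_1d([1.02, 0.98, 1.01, 0.99, 1.00])   # noisy samples near 1.0
ahead = extrapolate(smoothed[-1], velocity=0.5, dt=0.02)
```

The filter's gain shrinks as the covariance settles, so later measurements perturb the estimate less; extrapolation then supplies data points between or beyond samples for synchronization and comparison.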
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adapt the method of Woodside in view of Duan to incorporate the teachings of Alt so as to include determining an expected offset caused by at least one of calculation latency or machine-implementation latency by projecting the object positional data stream and the single integrated data into a future time, using the expected offset in a kinematic feedback loop to correct the actual position of the object under observation so the expected offset aligns in time with an implementation of a control command to the object under observation using the expected offset, extrapolating data points within at least one of the plurality of data streams to provide additional data points for synchronization and comparison, and filtering at least one of the plurality of data streams. Doing so would allow robustness against measurement failure with the goal of acquiring accurate data ([0111]: “The described methods according to embodiments are designed to acquire visual data in a redundant fashion. This approach allows for noise reduction by fusion of multiple measurements and for robustness against failure of some measurements. Measurement failures of individual features may occur due to a number of reasons”). Regarding claim 7, Woodside in view of Duan and Alt teaches the method of claim 6. Woodside and Duan do not explicitly teach “wherein the filtering comprises using Kalman Filtering to produce the single integrated data stream.” Alt further teaches wherein the filtering comprises using Kalman Filtering to produce the single integrated data stream ([0132]: “Filtering of vision-based signals: The noise of signals obtained by the computing unit varies, depending on viewing conditions and the conditioning of the IK process. Noise is of special relevance for velocity signals derived from visual measurements. 
A Kalman filter, or similar, is employed for de-noising and estimation of the correct signal value”). Regarding claim 8, Woodside in view of Duan and Alt teaches the method of claim 6. Woodside further teaches wherein the generated data streams are related to a position of the object under observation ([0010]: “a laser tracking measuring system, having a 6 DoF sensor carried by the end effector is utilized to determine the actual position and orientation of the end effector as it is moved toward its desired position and orientation with this FIG. 2 illustrating the transformational relationships that are used to define kinematic and measured position and orientation of the 6 DoF sensor with respect to the robot's base frame”). Regarding claim 9, Woodside in view of Duan and Alt teaches the method of claim 6. Woodside further teaches wherein another sensor is positioned on at least one of the plurality of sensors configured for observing the object under observation ([0033]: “The 6 DoF sensor houses several orientation sensors and a retro reflector which are used to measure its orientation and position, respectively. More specifically the position of the 6 DoF sensor is measured by the laser tracker and the orientation of the 6 DoF sensor is measured by the sensor itself and transmitted to the tracker”). Regarding claim 10, Woodside in view of Duan and Alt teaches the method of claim 6. Woodside further teaches wherein the receipt, processing, comparison, and feedback loop are provided in real time to provide an updated kinematic correction to the object under observation to update an object position during use ([0003]: “dynamically compensating for errors in the position and orientation of a robot end effector utilizing a kinematic error observer algorithm,” where dynamic compensation corresponds to real time position updates). Regarding claim 11, Woodside in view of Duan teaches the method of claim 4. 
Woodside further teaches wherein at least one of the single integrated data stream and the object positional data stream is synchronized before being compared ([0040]: “As mentioned in [0034] the robot and tracker measurements may be unsynchronized. Lack of synchronicity of the measurements will result in both a relative time delay between the two clock signals and jitter in each clock signal's timing. Each of these issues are addressed independently in the algorithmic procedure discussed below”; The positional data stream may be synchronized by matching “the lagging and interpolated leading measurements for the kth control iteration by, (T_r^b[k], T_s^b[k], t_k[k]) = (T_r^b, T̃, t_r) if τ = 1, or (T̃, T_s^b, t_s) if τ = 0,” as supported by [0052]). While Woodside teaches interpolating the data points ([0051]: “Interpolate a leading measurement, T̃, at t̃ from the leading measurement data, T₁ and T₂, corresponding to the timestamps, t₁ and t₂, by, T̃ = f_int(T₁, t₁, T₂, t₂, t̃), where f_int: ℝ^(4×4) → ℝ^(4×4) is the homogeneous transformation interpolation function defined in the appendix”), Woodside and Duan do not explicitly teach “wherein at least one of the single integrated data stream and the object positional data stream is extrapolated to provide additional data points for comparison.” Additionally, while Woodside teaches filtering the kinematic error estimate ([0093]: “To provide a single metric for each increase in the robot's corrected kinematic error, the spatial components of the corrected positional kinematic error were filtered independently using a zero-phase 6th order Butterworth filter with cutoff frequencies ranging between 0.1 Hz and 0.5 Hz”), Woodside and Duan do not explicitly teach “wherein at least one of the single integrated data stream and the object positional data stream is filtered.” Alt further teaches wherein at least one of the single integrated data stream and the object positional data stream is extrapolated to provide additional data points for comparison ([0133]: “The post-processor adapts the rates accordingly using common signal processing techniques like 'extrapolation' or 'hold last sample.' Position signals may be extrapolated linearly, based on the velocity readings”), and filtered ([0132]: “Filtering of vision-based signals: The noise of signals obtained by the computing unit varies, depending on viewing conditions and the conditioning of the IK process. Noise is of special relevance for velocity signals derived from visual measurements. A Kalman filter, or similar, is employed for de-noising and estimation of the correct signal value”), before being compared.
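The de-noising step Alt relies on (“A Kalman filter, or similar, is employed for de-noising”) can be illustrated with a minimal scalar filter. The random-walk state model and variance parameters below are assumptions chosen for illustration, not details taken from any cited reference:

```python
def kalman_denoise(measurements, process_var=1e-3, meas_var=1e-1):
    """Minimal 1-D Kalman filter (random-walk state model) that
    de-noises a stream of scalar position measurements."""
    x, p = measurements[0], 1.0          # initial estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + process_var              # predict: uncertainty grows
        k = p / (p + meas_var)           # Kalman gain
        x = x + k * (z - x)              # update toward the measurement
        p = (1.0 - k) * p                # posterior variance shrinks
        estimates.append(x)
    return estimates
```

Because the gain k is strictly less than one, each estimate lies between the prior estimate and the new measurement, which is the smoothing behavior the cited passage attributes to the filter.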
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adapt the method of Woodside in view of Duan to incorporate the teachings of Alt so as to include wherein at least one of the single integrated data stream and the object positional data stream is extrapolated to provide additional data points for comparison and filtered before being compared. Doing so would allow robustness against measurement failure with the goal of acquiring accurate data ([0111]: “The described methods according to embodiments are designed to acquire visual data in a redundant fashion. This approach allows for noise reduction by fusion of multiple measurements and for robustness against failure of some measurements. Measurement failures of individual features may occur due to a number of reasons”). Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Woodside et al. (US 2023/0075352 A1), in view of Duan, and in view of Stuhldreher et al. (US 11,636,382 B1). Regarding claim 12, Woodside in view of Duan teaches the method of claim 4. While Woodside teaches using a processor for processing and comparison of the data streams ([0027-0028]), Woodside and Duan do not explicitly teach “wherein data streams are provided directly into one or more high-speed programmable logic controller (PLC) processors for processing and comparison with the object positional data stream.” Stuhldreher, in the analogous field of robotics, teaches wherein data streams are provided directly into one or more high-speed programmable logic controller (PLC) processors for processing and comparison with the object positional data stream (Col. 2, Lines 61-63: controller 120 may be one or more programmable logic controllers (PLCs)). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adapt the method of Woodside in view of Duan to incorporate the teachings of Stuhldreher so as to include wherein data streams are provided directly into one or more high-speed PLC processors for processing and comparison with the object positional data stream. As set forth in MPEP § 2143 and established by Stuhldreher, there are a number of different methodologies to implement a controller, and choosing a particular method from a finite number of identified, predictable solutions is considered an obvious modification. As Stuhldreher discloses a number of alternatives, including using one or more programmable logic controller (PLC) processors, one of ordinary skill in the art would have been motivated to implement any of these known controllers with Woodside and Duan, as they would all have a predictable result with a reasonable expectation of success.

Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 8,774,950 B2: Compensates for latency by determining a predicted state of the apparatus at a time in the future corresponding to the estimated latency for a control signal to be received and implemented by the apparatus. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Magdalena Kossek whose telephone number is (571)272-5603. The examiner can normally be reached Mon-Fri 9:00-5:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Fennema can be reached on (571)272-2748. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.I.K./Examiner, Art Unit 2117 /ROBERT E FENNEMA/Supervisory Patent Examiner, Art Unit 2117

Prosecution Timeline

Mar 28, 2023
Application Filed
Aug 19, 2025
Non-Final Rejection — §103, §112
Jan 20, 2026
Response Filed
Feb 25, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588661
POULTRY AND LIVESTOCK FEEDING AND MONITORING SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12566424
CUTTING SUPPORT APPARATUS, CUTTING PATTERN GENERATION METHOD, AND CUTTING SYSTEM
2y 5m to grant Granted Mar 03, 2026
Patent 12520760
SYSTEM FOR CONTROLLING OPERATING CLEARANCE BETWEEN A CONCAVE ASSEMBLY AND A CROP PROCESSING ROTOR
2y 5m to grant Granted Jan 13, 2026
Based on this examiner's 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+40.0%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
