Prosecution Insights
Last updated: April 19, 2026
Application No. 18/086,815

METHOD FOR DETERMINING A RELATIVE MOUNTING POSITION OF A FIRST SENSOR UNIT IN RELATION TO A SECOND SENSOR UNIT ON AN INDUSTRIAL TRUCK

Non-Final OA, §103
Filed: Dec 22, 2022
Examiner: NOEL, JEMPSON
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Jungheinrich Aktiengesellschaft
OA Round: 1 (Non-Final)
Grant Probability: 65% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (grants 65% of resolved cases; 88 granted / 136 resolved; +12.7% vs TC avg)
Interview Lift: +36.2% for resolved cases with an interview (strong)
Avg Prosecution: 3y 3m typical timeline; 42 applications currently pending
Career History: 178 total applications across all art units
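The headline figures above are internally consistent; a quick arithmetic check (my sketch, and the reading of the delta as percentage points is an assumption about the dashboard's metric):

```python
# Figures from the examiner card above.
granted, resolved = 88, 136
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")       # -> 64.7%, shown as 65%

implied_tc_avg = allow_rate - 0.127                 # "+12.7% vs TC avg"
print(f"implied TC average: {implied_tc_avg:.1%}")  # -> 52.0%
```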

Statute-Specific Performance

§101: 0.3% (-39.7% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 15.8% (-24.2% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 136 resolved cases
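Read as percentage-point differences, the per-statute deltas above all imply the same Tech Center baseline. A quick check (my sketch; the metric definition is an assumption):

```python
# (examiner rejection rate %, delta vs TC avg %) per statute, from the table above.
stats = {"101": (0.3, -39.7), "103": (51.5, 11.5),
         "102": (22.8, -17.2), "112": (15.8, -24.2)}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta  # assumes delta = examiner rate - TC average
    print(f"§{statute}: implied TC average {implied_tc_avg:.1f}%")
# Every statute implies the same 40.0% baseline, consistent with a single
# Tech Center average line in the original chart.
```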

Office Action (§103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This is the first Office action on the merits and is responsive to the papers filed 12/22/2022. Claims 1-20 have been examined.

Information Disclosure Statement

The information disclosure statement submitted by Applicant is in compliance with the provisions of 37 CFR 1.97, 1.98 and MPEP § 609. It has been placed in the application file and the information referred to therein has been considered as to the merits.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chung et al. (US 20210103040 A1, "Chung").

Regarding claim 1, Chung teaches a method for determining a relative mounting position of a first sensor unit in relation to a second sensor unit on an industrial truck (paras. 3, 22, 52), comprising:

placing the industrial truck in a first orientation with respect to a planar structure (Chung teaches collecting point clouds in real environments and using planes derived from surrounding structures, such as the ground, walls, and pillars, as calibration features ([0043], [0057]). Using such planes necessarily corresponds to the platform having some pose/orientation relative to the planar structure at the time of capture, i.e., a "first orientation");

wherein the industrial truck has a length direction, a width direction, and a height direction (Chung discloses a vehicle/robot platform on which multiple 3D LiDAR sensors are mounted ([0052]). Chung further defines sensor coordinate systems and determines a full three-dimensional rigid-body transformation between the sensors, including three rotational components and three translational components tx, ty, tz ([0053]-[0056]). A three-dimensional rigid-body transformation inherently corresponds to three orthogonal spatial directions of the vehicle/platform: longitudinal, lateral/width, and vertical/height);

detecting the planar structure using the first sensor unit and the second sensor unit and determining, for each of the first sensor unit and the second sensor unit, a distance between the respective sensor unit and the planar structure in the first orientation, wherein the first sensor unit and the second sensor unit each have a respective detection field and supply respective sensor data (Chung detects/extracts target planes from each LiDAR's point cloud, represents each plane via parameters including a normal vector and a distance from the origin (plane offset), and uses these parameters in matching ([0028], [0062], [0073]-[0075]). Thus, for each sensor, Chung determines plane parameters that include a distance term for the detected planar structure). Chung explains that the LiDAR sensors may be installed at different locations and can have significantly different points of view/poses, such that the observed planes can differ in size and position even for the same plane ([0018]-[0019], [0070]-[0071], [0096]).
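The per-sensor plane parameters cited in the claim 1 mapping (a unit normal plus a distance from the sensor origin) can be illustrated with a small plane-fitting sketch. This is my own illustrative code under generic assumptions, not code from Chung or the record; the helper name is hypothetical:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns a unit normal n and offset d
    such that n @ p ~= d for points p on the plane."""
    centroid = points.mean(axis=0)
    # The smallest right singular vector of the centered cloud is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    d = n @ centroid
    if d < 0:            # fix the sign so the offset is non-negative
        n, d = -n, -d
    return n, d

# Synthetic wall-like plane x = 2.5 in one sensor's frame, with small noise.
rng = np.random.default_rng(0)
yz = rng.uniform(-1.0, 1.0, size=(200, 2))
pts = np.column_stack([np.full(200, 2.5) + rng.normal(0, 1e-3, 200), yz])

n, d = fit_plane(pts)
# d is this sensor's distance to the planar structure (sensor at its origin).
print(np.round(np.abs(n), 3), round(d, 2))
```

Each sensor unit fitting the same physical wall in its own frame yields its own (n, d) pair, which is the per-sensor distance determination the rejection maps to Chung's plane parameters.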
Chung further teaches that, to obtain a unique solution for the relative transformation between the two LiDAR sensors, "three or more non-parallel planes must exist" ([0085]). However, Chung fails to explicitly teach placing the industrial truck in a second orientation with respect to the planar structure, wherein at least an angle between the longitudinal axis of the industrial truck and the planar structure differs between the first orientation and the second orientation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Chung by reorienting the vehicle relative to the planar structure during data collection in order to obtain additional non-parallel plane constraints and improve the robustness of the calibration solution, since such reorientation predictably increases geometric diversity and satisfies the rank condition required by Chung ([0085]).

Chung teaches detecting the planar structure using the first sensor unit and the second sensor unit and determining, for each of the first sensor unit and the second sensor unit, a respective distance between the respective sensor unit and the planar structure in the second orientation: for each observation of planar structures, Chung detects/extracts planes from point clouds (candidate/target planes) and represents planes using plane parameters including a normal vector and a distance from an origin (plane offset) ([0028]-[0032]). Chung further teaches optimizing extrinsic parameters using a point-to-plane residual that explicitly includes the plane distance term d_i^R (equation 13), thereby showing that the planar structure is detected and a corresponding distance/offset parameter is determined and used for calibration ([0089]). Accordingly, for the additional (second) orientation/observation used to obtain plane constraints, Chung teaches detecting the planar structure using both sensors' point clouds and determining a respective distance/offset to the planar structure for each sensor via the extracted plane parameters and the associated distance term ([0089], "... to calculate the variance of the measurement points for each of the reference plane and the corresponding plane.").

Chung also teaches deriving the offset of the first sensor unit and the second sensor unit with respect to the length direction and the width direction, and the angle between the first sensor unit and the second sensor unit with respect to the same spatial axis: Chung derives the relative transformation between sensor frames as extrinsic parameters including a rotation transformation matrix and a translation vector between {R} and {S} ([0053]-[0056]). The translation vector corresponds to offset components along axes (which would map to the truck's length/width/height axes when the sensor frames are defined relative to the truck), and the rotation corresponds to the angle/orientation between the sensors about one or more axes ([0054]-[0056]; Fig. 1A).

Regarding claim 2, Chung teaches the method of claim 1, wherein one of the first sensor unit and the second sensor unit is a laser scanner and the other of the first sensor unit and the second sensor unit is a three-dimensional (3D) area sensor (Chung teaches multiple 3D LiDAR sensors; a LiDAR is a laser-scanning ranging sensor and thus reads on "laser scanner," and a 3D LiDAR provides 3D sensing data, i.e., a 3D area/depth sensor ([0024], [0052])).
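The "three or more non-parallel planes" requirement quoted from Chung [0085] in the claim 1 rejection is a rank condition on the stacked plane normals; the vehicle reorientation rationale works because it adds normals in new directions. A minimal illustrative check (my sketch, hypothetical helper, not from the record):

```python
import numpy as np

def constraints_sufficient(normals):
    """The extrinsic translation is uniquely constrained only if the
    observed plane normals span all three spatial directions (rank 3)."""
    return np.linalg.matrix_rank(np.asarray(normals, dtype=float)) >= 3

# Two parallel walls plus the ground: normals span only two directions.
print(constraints_sufficient([[1, 0, 0], [1, 0, 0], [0, 0, 1]]))  # False
# Ground, a wall, and a second non-parallel wall: rank 3, unique solution.
print(constraints_sufficient([[0, 0, 1], [1, 0, 0], [0, 1, 0]]))  # True
```

Reorienting the truck relative to a single wall turns one normal direction into several distinct ones, which is the geometric-diversity point the obviousness rationale relies on.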
Regarding claim 3, Chung teaches the method of claim 1, wherein the sensor data supplied by the first sensor unit and the second sensor unit is transformed and mapped into the same coordinate system (Chung explicitly transforms data between sensor coordinate systems using extrinsic parameters: it defines a reference frame {R} and a calibration sensor frame {S} and determines the rotation/translation between them so that the point clouds match ([0053]-[0056], [0100]-[0101]; Figs. 1A-1B)).

Regarding claim 4, Chung teaches the method of claim 1, further comprising performing multiple times, in at least one further orientation, the steps of: placing the industrial truck in a second orientation with respect to the planar structure, wherein at least an angle between the longitudinal axis of the industrial truck and the planar structure differs between the first orientation and the second orientation; and detecting the planar structure using the first sensor unit and the second sensor unit and determining, for each of the first sensor unit and the second sensor unit, a respective distance between the respective sensor unit and the planar structure in the second orientation. Chung discloses iterative/repetitive plane matching and repeated calculation until convergence of the matching error ([0039], [0086]-[0087], [0091]). This corresponds to performing the calibration using multiple observations/poses/orientations (further orientations) until convergence.

Regarding claim 5, Chung teaches the method of claim 1, wherein the step of deriving comprises a calculation (Chung derives the extrinsic parameters via explicit calculations: Kabsch/SVD, least squares, and Levenberg-Marquardt ([0035]-[0038], [0081]-[0085], [0091]-[0094]; Fig. 5)).

Regarding claim 6, Chung teaches the method of claim 4, wherein the calculation comprises performing a quadratic optimisation function (Chung minimizes squared-error cost functions (sums of squared residuals), including least squares and nonlinear least squares ([0040]-[0042], [0084]-[0085], [0088]-[0094])).

Regarding claim 7, Chung teaches the method of claim 1, wherein, in one or more of the first orientation, the second orientation, or at least one further orientation, the planar structure lies at least partially within an overlapping area of the detection fields of the first sensor unit and the second sensor unit (Fig. 1A).

Regarding claim 8, Chung teaches the method of claim 1, wherein the planar structure is formed substantially vertically (Chung uses planes from walls/pillars and indoor/outdoor structures; walls and pillars are vertical planar structures ([0043], [0057])).

Regarding claim 9, Chung teaches the method of claim 1, wherein the planar structure comprises at least two surface portions arranged at an angle to one another (Chung's disclosure relies on multiple non-parallel planes, i.e., at least three non-parallel planes for a unique solution ([0085])).

Claim 10 recites a system with a data processing unit configured to perform essentially the same operations as claim 1. Chung discloses an autonomous navigation system/robot platform with multiple LiDARs and a method performed computationally: collecting point clouds, extracting planes, matching planes, computing rotation/translation, and optimizing parameters, i.e., operations performed by a processing unit implementing the algorithm ([0024]-[0026], [0057]-[0063], [0086]-[0088]). Thus, claim 10 is obvious for the same reasons as claim 1.

Claims 11-13, 14, 15, and 16 are system claims corresponding to method claims 2-4, 20, 7, and 9, respectively. They are rejected for the same reasons.

Regarding claim 17, Chung teaches the method of claim 1, further comprising determining a first angle between the first sensor unit and the planar structure and a second angle between the second sensor unit and the planar structure (Chung uses plane normals and explicitly computes an angle between planes as a similarity index.
The angle between a sensor-observed plane and a reference plane can be determined from the normal vector in that sensor's frame; thus, for each sensor, the angular relationship to the plane is determined or derivable ([0028]-[0032], [0073]-[0075])).

Regarding claim 18, Chung teaches the method of claim 1, wherein the same spatial axis comprises the height direction (Chung's extrinsic parameters include 3D rotation/translation components; the translation includes a vertical component (tz) and the rotations include pitch/roll, which correspond to height-axis relationships when the axes are aligned to the vehicle ([0054]-[0056], [0090])).

Regarding claim 19, Chung fails to explicitly teach the method of claim 2, wherein the 3D area sensor comprises a time-of-flight sensor. Chung discloses a 3D LiDAR sensor mounted on a vehicle platform for measuring distances and generating environmental point clouds in an autonomous navigation system (Chung [0004], [0052]), but does not explicitly teach that the LiDAR sensor operates using a time-of-flight ranging mechanism. It would have been obvious to one of ordinary skill in the art to implement Chung's 3D LiDAR sensor using a time-of-flight architecture, because vehicle-mounted navigation requires accurate long-range distance measurement and robust outdoor operation, for which time-of-flight ranging is the more suitable approach.

Regarding claim 20, Chung teaches the method of claim 5, wherein the calculation comprises an averaging, a geometric reconstruction, or an execution of an optimisation function (Chung performs geometric reconstruction via plane extraction/plane parameters and executes optimization (least squares; Levenberg-Marquardt) to solve for the extrinsic parameters ([0058]-[0066], [0088]-[0094])).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Castaneda et al. (US 20110166721 A1) teaches object tracking and steer maneuvers for materials handling vehicles. Shuqing Zeng (US 20100191391 A1) teaches a multi-object fusion module for a collision preparation system.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEMPSON NOEL, whose telephone number is (571) 272-3376. The examiner can normally be reached Monday-Friday, 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Yuqing Xiao, can be reached at (571) 270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEMPSON NOEL/
Examiner, Art Unit 3645

/YUQING XIAO/
Supervisory Patent Examiner, Art Unit 3645
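For context on the calculation steps the rejection repeatedly cites (Kabsch/SVD for rotation and least squares for translation, claims 5-6 and 20), here is an illustrative sketch of plane-based extrinsic recovery. It is my reconstruction of the generic technique under simplifying assumptions (matched, exact planes), not code from Chung or the record, and the function name is hypothetical:

```python
import numpy as np

def extrinsics_from_planes(planes_R, planes_S):
    """Recover rotation R and translation t mapping sensor-frame points into
    the reference frame (p_R = R @ p_S + t) from matched planes.
    Each plane is (unit normal n, offset d) with n @ p = d."""
    nR = np.array([n for n, _ in planes_R])
    nS = np.array([n for n, _ in planes_S])
    dR = np.array([d for _, d in planes_R])
    dS = np.array([d for _, d in planes_S])

    # Kabsch/SVD: rotation aligning sensor-frame normals onto reference normals.
    H = nS.T @ nR
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T

    # Plane offsets give linear constraints on t: (R @ nS_i) @ t = dR_i - dS_i.
    # The solution is unique only with >= 3 non-parallel planes (rank condition).
    t, *_ = np.linalg.lstsq(nS @ R.T, dR - dS, rcond=None)
    return R, t

# Synthetic ground truth: a 10-degree yaw plus a small mounting offset.
th = np.deg2rad(10)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.4, -0.2, 0.1])

# Three non-parallel planes seen in the reference frame.
planes_R = [(np.array([1.0, 0, 0]), 2.0),
            (np.array([0, 1.0, 0]), 1.5),
            (np.array([0, 0, 1.0]), 0.3)]
# The same planes expressed in the sensor frame: n_S = R^T n_R, d_S = d_R - n_R @ t.
planes_S = [(R_true.T @ n, d - n @ t_true) for n, d in planes_R]

R_est, t_est = extrinsics_from_planes(planes_R, planes_S)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

With exact, non-degenerate input the closed-form solve recovers the ground-truth transform; real pipelines like Chung's then refine over noisy point-to-plane residuals (e.g., Levenberg-Marquardt) rather than stopping at this linear step.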

Prosecution Timeline

Dec 22, 2022: Application Filed
Mar 02, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601814: OPTOELECTRONIC DEVICE AND LIDAR SYSTEM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591062: LIDAR DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591044: LIGHT PROJECTION APPARATUS AND MOVING BODY (granted Mar 31, 2026; 2y 5m to grant)
Patent 12560691: PHOTONIC INTEGRATED CIRCUIT, LIGHT DETECTION AND RANGING SYSTEM AND VEHICLE HAVING THE SAME (granted Feb 24, 2026; 2y 5m to grant)
Patent 12541007: SPATIAL LIGHT MODULATOR AND LIDAR APPARATUS INCLUDING THE SAME (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 65%
With Interview: 99% (+36.2%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
