Prosecution Insights
Last updated: April 19, 2026
Application No. 18/791,565

SIMULTANEOUS MAP AND DYNAMIC OBJECT RECONSTRUCTION FROM LIDAR

Non-Final OA §103
Filed: Aug 01, 2024
Examiner: GOCO, JOHN PATRICK
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Carnegie Mellon University
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution (typical timeline): 2y 9m
Total Applications (career history): 8 across all art units, 8 currently pending

Statute-Specific Performance

§103: 68.8% (+28.8% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status 1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claim Rejections - 35 USC § 103 2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. 3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. 4. Claim(s) 1-7, 9-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20190266779 A1 (Kulkarni et al, hereinafter Kulkarni) in view of US 20240353234 A1 (Mitrokhin et al, hereinafter Mitrokhin). Regarding claim 1, Kulkarni teaches A method for reconstructing a dynamic scene using LIDAR (Light Detection and Ranging) data, the method comprising: (Par 17 “Various types of data collection technologies or systems can be used, in different combinations, to periodically collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space”) generating, using a LIDAR system implemented on a vehicle point cloud data for an environment including a plurality of objects including static and dynamic objects, wherein the point cloud data comprises a plurality of points in a three-dimensional space; (Par 26 “For example, the LiDAR system can collect two different point clouds at 0.6 seconds apart. The vehicle can be traveling at some speed on a road. Because the vehicle is moving, the two sets of point cloud data may not align. 
The system needs to know if the system is viewing a new object or the same object that has moved a little, relative to the vehicle. The ICP algorithm can be used to roughly align the data points in the two point clouds so the system can identify the same objects in both point clouds”, Par 27 “The above mentioned calculation on how the other object is moving can be relative to how the vehicle with the LiDAR system is moving.”) estimating a position and orientation for one or more objects of the plurality of objects within each of the first and second annotated frames (Par 26 “The ICP algorithm can be used to roughly align the data points in the two point clouds so the system can identify the same objects in both point clouds. The system can calculate the distance between matched points; matched points are corresponding points that exist in the first and second point clouds. The system can then calculate if that other object is moving, how fast, and in which direction. From that information, the vehicle can determine what next steps and actions to process.”) transforming global-referenced coordinates to vehicle-referenced coordinates for each of the one or more objects (Par 41 “Rather than mapping directly from point cloud data to a 2D depth map, point cloud data, i.e., 3D coordinates (x, y, z), are being mapped to a 3D polar depth map, i.e., polar coordinates (r, Θ, Φ). The collected 3D data points, collected by a detection and ranging sensor, such as a LiDAR or RADAR system, can be stored as the 3D polar depth map. These coordinates, for a LiDAR system, are defined as: [0042] x is to the left of the forward/origin position of the LiDAR system [0043] y is the forward depth as compared to the origin at the LiDAR system [0044] z is the upward position as compared to the origin at the LiDAR system [0045] r is the radial line [0046] Θ is the azimuth as measured from the plane containing the x and y directions of the LiDAR system and rotates in a clockwise motion [0047] Φ is the polar angle as measured from the forward position of the LiDAR system.”) transforming, for each of the one or more objects and using the plurality of intermediate frames, respective object-referenced coordinates to vehicle-referenced coordinates (Par 57 “When mapping point cloud data into a 2D depth map, the point cloud data is transformed. Rather than precise coordinates for objects 412 and 414, the depth map translates object location information to corresponding data points in the 2D depth map.”) performing a first optimization to a mesh of the three-dimensional space, wherein, during the first optimization, the mesh of the three-dimensional space is dynamic and respective positions and orientations of the one or more objects are fixed; (Par 29 “Trajectory calculations can use relative motion identified, as an object, such as a vehicle, moves from an identified data point plane to another plane. For example, ICP produces the transforms between two point clouds, which when inversed, generates the trajectory that the vehicle travelled.”) performing a second optimization to the respective positions and orientations of the one or more objects, wherein, during the second optimization, the mesh of the three-dimensional space is fixed and the respective positions of the one or more objects are dynamic (Par 26 “The ICP algorithm can be used to roughly align the data points in the two point clouds so the system can identify the same objects in both point clouds. 
The system can calculate the distance between matched points; matched points are corresponding points that exist in the first and second point clouds. The system can then calculate if that other object is moving, how fast, and in which direction.”) reconstructing the dynamic scene by performing the first and second optimizations until convergence (Par 26 “The ICP algorithm can be used to roughly align the data points in the two point clouds so the system can identify the same objects in both point clouds.” where ICP (iterative closest point) is an algorithm repeated until a condition is met, in this case aligning the data points.) Regarding claim 1, Kulkarni fails to explicitly teach annotating a plurality of frames based on the point cloud data, wherein the annotated frames include a first annotated frame and a second annotated frame, wherein the first and second annotated frames correspond to point cloud data generated at first and second instances of time, respectively. In related endeavor, Mitrokhin teaches annotating a plurality of frames based on the point cloud data, wherein the annotated frames include a first annotated frame and a second annotated frame, wherein the first and second annotated frames correspond to point cloud data generated at first and second instances of time, respectively (Par 58 “For example, a first point cloud represented by the first sensor data 102 (e.g., a first instance of the first sensor data 102) may represent the dynamic object at a first location at a first time, a second point cloud represented by the first sensor data 102 (e.g., a second instance of the first sensor data 102) may represent the dynamic object at a second location at a second time, a third point cloud represented by the first sensor data 102 (e.g., a third instance of the first sensor data 102) may represent the dynamic object at a third location at a third time, and/or so forth. As such, the tracking component 122 may use the annotations associated with the point clouds of the first sensor data 102 to identify the points from the point clouds that are associated with the dynamic object. The tracking component 122 may then use the points from the point clouds to generate a track for the dynamic object over the period of time”) It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Kulkarni to include annotating a plurality of frames based on the point cloud data, wherein the annotated frames include a first annotated frame and a second annotated frame, wherein the first and second annotated frames correspond to point cloud data generated at first and second instances of time, respectively, as taught by Mitrokhin. Doing so would allow an object to be tracked across point clouds (Par 58 “use the annotations associated with the point clouds of the first sensor data 102 to identify the points from the point clouds that are associated with the dynamic object”). Regarding claim 1, Kulkarni fails to explicitly teach generating, using the first and second annotated frames, a plurality of intermediate frames indicative of respective positions and orientations of the one or more objects between first and second instances of time. 
In related endeavor, Mitrokhin teaches generating, using the first and second annotated frames, a plurality of intermediate frames indicative of respective positions and orientations of the one or more objects between first and second instances of time (Par 61 “the mapping component 120 may use interpolation to determine one or more of the locations 502(1)-(4) associated with the dynamic object 208(4). For example, the mapping component 120 may process a first point cloud to determine the second location 502(2) of the dynamic object 208(4) and a second point cloud to determine the fourth location 502(4) of the dynamic object 208(4). The mapping component 120 may then perform interpolation to determine the third location 502(3) of the dynamic object 208(4) using the second location 502(2) and the fourth location 502(4)”, Par 62 “The mapping component 120 may then perform similar processes to track one or more other dynamic objects”) It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Kulkarni to include generating, using the first and second annotated frames, a plurality of intermediate frames indicative of respective positions and orientations of the one or more objects between first and second instances of time as taught by Mitrokhin. Doing so would allow objects to be tracked as they move throughout the environment (Par 62 “the tracking component 122 may perform similar processes to track the dynamic object 208(5) as the dynamic object 208(5) moves throughout the environment”). Regarding claim 2, Kulkarni as modified by Mitrokhin teaches the method of claim 1. Kulkarni further teaches wherein the LIDAR system comprises a rotating LIDAR sensor (Par 39 “Each rotation of the LiDAR sensor, in conjunction with motion, i.e., trajectory between two time points, can create an overlap of point cloud data points, which in turn can indicate movement of the vehicle or of an external object.”) Regarding claim 3, Kulkarni as modified by Mitrokhin teaches the method of claim 2 Kulkarni fails to explicitly teach wherein the first annotated frame comprises point cloud data generated by the LIDAR sensor when pointing in a particular direction at the first instance of time, and wherein the second annotated frame comprises point cloud data generated by the LIDAR sensor when pointing in the particular direction at the second instance of time, wherein the second instance of time is subsequent to the first instance of time. In related endeavor, Mitrokhin teaches wherein the first annotated frame comprises point cloud data generated by the LIDAR sensor when pointing in a particular direction at the first instance of time, and wherein the second annotated frame comprises point cloud data generated by the LIDAR sensor when pointing in the particular direction at the second instance of time, wherein the second instance of time is subsequent to the first instance of time. (Par 45 “since the first sensor(s) is rotating when generating the first sensor data, the points associated with the environment 204 may also be associated with timestamps indicating when the points were generated. 
For example, the first point(s) associated with the static object 208(1) may be associated with a first timestamp(s) indicating a first time(s) that the first point(s) was generated, the second point(s) associated with the static object 208(2) may be associated with a second timestamp(s) indicating a second time(s) that the second point(s) was generated”) It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Kulkarni to include wherein the first annotated frame comprises point cloud data generated by the LIDAR sensor when pointing in a particular direction at the first instance of time, and wherein the second annotated frame comprises point cloud data generated by the LIDAR sensor when pointing in the particular direction at the second instance of time, wherein the second instance of time is subsequent to the first instance of time as taught by Mitrokhin. Doing so would allow the time between frames depicting the same direction to be indicated (Par 45 “the points associated with the environment 204 may also be associated with timestamps indicating when the points were generated”) Regarding claim 4, Kulkarni as modified by Mitrokhin teaches the method of claim 3. Kulkarni fails to explicitly teach wherein each of the plurality of intermediate frames represent estimated positions and orientations of the one or more objects between the first and second instances of time, when the LIDAR sensor is not pointing in the particular direction. In related field of endeavor, Mitrokhin teaches wherein each of the plurality of intermediate frames represent estimated positions and orientations of the one or more objects between the first and second instances of time, when the LIDAR sensor is not pointing in the particular direction (Par 61 “Additionally, or alternatively, in some examples, the mapping component 120 may use interpolation to determine one or more of the locations 502(1)-(4) associated with the dynamic object 208(4). For example, the mapping component 120 may process a first point cloud to determine the second location 502(2) of the dynamic object 208(4) and a second point cloud to determine the fourth location 502(4) of the dynamic object 208(4). The mapping component 120 may then perform interpolation to determine the third location 502(3) of the dynamic object 208(4) using the second location 502(2) and the fourth location 502(4).”) It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Kulkarni to include wherein each of the plurality of intermediate frames represent estimated positions and orientations of the one or more objects between the first and second instances of time, when the LIDAR sensor is not pointing in the particular direction as taught by Mitrokhin. Doing so would allow dynamic objects to be tracked as they move through the environment (Par 62 “may perform similar processes to track the dynamic object 208(5) as the dynamic object 208(5) moves throughout the environment”) Regarding claim 5, Kulkarni as modified by Mitrokhin teaches the method of claim 1. Kulkarni further teaches further comprising generating meshes for one or more moving objects and generating meshes for one or more non-moving objects (Par 50 “The point cloud data can include data points representing different features and artifacts in the geometric space around the vehicle. 
In this example, a landmark detection is represented by a road sign 128, a freespace detection is represented by open space 124, and an object detection is represented by a second vehicle 126. These elements, 124, 126, and 128, are collectively referred to as the geometric space parameters. Building 122 can also be part of the geometric space parameters. In addition, the geometric space parameters include object trajectories and moving objects. These are the parameters of areas and objects around, or partially around, the LiDAR system 105a and are used in further processing systems.”) Regarding claim 6, Kulkarni as modified by Mitrokhin teaches the method of claim 5. Kulkarni further teaches further comprising generating the meshes for the one or more moving objects based on a constant velocity of the moving objects. (Par 50 “The point cloud data can include data points representing different features and artifacts in the geometric space around the vehicle. In this example, a landmark detection is represented by a road sign 128, a freespace detection is represented by open space 124, and an object detection is represented by a second vehicle 126. These elements, 124, 126, and 128, are collectively referred to as the geometric space parameters. Building 122 can also be part of the geometric space parameters. In addition, the geometric space parameters include object trajectories and moving objects. These are the parameters of areas and objects around, or partially around, the LiDAR system 105a and are used in further processing systems.”) Regarding claim 7, Kulkarni as modified by Mitrokhin teaches the method of claim 5. Kulkarni further teaches further comprising determining point-to-mesh registration for the plurality of points using an iterative closest point method to minimize a difference between two different point clouds of the point cloud data (Par 26 “As noted above, another algorithm that can be used to process point cloud data is the ICP. This algorithm can be used to roughly align two point clouds. For example, the LiDAR system can collect two different point clouds at 0.6 seconds apart. The vehicle can be traveling at some speed on a road. Because the vehicle is moving, the two sets of point cloud data may not align. The system needs to know if the system is viewing a new object or the same object that has moved a little, relative to the vehicle. The ICP algorithm can be used to roughly align the data points in the two point clouds so the system can identify the same objects in both point clouds. The system can calculate the distance between matched points; matched points are corresponding points that exist in the first and second point clouds. The system can then calculate if that other object is moving, how fast, and in which direction. From that information, the vehicle can determine what next steps and actions to process.”). Regarding claim 9, Kulkarni as modified by Mitrokhin teaches the method of claim 1. Kulkarni further teaches wherein repeating performing the first and second optimizations until convergence comprises performing the first and second optimizations until an error metric is less than an error threshold (Par 67 “The resulting output of the ICP algorithm is a six degree of freedom transform M that transforms the source points such that the point-plane error between their corresponding target points is a minimum, i.e., the sum of the squared distance between each source point and the tangent plane at its corresponding destination point is a minimum”). 
Regarding claim 10, the system claim 10 is similar in scope to the method claim 1, and is rejected under similar rationale. Regarding claim 11, the system claim 11 is similar in scope to the method claim 2, and is rejected under similar rationale. Regarding claim 12, the system claim 12 is similar in scope to the method claim 3, and is rejected under similar rationale. Regarding claim 13, the system claim 13 is similar in scope to the method claim 4, and is rejected under similar rationale. Regarding claim 14, the system claim 14 is similar in scope to the method claim 5, and is rejected under similar rationale. Regarding claim 15, the system claim 15 is similar in scope to the method claim 6, and is rejected under similar rationale. Regarding claim 16, the system claim 16 is similar in scope to the method claim 7, and is rejected under similar rationale. Regarding claim 18, the non-transitory computer readable medium claim 18 is similar in scope to the method claim 1 and is rejected under similar rationale. Regarding claim 19, the non-transitory computer readable medium claim 19 is similar in scope to the method claim 4 and is rejected under similar rationale. Regarding claim 20, Kulkarni as modified by Mitrokhin teaches the computer readable medium of claim 18. Kulkarni further teaches generate meshes for one or more moving objects and generating meshes for one or more non-moving objects, wherein generating the meshes for the one or more moving objects is based on a constant velocity of the moving objects (Par 50 “The point cloud data can include data points representing different features and artifacts in the geometric space around the vehicle. In this example, a landmark detection is represented by a road sign 128, a freespace detection is represented by open space 124, and an object detection is represented by a second vehicle 126. These elements, 124, 126, and 128, are collectively referred to as the geometric space parameters. Building 122 can also be part of the geometric space parameters. In addition, the geometric space parameters include object trajectories and moving objects. These are the parameters of areas and objects around, or partially around, the LiDAR system 105a and are used in further processing systems.”) and determine point-to-mesh registration for the plurality of points using an iterative closest point method to minimize a difference between two different point clouds of the point cloud data (Par 26 “As noted above, another algorithm that can be used to process point cloud data is the ICP. This algorithm can be used to roughly align two point clouds. For example, the LiDAR system can collect two different point clouds at 0.6 seconds apart. The vehicle can be traveling at some speed on a road. Because the vehicle is moving, the two sets of point cloud data may not align. The system needs to know if the system is viewing a new object or the same object that has moved a little, relative to the vehicle. The ICP algorithm can be used to roughly align the data points in the two point clouds so the system can identify the same objects in both point clouds. The system can calculate the distance between matched points; matched points are corresponding points that exist in the first and second point clouds. The system can then calculate if that other object is moving, how fast, and in which direction. From that information, the vehicle can determine what next steps and actions to process.”). 5. Claims 8 and 17 are rejected under 35 U.S.C. 
103 as being unpatentable over Kulkarni in view of Mitrokhin as applied to claim 1 above, and further in view of LIDAR Data Registration for Unmanned Ground Vehicle Based on Improved ICP Algorithm (Zhongyang Zheng, Yan Li, hereinafter Zheng). Regarding claim 8, Kulkarni as modified by Mitrokhin fail to explicitly teach wherein repeating performing the first and second optimizations until convergence comprises repeating the first and second optimizations for a predetermined number of iterations. In related field of endeavor, Zheng teaches limiting the number of iterations of a convergence optimization to a predetermined number (Sect III, Par 3 “The second experiment is under the condition of not limiting the convergence value and limiting the iteration numbers in 30 times.”) It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to further modify Kulkarni in view of Mitrokhin to include limiting the number of iterations of a convergence optimization to a predetermined number. Doing so would limit the amount of time the convergence optimizations can take. Furthermore, MPEP 2144.05 Section II.A. states “[W]here the general conditions of a claim are disclosed in the prior art, it is not inventive to discover the optimum or workable ranges by routine experimentation.” An optimal number of iterations for an optimization could be discovered through routine experimentation, and applied as the predetermined number of iterations described in claim 8. Regarding claim 17, the system claim 17 is similar in scope to the method claim 8 and is rejected under similar rationale. Conclusion 6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN PATRICK GOCO whose telephone number is (571)272-5872. The examiner can normally be reached M-Th, 7:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JOHN P GOCO/ Examiner, Art Unit 2619 /JASON CHAN/ Supervisory Patent Examiner, Art Unit 2619
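The point-to-plane objective quoted from Kulkarni (Par 67), i.e., the sum of squared distances between each source point and the tangent plane at its corresponding target point, can be pictured with a minimal sketch, assuming correspondences and target-surface normals are already known. The function names below are illustrative and are not taken from either cited reference.

```python
# Minimal point-to-plane ICP sketch (illustrative assumptions, not code from
# Kulkarni or Mitrokhin): correspondences and target-surface normals are given.
import numpy as np

def point_to_plane_error(src, tgt, normals):
    """Sum of squared distances from each source point to the tangent plane
    at its matched target point."""
    residuals = np.einsum("ij,ij->i", src - tgt, normals)
    return float(np.sum(residuals ** 2))

def icp_point_to_plane_step(src, tgt, normals):
    """One linearized 6-DoF update (small-angle rotation approximation)."""
    # Standard linear system J x = -r with x = (rx, ry, rz, tx, ty, tz).
    J = np.hstack([np.cross(src, normals), normals])      # (N, 6)
    r = np.einsum("ij,ij->i", src - tgt, normals)          # (N,)
    x, *_ = np.linalg.lstsq(J, -r, rcond=None)
    rx, ry, rz, t = x[0], x[1], x[2], x[3:]
    R = np.array([[1.0, -rz,  ry],                         # I + [omega]x
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return src @ R.T + t

# Hypothetical usage: a source cloud offset from the target by a small shift.
rng = np.random.default_rng(0)
tgt = rng.normal(size=(200, 3))
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
src = tgt + np.array([0.05, -0.02, 0.10])
for _ in range(10):                 # repeat until the error stops improving
    src = icp_point_to_plane_step(src, tgt, normals)
print(point_to_plane_error(src, tgt, normals))   # approaches zero
```

Iterating the update until the error stops decreasing matches the Office Action's characterization of ICP as "an algorithm repeated until a condition is met."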
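The interpolation Mitrokhin is cited for (Par 61), and the "plurality of intermediate frames" recited in claim 1, can likewise be sketched as constant-velocity interpolation of an object's position and heading between two timestamped annotated frames. The (x, y, z, yaw) pose representation and the function below are assumptions made for illustration, not details from the application or the references.

```python
# Illustrative sketch only: constant-velocity interpolation of an object pose
# between two annotated frames, yielding evenly spaced intermediate poses.
import numpy as np

def interpolate_poses(t0, pose0, t1, pose1, n_intermediate):
    """pose = (x, y, z, yaw in radians); returns poses strictly between t0 and t1."""
    p0, p1 = np.array(pose0[:3], dtype=float), np.array(pose1[:3], dtype=float)
    yaw0, yaw1 = pose0[3], pose1[3]
    # Interpolate yaw along the shortest angular path.
    dyaw = (yaw1 - yaw0 + np.pi) % (2.0 * np.pi) - np.pi
    poses = []
    for k in range(1, n_intermediate + 1):
        alpha = k / (n_intermediate + 1)
        t = t0 + alpha * (t1 - t0)
        p = (1.0 - alpha) * p0 + alpha * p1
        poses.append((t, (*p, yaw0 + alpha * dyaw)))
    return poses

# Hypothetical example: an object annotated at t = 0.0 s and t = 0.6 s
# (the 0.6 s spacing echoes Kulkarni's two-point-cloud example in Par 26),
# with three intermediate frames generated between the two annotations.
for t, pose in interpolate_poses(0.0, (10.0, 2.0, 0.0, 0.00),
                                 0.6, (16.0, 2.5, 0.0, 0.10), 3):
    print(f"t={t:.2f}s  pose=({pose[0]:.2f}, {pose[1]:.2f}, {pose[2]:.2f}, {pose[3]:.3f})")
```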

Prosecution Timeline

Aug 01, 2024: Application Filed
Feb 23, 2026: Non-Final Rejection — §103 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
