Prosecution Insights
Last updated: April 19, 2026
Application No. 18/306,177

COMPRESSION OF LIDAR POINTCLOUD DATA ACQUIRED FROM A LIDAR SENSOR COUPLED TO AN AUTONOMOUS VEHICLE USING A LOSSLESS COMPRESSION ALGORITHM

Final Rejection (§102, §112)
Filed: Apr 24, 2023
Examiner: DICKERSON, CHAD S
Art Unit: 2683
Tech Center: 2600 (Communications)
Assignee: GM Cruise Holdings LLC
OA Round: 2 (Final)
Grant Probability: 63% (Moderate)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 63% (grants 63% of resolved cases; 376 granted / 600 resolved; +0.7% vs TC avg)
Interview Lift: +23.0% (strong; based on resolved cases with an interview)
Avg Prosecution: 2y 9m (typical timeline; 35 applications currently pending)
Total Applications: 635 across all art units (career history)
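
A note on the arithmetic: the displayed figures are mutually consistent if the lift is measured against the overall career rate (the report may instead compare cases with vs. without interviews; the underlying split is not published). A minimal sketch, using a hypothetical with-interview split that reproduces the displayed numbers:

```python
# Career totals are from the report; the with-interview split is hypothetical.
granted, resolved = 376, 600
career_rate = granted / resolved                 # ~62.7%, displayed as 63%

iv_granted, iv_resolved = 172, 200               # assumed split, not reported
rate_with_interview = iv_granted / iv_resolved   # 86.0%

lift = rate_with_interview - career_rate
print(f"interview lift: {lift:+.1%}")            # +23.3%, displayed as +23.0%
```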

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 55.5% (+15.5% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 600 resolved cases.

Office Action

§102 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: COMPRESSION OF LIDAR POINTCLOUD DATA ACQUIRED FROM A LIDAR SENSOR COUPLED TO AN AUTONOMOUS VEHICLE USING A LOSSLESS COMPRESSION ALGORITHM.

Claim Objections

Claims 2, 3, 9, 10, 16 and 17 are objected to because of the following informalities: in claim 2, line 1, the phrase "the group" should be changed to -- a group --; the same issue appears in claims 9 and 16. Claims 3, 10 and 17 are objected to based on their dependency. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9-14 and 16-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 9 recites the limitation "the apparatus" in line 1. There is insufficient antecedent basis for this limitation in the claim. This issue is present in claims 10-14 and 16-20.

Claim 13 recites the limitation "the instructions further cause the processor to" in line 1. There is insufficient antecedent basis for this limitation in the claim. This issue is present in claim 14 as well.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Beek (US Pub. 2019/0051017).

Re claim 1: Beek discloses an apparatus for compressing a point cloud, comprising: a processor (e.g., a processor or CPU that is used for the compression operation, which is taught in ¶ [0103]):

[0103] In various embodiments, the operations discussed herein, e.g., with reference to FIG. 1 et seq., may be implemented as hardware (e.g., logic circuitry or more generally circuitry or circuit), software, firmware, or combinations thereof, which may be provided as a computer program product, e.g., including a tangible (e.g., non-transitory) machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. The machine-readable medium may include a storage device such as those discussed with respect to FIG. 1 et seq.

[0104] Additionally, such computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals provided in a carrier wave or other propagation medium via a communication link (e.g., a bus, a modem, or a network connection).

and a non-transitory computer-readable storage medium coupled to the processor and comprising instructions that, when loaded into the processor and executed (e.g., a tangible machine-readable medium is used to store instructions of the invention to perform the features, which is taught in ¶¶ [0103] and [0104] above), cause the processor to: receive point cloud data from a Light Detection And Ranging (LiDAR) sensor coupled to an Autonomous Vehicle (AV) (e.g., point cloud data is received from a lidar sensor, which is taught in ¶ [0029]; this system is applicable to autonomous driving systems, which is taught in ¶ [0018]):

[0018] As will be further discussed herein (e.g., FIGS. 2-13), logic may be utilized to compress/decompress the LIDAR data to permit more efficient storage utilization as well as reducing the bandwidth requirements/use for transfer/storage of such data. This in turn would enhance safety/efficiency/performance/responsiveness for autonomous driving systems.

[0029] Referring to FIGS. 1-2, operations for compression of lidar sensor data include: a) converting 204 raw distance data from the lidar sensor 202 to X,Y,Z point positions or point cloud data; b) quantizing 206 (or converting) X,Y,Z values from floating point to fixed point representation; c) packing/organizing 206 the X,Y,Z values into one or more two dimensional (2-D or 2D) arrays; and d) applying 208 an existing image compression/encoding method on the data in the 2-D array(s) such as JPEG, JPEG-LS (Joint Photographic Experts Group-Lossless) or PNG (Portable Network Graphics). The compressed data may then be stored in storage 108 as discussed with reference to FIG. 1.

format a portion of the received point cloud data according to a raster-graphic image file standard to create a data frame (e.g., the data is organized into one or more two-dimensional arrays and a compression is applied in a PNG or JPEG-LS format, which is taught in ¶ [0029] above); and compress the data frame using a lossless compression algorithm (e.g., the data is encoded with a lossless compression encoding method, which is taught in ¶ [0029] above).
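For context, the ¶ [0029] pipeline that the examiner reads onto claim 1 can be sketched in a few lines: quantize floating-point X,Y,Z values to fixed point, pack them into 2-D arrays, and encode with a lossless raster codec (PNG). A minimal sketch; the scan shape, quantization scale, and file names are illustrative assumptions, not taken from Beek:

```python
import numpy as np
from PIL import Image  # Pillow; its PNG encoder is lossless

# Illustrative scan: 32 beams x 1800 samples of floating-point X, Y, Z.
rng = np.random.default_rng(0)
points = rng.uniform(-100.0, 100.0, size=(32, 1800, 3)).astype(np.float32)

# (b) Quantize float -> fixed point. The 1 cm scale is an assumption;
# roughly +/-327 m at 1 cm resolution fits a 16-bit integer after offsetting.
fixed = np.round(points * 100.0).astype(np.int32)
u16 = np.clip(fixed + 2**15, 0, 2**16 - 1).astype(np.uint16)

# (c) Pack each coordinate into its own 2-D array (one of the ¶ [0042] options).
# (d) Encode each 2-D array as a 16-bit grayscale PNG -- lossless, so the
#     fixed-point values are recovered exactly on decode.
for i, axis in enumerate("XYZ"):
    Image.fromarray(u16[:, :, i], mode="I;16").save(f"scan_{axis}.png")
```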
Re claim 2: Beek discloses the apparatus of claim 1, wherein the raster-graphic image file standard is selected from the group consisting of: Portable Network Graphics (PNG); Graphics Interchange Format (GIF); Tag Image File Format (TIFF); WebP; High Efficiency Video Coding (HEVC); High Efficiency Image Format (HEIF); Advanced Video Coding (AVC); and Joint Photographic Experts Group (JPEG) 2000 (e.g., JPEG-LS and PNG are utilized as the raster-graphic file standard for compression, which is taught in ¶ [0029] above).

Re claim 3: Beek discloses the apparatus of claim 2, wherein the raster-graphic image file standard is PNG (e.g., JPEG-LS and PNG are utilized as the raster-graphic file standard for compression, which is taught in ¶ [0029] above).

Re claim 4: Beek discloses the apparatus of claim 1, wherein: the LiDAR sensor comprises a first coordinate system (e.g., an X-Y-Z coordinate system is used within the system, which is taught in ¶ [0029] above; in the invention, the first coordinate system used is the raw distance calculations); the point cloud data comprises a plurality of point data sets respectively associated with a plurality of points in an environment surrounding the AV (e.g., X, Y, Z point positions or point cloud data are associated with points in an area around the vehicle, which is taught in ¶ [0029] above); each point data set comprises a 3D position of the associated point defined in the first coordinate system (e.g., each point data set is a location within the X, Y, Z area around the vehicle, which is taught in ¶ [0029] above); a second coordinate system is fixedly defined with respect to the AV (e.g., the distance from the car to the object is defined, which is taught as the raw distance data in ¶ [0029] above; in addition, the X, Y, Z coordinates are based on the position of the vehicle); and a third coordinate system is fixedly defined in the environment (e.g., the azimuth angles of information around the vehicle are within another coordinate system, which is taught in ¶ [0040]):

[0040] (a) The conversion of raw distance values coming from the lidar sensor data to X,Y,Z point positions is dependent on the specific lidar sensor and its calibration information. For example, for a Velodyne® lidar sensor with 16-beam, 32-beam or 64-beam sensor, laser pulses are emitted at specific azimuth angles and elevation angles. These angles are specified by a sensor calibration file as well as parameters in the raw lidar data stream (e.g., such as illustrated in FIG. 2 as input to operation 204). After conversion to X,Y,Z data, this parameter data is no longer needed.

Re claim 5: Beek discloses the apparatus of claim 4, wherein each point data set comprises: a distance from the LiDAR sensor to the respectively associated point (e.g., the distance from the vehicle to a point is associated with a point in the cloud, which is taught in ¶ [0029] above); and a directional parameter defining a vector from the LiDAR sensor to the respectively associated point (e.g., directions from the LiDAR sensor to the object are determined by the system, which is taught in ¶¶ [0025] and [0042]):

[0025] Furthermore, LIDAR can accurately measure ranges by utilizing light in the form of a pulsed laser. As discussed herein, a “laser” beam generally refers to electromagnetic radiation that is spatially and temporally coherent. Also, while some embodiments are discussed with reference to laser beams, other types of electromagnetic beams capable of detecting ranges or obstacle detection can also be used such as ultrasound, infrared, or other types of beams. LIDAR sensors typically measure ranges to objects in the environment by a process of scanning, that is, by modulating the direction of one or multiple laser beams across a range of azimuth angles and elevation angles and repeatedly taking distance/range measurements. This process of scanning the environment results in a series of distance/range measurements in a particular order, which depends on the specific scanning pattern.

[0042] (c) The goal of this operation is to “pack” the X,Y,Z position values into one or more 2-D array(s) that can be directly encoded by available image compression methods in the next step. Packing the X,Y,Z position values into 2-D arrays can be relatively low overhead in the case of lidar data. A lidar scan in itself is organized already on a grid, namely across a grid of azimuth and elevation directions. For example, a scan from a Velodyne HDL-32 sensor includes 32 “rows” of data, where each “row” may contain about 1850 samples. Although the position values themselves are points in 3-D space, the scan has a grid structure. Hence, the X,Y,Z values simply can be moved to 2-D “image-like” arrays in 1-to-1 fashion without problem. A few basic options exist, similar to the options by which natural R,G,B images are stored in 2-D arrays. A first option is to pack X values into a single 2-D array, pack Y values into a 2-D array, and pack Z values into a 2-D array. This is similar to encoding the R, G, and B “channels” of a natural image into 3 arrays. A second option is to pack X, Y and Z values in interleaved manner in the same row of the same array, i.e. X00,Y00,Z00,X01,Y01,Z01,X02,Y02,Z02, etc. This is similar to the way that R, G and B values for natural images can be stored in a single array, where values are interleaved on a sample basis. A third option is to pack X, Y and Z values in the same array, but interleaving on a line basis. This means a first line contains only X values, the next line contains only Y values, the next line contains only Z values, etc.
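For context, the three packing layouts described in ¶ [0042] correspond to simple array reshapes. A minimal sketch; the scan shape and variable names are chosen here for illustration:

```python
import numpy as np

# Illustrative scan: (rows, cols, 3) of fixed-point X, Y, Z values.
scan = np.arange(32 * 1850 * 3, dtype=np.uint16).reshape(32, 1850, 3)

# Option 1: separate X, Y, Z planes, like R, G, B channels of an image.
planes = scan[..., 0], scan[..., 1], scan[..., 2]   # three (32, 1850) arrays

# Option 2: sample-interleaved rows: X00,Y00,Z00,X01,Y01,Z01,...
sample_interleaved = scan.reshape(32, 1850 * 3)

# Option 3: line-interleaved rows: an X row, then a Y row, then a Z row, ...
line_interleaved = scan.transpose(0, 2, 1).reshape(32 * 3, 1850)
```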
Re claim 6: Beek discloses the apparatus of claim 4, wherein the instructions further cause the processor to transform the 3D positions of the plurality of points in the point cloud data from the first coordinate system into the second coordinate system before the data frame is compressed (e.g., before performing a compression operation, the system transitions the 3D positions of the points found in the point cloud data from the raw data to the X, Y, Z coordinate system, which is taught in ¶ [0029] above).

Re claim 7: Beek discloses the apparatus of claim 6, wherein the instructions further cause the processor to transform the 3D positions of the plurality of points in the point cloud data from the second coordinate system into the third coordinate system before the data frame is compressed (e.g., the system packs the X, Y, Z coordinate values into one or more 2D arrays before applying a compression method, which is taught in ¶ [0029] above).

Re claim 8: Beek discloses a computer-implemented method for compressing a point cloud, comprising: receiving point cloud data from a Light Detection And Ranging (LiDAR) sensor coupled to an Autonomous Vehicle (AV) (e.g., point cloud data is received from a lidar sensor, which is taught in ¶ [0029] above; this system is applicable to autonomous driving systems, which is taught in ¶ [0018] above); formatting a portion of the received data according to a raster-graphic image file standard to create a data frame (e.g., the data is organized into one or more two-dimensional arrays and a compression is applied in a PNG or JPEG-LS format, which is taught in ¶ [0029] above); and compressing the data frame using a lossless compression algorithm (e.g., the data is encoded with a lossless compression encoding method, which is taught in ¶ [0029] above).

Re claim 9: Beek discloses the apparatus of claim 8, wherein the raster-graphic image file standard is selected from the group consisting of: Portable Network Graphics (PNG); Graphics Interchange Format (GIF); Tag Image File Format (TIFF); WebP; High Efficiency Video Coding (HEVC); High Efficiency Image Format (HEIF); Advanced Video Coding (AVC); and Joint Photographic Experts Group (JPEG) 2000 (e.g., JPEG-LS and PNG are utilized as the raster-graphic file standard for compression, which is taught in ¶ [0029] above).

Re claim 10: Beek discloses the apparatus of claim 9, wherein the raster-graphic image file standard is PNG (e.g., JPEG-LS and PNG are utilized as the raster-graphic file standard for compression, which is taught in ¶ [0029] above).

Re claim 11: Beek discloses the apparatus of claim 8, wherein: the LiDAR sensor comprises a first coordinate system (e.g., an X-Y-Z coordinate system is used within the system, which is taught in ¶ [0029] above; in the invention, the first coordinate system used is the raw distance calculations); the point cloud data comprises a plurality of point data sets respectively associated with a plurality of points in an environment surrounding the AV (e.g., X, Y, Z point positions or point cloud data are associated with points in an area around the vehicle, which is taught in ¶ [0029] above); each point data set comprises a 3D position of the associated point defined in the first coordinate system (e.g., each point data set is a location within the X, Y, Z area around the vehicle, which is taught in ¶ [0029] above); a second coordinate system is fixedly defined with respect to the AV (e.g., the distance from the car to the object is defined, which is taught as the raw distance data in ¶ [0029] above; in addition, the X, Y, Z coordinates are based on the position of the vehicle); and a third coordinate system is fixedly defined in the environment (e.g., the azimuth angles of information around the vehicle are within another coordinate system, which is taught in ¶ [0040] above).

Re claim 12: Beek discloses the apparatus of claim 11, wherein each point data set comprises: a distance from the LiDAR sensor to the respectively associated point (e.g., the distance from the vehicle to a point is associated with a point in the cloud, which is taught in ¶ [0029] above); and a directional parameter defining a vector from the LiDAR sensor to the respectively associated point (e.g., directions from the LiDAR sensor to the object are determined by the system, which is taught in ¶¶ [0025] and [0042] above).

Re claim 13: Beek discloses the apparatus of claim 11, wherein the instructions further cause the processor to transform the 3D positions of the plurality of points in the point cloud data from the first coordinate system into the second coordinate system before the data frame is compressed (e.g., before performing a compression operation, the system transitions the 3D positions of the points found in the point cloud data from the raw data to the X, Y, Z coordinate system, which is taught in ¶ [0029] above).

Re claim 14: Beek discloses the apparatus of claim 11, wherein the instructions further cause the processor to transform the 3D positions of the plurality of points in the point cloud data from the second coordinate system into the third coordinate system before the data frame is compressed (e.g., the system packs the X, Y, Z coordinate values into one or more 2D arrays before applying a compression method, which is taught in ¶ [0029] above).
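For context, claims 6-7, 13-14, and (below) 19-20 recite transforming point positions from the sensor frame into a vehicle-fixed frame, and then into an environment-fixed frame, before compression. A sketch of that chaining with 4x4 homogeneous transforms; the example poses are placeholders, not taken from the application or from Beek:

```python
import numpy as np

def apply_transform(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]

# Placeholder poses: sensor mounted 1.8 m up on the roof; AV located at
# (120, -45, 0) in the environment frame. Rotations omitted for brevity.
T_sensor_to_vehicle = np.eye(4)
T_sensor_to_vehicle[:3, 3] = [0.0, 0.0, 1.8]
T_vehicle_to_world = np.eye(4)
T_vehicle_to_world[:3, 3] = [120.0, -45.0, 0.0]

points_sensor = np.random.default_rng(1).uniform(-50.0, 50.0, (1000, 3))
points_vehicle = apply_transform(points_sensor, T_sensor_to_vehicle)  # claims 6/13/19
points_world = apply_transform(points_vehicle, T_vehicle_to_world)    # claims 7/14/20
# ...then quantize, pack, and losslessly compress as in claim 1.
```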
Re claim 15: Beek discloses a non-transitory computer-readable storage medium comprising instructions for compressing a point cloud that, when loaded into a processor and executed (e.g., a tangible machine-readable medium is used to store instructions of the invention to perform the features, which is taught in ¶¶ [0103] and [0104] above), cause the processor to: receive point cloud data from a Light Detection And Ranging (LiDAR) sensor coupled to an Autonomous Vehicle (AV) (e.g., point cloud data is received from a lidar sensor, which is taught in ¶ [0029] above; this system is applicable to autonomous driving systems, which is taught in ¶ [0018] above); format a portion of the received data according to a raster-graphic image file standard to create a data frame (e.g., the data is organized into one or more two-dimensional arrays and a compression is applied in a PNG or JPEG-LS format, which is taught in ¶ [0029] above); and compress the data frame using a lossless compression algorithm (e.g., the data is encoded with a lossless compression encoding method, which is taught in ¶ [0029] above).

Re claim 16: Beek discloses the apparatus of claim 15, wherein the raster-graphic image file standard is selected from the group consisting of: Portable Network Graphics (PNG); Graphics Interchange Format (GIF); Tag Image File Format (TIFF); WebP; High Efficiency Video Coding (HEVC); High Efficiency Image Format (HEIF); Advanced Video Coding (AVC); and Joint Photographic Experts Group (JPEG) 2000 (e.g., JPEG-LS and PNG are utilized as the raster-graphic file standard for compression, which is taught in ¶ [0029] above).

Re claim 17: Beek discloses the apparatus of claim 16, wherein the raster-graphic image file standard is PNG (e.g., JPEG-LS and PNG are utilized as the raster-graphic file standard for compression, which is taught in ¶ [0029] above).

Re claim 18: Beek discloses the apparatus of claim 15, wherein: the LiDAR sensor comprises a first coordinate system (e.g., an X-Y-Z coordinate system is used within the system, which is taught in ¶ [0029] above; in the invention, the first coordinate system used is the raw distance calculations); the point cloud data comprises a plurality of point data sets respectively associated with a plurality of points in an environment surrounding the AV (e.g., X, Y, Z point positions or point cloud data are associated with points in an area around the vehicle, which is taught in ¶ [0029] above); each point data set comprises a 3D position of the associated point defined in the first coordinate system (e.g., each point data set is a location within the X, Y, Z area around the vehicle, which is taught in ¶ [0029] above); a second coordinate system is fixedly defined with respect to the AV (e.g., the distance from the car to the object is defined, which is taught as the raw distance data in ¶ [0029] above; in addition, the X, Y, Z coordinates are based on the position of the vehicle); and a third coordinate system is fixedly defined in the environment (e.g., the azimuth angles of information around the vehicle are within another coordinate system, which is taught in ¶ [0040] above).

Re claim 19: Beek discloses the apparatus of claim 18, wherein the instructions further cause the processor to transform the 3D positions of the plurality of points in the point cloud data from the first coordinate system into the second coordinate system before the data frame is compressed (e.g., before performing a compression operation, the system transitions the 3D positions of the points found in the point cloud data from the raw data to the X, Y, Z coordinate system, which is taught in ¶ [0029] above).

Re claim 20: Beek discloses the apparatus of claim 19, wherein the instructions further cause the processor to transform the 3D positions of the plurality of points in the point cloud data from the second coordinate system into the third coordinate system before the data frame is compressed (e.g., the system packs the X, Y, Z coordinate values into one or more 2D arrays before applying a compression method, which is taught in ¶ [0029] above).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Takagi discloses compression of LIDAR data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAD S DICKERSON, whose telephone number is (571) 270-1351. The examiner can normally be reached Monday-Friday, 10 AM-6 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Tieu, can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHAD DICKERSON/
Primary Examiner, Art Unit 2682

Prosecution Timeline

Apr 24, 2023: Application Filed
May 31, 2025: Non-Final Rejection under §102 and §112
Jul 18, 2025: Applicant Interview (Telephonic)
Jul 18, 2025: Examiner Interview Summary
Sep 04, 2025: Response Filed
Dec 12, 2025: Final Rejection under §102 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602908
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
Granted Apr 14, 2026 · 2y 5m to grant
Patent 12603960
IMAGE ANALYSIS APPARATUS, IMAGE ANALYSIS SYSTEM, IMAGE ANALYSIS METHOD, PROGRAM, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM COMPRISING READING A PRINTED MATTER, ANALYZING CONTENT RELATED TO READING OF THE PRINTED MATTER AND ACQUIRING SUPPORT INFORMATION BASED ON AN ANALYSIS RESULT OF THE CONTENT FOR DISPLAY TO ASSIST A USER IN FURTHER READING OPERATIONS
Granted Apr 14, 2026 · 2y 5m to grant
Patent 12579817
Vehicle Control Device and Control Method Thereof for Camera View Control Based on Surrounding Environment Information
Granted Mar 17, 2026 · 2y 5m to grant
Patent 12522110
APPARATUS AND METHOD OF CONTROLLING THE SAME COMPRISING A CAMERA AND RADAR DETECTION OF A VEHICLE INTERIOR TO REDUCE A MISSED OR FALSE DETECTION REGARDING REAR SEAT OCCUPATION
Granted Jan 13, 2026 · 2y 5m to grant
Patent 12519896
IMAGE READING DEVICE COMPRISING A LENS ARRAY INCLUDING FIRST LENS BODIES AND SECOND LENS BODIES, A LIGHT RECEIVER AND LIGHT BLOCKING PLATES THAT ARE BETWEEN THE LIGHT RECEIVER AND SECOND LENS BODIES, THE THICKNESS OF THE LIGHT BLOCKING PLATES EQUAL TO OR GREATER THAN THE SECOND LENS BODIES THICKNESS
Granted Jan 06, 2026 · 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 63%
With Interview: 86% (+23.0%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 600 resolved cases by this examiner. Grant probability is derived from the career allow rate.
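
The projection arithmetic appears to be additive: the career allow rate is the baseline, and the interview lift shifts it, capped at 100%. A sketch of that assumed model (the report does not publish its formula):

```python
def projected_grant_probability(career_allow_rate: float,
                                interview_lift: float = 0.0) -> float:
    """Assumed model: baseline is the career allow rate, shifted
    additively by the interview lift and capped at 100%."""
    return min(career_allow_rate + interview_lift, 1.0)

print(projected_grant_probability(0.63))        # 0.63 -> 63% baseline
print(projected_grant_probability(0.63, 0.23))  # 0.86 -> 86% with interview
```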
