Prosecution Insights
Last updated: April 19, 2026
Application No. 18/167,796

SYSTEM AND METHOD OF CALIBRATING CAMERA AND LIDAR SENSOR THROUGH HIGH-RESOLUTION CONVERSION OF LIDAR DATA

Non-Final OA — §102, §103
Filed: Feb 10, 2023
Examiner: MILLER, RONDE LEE
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Industry Foundation of Chonnam National University
OA Round: 1 (Non-Final)

Grant Probability: 73% — Favorable
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (16 granted / 22 resolved; +10.7% vs TC avg) — above average
Interview Lift: +37.5%, measured on resolved cases with interview — a strong lift
Typical Timeline: 2y 11m average prosecution; 26 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 22 resolved cases

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The IDSs filed 02/10/2023 and 06/18/2025 have been received and considered. Claims 1–7, all of the claims pending in the application, have been rejected.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

“a data reception unit” in claim 1;
“a feature point extraction unit” in claims 1, 4, and 5;
“a conversion information derivation unit” in claim 1;
“a data fusing unit” in claim 1; and
“a conversion information extraction unit” in claim 6.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3, 5, and 7 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US Publication No. 2022/0055652 A1 to LEE et al. (hereinafter LEE).
Claim 1

Regarding claim 1, LEE teaches a system of calibrating a camera and a LiDAR sensor through high-resolution conversion of LiDAR data, the system comprising:

a data reception unit for receiving each of at least one image data obtained by photographing a target object by the camera and at least one LiDAR data obtained by sensing distance and direction information on the target object by the LiDAR sensor (Fig. 1, #'s 110 and 120; "Referring to FIG. 1, a system for data fusion between heterogeneous sensors 100 according to an embodiment of the present disclosure may include a first sensor 110, a second sensor 120 different from the first sensor 110, a marker board 130, and an apparatus for data fusion 140. As an example, the first sensor 110 and the second sensor 120 may be one of various sensors, such as a camera sensor, a LiDAR sensor, or a laser sensor.", Paragraph [0044]; "A communication device 141 in the apparatus for data fusion 140 may receive, from the first sensor 110 and the second sensor 120, data of the marker board 130 photographed by the first sensor 110 and the second sensor 120. To this end, the communication device 141 may be in wired or wireless connection with the first sensor 110 and the second sensor 120, and may receive data from both sensors 110 and 120.", Paragraph [0047]);

[Image reproduced in the original Office Action]

a feature point extraction unit for extracting feature points of each of the received image data and LiDAR data ("Based on a field of view (FOV) of the camera sensor, the processor 142 may segment the point cloud data collected by the LiDAR sensor. At this time, the field of view of the camera sensor may be extracted using the unique calibration parameters of the camera sensor. The segmented point cloud data may include the sensed marker board 130 as well as various objects around the marker board 130…"; "The processor 142 may identify a plane having the same normal and curvature by calculating normals and curvatures of all objects, and may thus extract only the marker board 130 based on the point cloud data corresponding to the field of view of the camera sensor. At this time, the identified plane may correspond to the marker board 130, and the processor 142 may extract only the identified plane, and remove the remaining parts.", Paragraphs [0060–0061]);

a conversion information derivation unit for deriving image conversion information for fusing the feature points of each image data with the extracted feature points of the image data ("First, when one of the heterogeneous sensors is the camera sensor, the processor 142 may estimate unique calibration parameters of the camera sensor using a checker board, wherein the unique calibration parameters correspond to intrinsic characteristics, such as focal length, a distortion, and an image center, of the camera sensor… As a result of performing calibration using the unique calibration parameters of the camera sensor, the processor 142 may obtain, for example, a camera matrix, a distortion coefficient, and a camera projection matrix. At this time, the camera projection matrix may be obtained from the product of an intrinsic matrix and an extrinsic matrix, as shown in Equation 1, and the intrinsic matrix I may be decomposed into a product of a 2D translation matrix, a 2D scaling matrix, and a 2D shear matrix, as shown in Equation 2.", Paragraphs [0056–0057]), and deriving LiDAR conversion information for fusing the feature points of each LiDAR data with the feature points of the image data and the derived image conversion information ("First, the apparatus for data fusion 140 may translate 3D point cloud data identified by the LiDAR sensor into 2D point cloud data, and may replace the intensity value of the 2D point cloud data with a color value for a pixel of 2D image data at the same position as the translated 2D point cloud data. At this time, an alignment score between the pixel and the point may be calculated based on a ratio of the number of pixels of the image data accurately mapped to points of the point cloud data translated into RGB.", Paragraph [0104]); and

a data fusing unit for fusing at least two or more feature points of the image data with the derived image conversion information ("wherein the processor may be configured to identify image data and point cloud data for a search area by each of the camera sensor and the LiDAR sensor that are calibrated using a marker board having a hole", Paragraph [0023]), where this is a common step in the calibration process of a sensor, and fusing at least two or more feature points of the LiDAR data with the derived LiDAR conversion information ("wherein the processor may be configured to identify image data and point cloud data for a search area by each of the camera sensor and the LiDAR sensor that are calibrated using a marker board having a hole", Paragraph [0023]), where this is a common step in the calibration process of a sensor, to fuse the fused feature points of the image data and the fused feature points of the LiDAR data ("Then, the processor 142 may perform calibration between the heterogeneous sensors using the determined translation vector to enable matching between the coordinate system of the first sensor 110 and the coordinate system of the second sensor 120, and may thus fuse the identified data from the first sensor 110 and the identified data from the second sensor 120.", Paragraph [0050]; "At S570, the apparatus of data fusion 140 may perform calibration between the camera sensor and the LiDAR sensor by projecting the point cloud data of the LiDAR sensor onto the image data of the camera sensor using the determined translation vector, and thereby generate the fusion data. In addition, the apparatus of data fusion 140 may more accurately detect distant targets using the calibrated camera sensor and LiDAR sensor.", Paragraph [0089]).
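The passage above cites LEE's Equations 1 and 2 without reproducing them. For orientation only, the relationships they describe (a projection matrix as the product of an intrinsic and an extrinsic matrix, with the intrinsic matrix factored into 2D translation, scaling, and shear) take the following conventional pinhole-camera form. The symbols f_x, f_y (focal lengths), s (skew), c_x, c_y (image center), R, and t are standard notation rather than LEE's, and the factor ordering shown is one consistent choice, not necessarily LEE's exact formulation:

```latex
% Editorial sketch of the standard pinhole-camera relationships the quoted
% passage calls Equation 1 and Equation 2 (not LEE's exact notation).
\[
P \;=\; I\,E, \qquad
E \;=\; \bigl[\, R \;\big|\; \mathbf{t} \,\bigr], \qquad
I \;=\;
\underbrace{\begin{bmatrix} 1 & 0 & c_x \\ 0 & 1 & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{\text{2D translation}}
\underbrace{\begin{bmatrix} f_x & 0 & 0 \\ 0 & f_y & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{2D scaling}}
\underbrace{\begin{bmatrix} 1 & s/f_x & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{2D shear}}
\;=\;
\begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\]
```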
Claim 3

Regarding claim 3, dependent on claim 1, LEE teaches the invention as claimed in claim 1. LEE further teaches wherein the image conversion information is a conversion matrix between each image data (Rejected as applied to claim 1), where the conversion matrix between image data would generally be required to perform the camera calibration, and the LiDAR conversion information is a conversion matrix between each LiDAR data (Rejected as applied to claim 1), where the conversion matrix between point cloud data would generally be required to perform the LiDAR sensor calibration.

Claim 5

Regarding claim 5, dependent on claim 1, LEE teaches the invention as claimed in claim 1. LEE further teaches wherein the feature point extraction unit extracts feature points from the fused LiDAR data (Rejected as applied to claim 1).

Claim 7

Claim 7, an independent method claim, is rejected for the same reasons as applied to claim 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 4, and 6 are rejected under 35 U.S.C. 103 as being unpatentable over US Publication No. 2022/0055652 A1 to LEE et al. (hereinafter LEE) in view of the non-patent literature "An Effective Camera-to-Lidar Spatiotemporal Calibration Based on a Simple Calibration Target" to Grammatikopoulos et al. (hereinafter Grammatikopoulos).

Claim 2

Regarding claim 2, dependent on claim 1, LEE teaches the invention as claimed in claim 1. LEE, although he captures images of a marker board with a camera and a LiDAR sensor for the purpose of sensor calibration, does not explicitly teach wherein any one of the image data and the LiDAR data is identified by projecting the target object onto a marker board.

However, Grammatikopoulos teaches wherein any one of the image data and the LiDAR data is identified by projecting the target object onto a marker board (Figure 9; "The calibration was performed following the Matlab implementation of this method [15]. In Figure 9, four such frames of the chessboard are presented, along with the corresponding Lidar points projected on top (point stripes in red) and after the determination of the camera's exterior orientation.", Section 4, Evaluation and Further Results).

[Image reproduced in the original Office Action]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of LEE to incorporate projecting the LiDAR points onto the chessboard, as disclosed by Grammatikopoulos. LEE generally teaches using the camera and the LiDAR sensors to capture an image of the same marker board for calibration purposes, as shown in Figure 1; Grammatikopoulos shows, using Figure 9, a more accurate methodology for calibrating a sensor by projecting onto a chessboard. It would have been obvious to select the more accurate methodology to implement in the calibration process of LEE.
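Both mappings above turn on the same operation: projecting 3D LiDAR points into the 2D image plane and pairing them with pixels (LEE's paragraph [0104]; Grammatikopoulos's Figure 9 overlays). The sketch below is a minimal, editorial illustration of that projection under an assumed projection matrix; every value in it (the intrinsics K, the identity extrinsics, the toy points, the blank image) is hypothetical and taken from neither reference.

```python
# Illustrative sketch (editor's addition, not from the Office Action):
# project 3D LiDAR points through a hypothetical camera projection matrix
# and sample the image color at each projected pixel -- the kind of
# 3D-to-2D mapping described in LEE's paragraph [0104].
import numpy as np

def project_points(points_3d: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project Nx3 points to Nx2 pixel coordinates via a 3x4 matrix P."""
    homo = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # Nx4 homogeneous
    uvw = homo @ P.T                                             # Nx3
    return uvw[:, :2] / uvw[:, 2:3]                              # perspective divide

# Hypothetical intrinsics K and extrinsics [R | t] (identity rotation, zero shift).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])
P = K @ Rt

points = np.array([[0.1, -0.2, 5.0], [0.0, 0.0, 10.0]])  # toy point cloud
pixels = project_points(points, P)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder RGB frame
for (u, v), pt in zip(pixels, points):
    u, v = int(round(u)), int(round(v))
    if 0 <= u < 640 and 0 <= v < 480:
        color = image[v, u]  # replace the point's intensity with this pixel's color
        print(f"point {pt} -> pixel ({u}, {v}), color {color}")
```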
Claim 4

Regarding claim 4, dependent on claim 2, LEE, in view of Grammatikopoulos, teaches the invention as claimed in claim 2. LEE does not teach wherein the feature point extraction unit derives a plurality of edge information of which the edges are boundaries of the marker board, and connects the plurality of derived edge information with straight lines, thereby deriving intersection points at which the edge information and the straight lines intersect.

However, Grammatikopoulos further teaches wherein the feature point extraction unit derives a plurality of edge information of which the edges are boundaries of the marker board, and connects the plurality of derived edge information with straight lines, thereby deriving intersection points at which the edge information and the straight lines intersect (Figure 9; "According to Zhou et al.'s method, the common observations of the 3D chessboard edges extracted automatically in both the camera and the Lidar system allow the accurate estimation of the external parameters. The calibration was performed following the Matlab implementation of this method [15].", Section 4, Evaluation and Further Results).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of LEE, in view of Grammatikopoulos, to incorporate the use of a method that automatically extracts the chessboard edges by both the camera and LiDAR sensors, as disclosed by Grammatikopoulos. The suggestion/motivation for doing so would have been to allow the system to align features based on real-world objects in the scene by comparing edges, or for sensor fusion calibration purposes.

Claim 6

Regarding claim 6, dependent on claim 1, LEE teaches the invention as claimed in claim 1. LEE, although he teaches the use of the translation information, does not explicitly teach wherein the conversion information extraction unit derives rotation information and translation information of any one of the image data and the LiDAR data in a three-dimensional space.

However, Grammatikopoulos teaches wherein the conversion information extraction unit derives rotation information and translation information of any one of the image data and the LiDAR data in a three-dimensional space ("Finally, the calibrated relative rotations and translations between the four cameras (camera rig) and the Lidar sensor of our mobile mapping system were assessed during a SLAM and a texture mapping procedure.", Section 4, Evaluation and Further Results).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of LEE to incorporate the use of both the rotation and translation information, as disclosed by Grammatikopoulos. The suggestion/motivation for doing so would have been to obtain more information on the sensors to further increase the accuracy of extracted data for the generation of a 3D map.
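Claim 6's rotation and translation information in three-dimensional space is, in practice, a rigid extrinsic transform. One textbook way to derive it from corresponding 3D points is the Kabsch (orthogonal Procrustes) solution sketched below; this is an editorial illustration of the general technique, not the specific method of LEE or Grammatikopoulos, and all data in it is synthetic.

```python
# Illustrative sketch (editor's addition): recover the rotation R and
# translation t between two corresponding 3D point sets via Kabsch/SVD.
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Find R (3x3) and t (3,) such that dst ~= src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # correct an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic test: make a random proper rotation, transform toy LiDAR points,
# then check the estimator recovers the ground-truth R and t.
rng = np.random.default_rng(0)
lidar_pts = rng.normal(size=(20, 3))
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_R) < 0:
    true_R[:, 0] *= -1                           # ensure det(R) = +1
camera_pts = lidar_pts @ true_R.T + np.array([0.5, -0.2, 1.0])

R, t = rigid_transform(lidar_pts, camera_pts)
print(np.allclose(R, true_R, atol=1e-6), t)      # True, ~[0.5, -0.2, 1.0]
```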
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday, 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RONDE LEE MILLER/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698

Prosecution Timeline

Feb 10, 2023 — Application Filed
Jan 08, 2026 — Non-Final Rejection, §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573215 — LEARNING APPARATUS, LEARNING METHOD, OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, LEARNING SUPPORT SYSTEM AND LEARNING SUPPORT METHOD
Granted Mar 10, 2026 • 2y 5m to grant

Patent 12548114 — METHOD FOR CODE-LEVEL SUPER RESOLUTION AND METHOD FOR TRAINING SUPER RESOLUTION MODEL THEREFOR
Granted Feb 10, 2026 • 2y 5m to grant

Patent 12524833 — X-RAY DIAGNOSIS APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM
Granted Jan 13, 2026 • 2y 5m to grant

Patent 12502905 — SECURE DOCUMENT AUTHENTICATION
Granted Dec 23, 2025 • 2y 5m to grant

Patent 12505581 — ONLINE TRAINING COMPUTER VISION TASK MODELS IN COMPRESSION DOMAIN
Granted Dec 23, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 99% (+37.5%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
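A minimal sketch of how these headline figures appear to follow from the career data above, assuming (our assumptions, not the tool's documented method) that the interview lift is applied multiplicatively to the baseline rate and the result is capped at 99%:

```python
# Editorial sketch of the apparent derivation of the projection figures.
granted, resolved = 16, 22
allow_rate = granted / resolved                  # 0.727 -> shown as 73%
tc_average = allow_rate - 0.107                  # the "+10.7% vs TC avg" note implies ~62%
interview_lift = 0.375                           # the reported +37.5% relative lift
with_interview = min(allow_rate * (1 + interview_lift), 0.99)  # assumed cap at 99%
print(f"{allow_rate:.0%} baseline, {with_interview:.0%} with interview")
# -> 73% baseline, 99% with interview
```

Note that 16/22 × 1.375 is exactly 1.0, so the 99% figure is consistent with a capped product rather than a raw one.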
