Prosecution Insights
Last updated: April 19, 2026
Application No. 18/367,425

DATA FUSION METHOD AND APPARATUS FOR LiDAR SYSTEM AND READABLE STORAGE MEDIUM

Status: Non-Final Office Action (§103)
Filed: Sep 12, 2023
Examiner: RAHMAN, MOHAMMAD J
Art Unit: 2487
Tech Center: 2400 (Computer Networks)
Assignee: Innovusion, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 5m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 79% (685 granted / 868 resolved; +20.9% vs. Tech Center average)
Interview Lift: +10.7% on resolved cases with an interview (moderate)
Typical Timeline: 2y 5m average prosecution; 41 applications currently pending
Career History: 909 total applications across all art units

Statute-Specific Performance

§101: 6.3% (-33.7% vs. TC avg)
§103: 56.0% (+16.0% vs. TC avg)
§102: 3.0% (-37.0% vs. TC avg)
§112: 10.4% (-29.6% vs. TC avg)

Based on career data from 868 resolved cases; Tech Center averages are estimates.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Detailed Action

This Office Action is in response to an application filed on 09/12/2023, in which claims 1-15 are pending and are being examined.

Priority

Acknowledgement is made of applicant's claim for foreign priority under 35 U.S.C. § 119(a)-(d) to CHINA 202211140938.4, filed on 09/20/2022. The certified copy of the priority document was filed on 11/01/2023.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/12/2023, 05/10/2024, and 03/06/2026 are in compliance with the provisions of 37 CFR 1.97 and 37 CFR 1.98. Accordingly, the information disclosure statements are being considered by the examiner.

Examiner's Note

Claims 1-11 refer to "A data fusion method", claims 12-13 refer to "A data fusion apparatus", claim 14 refers to "A computer device", and claim 15 refers to "A non-transitory computer-readable storage medium". Claims 12-15 are similarly rejected in light of the rejection of claims 1-11, any obvious combination of the rejections of claims 1-11, or because the differences would be obvious to one of ordinary skill in the art. Applicant is requested to keep the scope of all the independent claims similar to advance prosecution.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20220244057 A1), hereinafter Wang, in view of Omar et al. (US 20230306621 A1), hereinafter Omar.

Regarding claim 1, Wang discloses a data fusion method for a system, wherein the system comprises a source and at least one secondary system, and the data fusion method comprises (Abstract): obtaining a first point cloud data set of the system at a first time point and a second point cloud data set of the system at a second time point separately, wherein the first point cloud data set comprises first point cloud data of the source and first point cloud data of the at least one secondary, and the second point cloud data set comprises second point cloud data of the source and second point cloud data of the at least one secondary (Fig. 2, [0022]); determining a plurality of candidate transformation matrix sets based on the first point cloud data set, wherein each candidate transformation matrix set corresponds to one secondary and comprises a plurality of candidate transformation matrices for transforming point cloud data of the corresponding secondary into a coordinate system of the source (Fig. 3); selecting a target transformation matrix from a plurality of candidate transformation matrices in each of the plurality of candidate transformation matrix sets based on the second point cloud data set ([0069]); and fusing point cloud data of the source and point cloud data of the at least one secondary based on a target transformation matrix corresponding to each secondary (Figs. 2-4).

Wang discloses all the elements of claim 1, but Wang does not appear to explicitly disclose, in the cited sections, a LiDAR system. However, Omar, from the same or a similar field of endeavor, teaches a LiDAR system ([0004]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang to incorporate the teachings of Omar for alignment of image data processing (Omar, Abstract). Similar reasoning/motivation of modification can be applied/extended to the other related/dependent claims.

Regarding claim 2, Wang in view of Omar discloses the method according to claim 1, wherein the determining a plurality of candidate transformation matrix sets based on the first point cloud data set comprises: for each of the at least one secondary LiDAR: determining a plurality of corresponding sets of homologous points from each of first point cloud data of the secondary LiDAR and the first point cloud data of the source LiDAR; calculating, based on the plurality of corresponding sets of homologous points, a plurality of preselected transformation matrices corresponding to the secondary LiDAR, wherein a preselected transformation matrix from coordinates in the point cloud data of the secondary LiDAR to coordinates in the point cloud data of the source LiDAR is determined based on homologous points in each set of the plurality of corresponding sets of homologous points; and determining a plurality of candidate transformation matrices respectively based on the plurality of preselected transformation matrices, to form a candidate transformation matrix set corresponding to the secondary LiDAR (Wang, Figs. 2-4, [0022]; Omar, [0010], [0019]).
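For orientation, claim 2's step of deriving a preselected transformation matrix from sets of homologous (corresponding) points is, in the standard rigid-registration formulation, a least-squares fit of a rotation and translation. The sketch below is an illustrative assumption, not the applicant's, Wang's, or Omar's actual implementation; it uses the classic Kabsch/SVD solution, and all names are hypothetical.

```python
import numpy as np

def rigid_transform_from_homologous_points(P, Q):
    """Estimate a 4x4 rigid transform mapping points P (secondary LiDAR
    frame) onto homologous points Q (source LiDAR frame), via the
    Kabsch algorithm (least-squares rotation plus translation)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)      # centroids of each cloud
    H = (P - cP).T @ (Q - cQ)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # optimal rotation
    t = cQ - R @ cP                              # matching translation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

Running this on several different subsets of homologous points would yield the "plurality of preselected transformation matrices" the claim recites, one per correspondence set.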
Regarding claim 3, Wang in view of Omar discloses the method according to claim 2, wherein the determining a plurality of candidate transformation matrices respectively based on the plurality of preselected transformation matrices, to form a candidate transformation matrix set corresponding to the secondary LiDAR comprises: applying the plurality of preselected transformation matrices to the first point cloud data of the corresponding secondary LiDAR separately to obtain a plurality of pieces of first transformed point cloud data in the coordinate system of the source LiDAR; calculating a first error value between each piece of first transformed point cloud data and the first point cloud data of the source LiDAR; and performing an iterative calculation on the corresponding preselected transformation matrix based on the first error value to determine a corresponding candidate transformation matrix (Wang, Figs. 2-4, [0022]; Omar, [0010], [0019], [0051]).

Regarding claim 4, Wang in view of Omar discloses the method according to claim 3, wherein the calculating a first error value between each piece of first transformed point cloud data and the first point cloud data of the source LiDAR comprises: calculating a plurality of first distances between a plurality of points in each piece of first transformed point cloud data and corresponding points in the first point cloud data of the source LiDAR; and determining the first error value based at least on the plurality of first distances (Wang, Figs. 2-4, [0022], [0051]; Omar, [0010], [0019], [0051]).

Regarding claim 8, Wang in view of Omar discloses the method according to claim 1, further comprising: performing orientation calibration on the first point cloud data set and/or the second point cloud data set; and removing noise or dynamic points from the first point cloud data set or the second point cloud data set (asserted as obvious to one of ordinary skill in the art).
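The first-error-value computation of claims 3-4, and the claim-1 selection and fusion steps it feeds, can be sketched roughly as follows. This is again a hypothetical illustration: the claims do not specify how "corresponding points" are found or how the error value is aggregated, so nearest-neighbour matching and a mean distance are assumptions here, and all function names are invented.

```python
import numpy as np

def apply_transform(T, pts):
    """Map an (N, 3) point cloud through a 4x4 homogeneous transform."""
    hom = np.hstack([pts, np.ones((len(pts), 1))])
    return (hom @ T.T)[:, :3]

def alignment_error(T, secondary_pts, source_pts):
    """A 'first error value' in the style of claim 4: mean distance from
    each transformed secondary point to its nearest source point."""
    moved = apply_transform(T, secondary_pts)
    d = np.linalg.norm(moved[:, None, :] - source_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def select_target_matrix(candidates, secondary_pts, source_pts):
    """Claim-1-style selection: keep the candidate transform with the
    lowest error on the second-time-point clouds."""
    return min(candidates,
               key=lambda T: alignment_error(T, secondary_pts, source_pts))

def fuse(source_pts, secondary_clouds, target_matrices):
    """Fusion step: map every secondary cloud into the source LiDAR's
    coordinate system and stack everything into one cloud."""
    mapped = [apply_transform(T, pts)
              for pts, T in zip(secondary_clouds, target_matrices)]
    return np.vstack([source_pts] + mapped)
```

Claim 3's "iterative calculation" would then refine each preselected matrix (ICP-style) until this error value stops improving, producing the candidate set from which the target matrix is selected.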
Regarding claim 9, Wang in view of Omar discloses the method according to claim 1, further comprising: obtaining a third point cloud data set of the LiDAR system online at a third time point, wherein the third point cloud data set comprises third point cloud data of the source LiDAR and third point cloud data of the at least one secondary LiDAR; and correcting the plurality of selected target transformation matrices based on the third point cloud data set (Wang, Figs. 2-4, [0022], [0051]; Omar, [0010], [0019], [0051]).

Regarding claims 12-15, see Examiner's Note.

Prior art considered but not relied on for rejection: Liu et al., US 2024034289 A1, claims 1-2.

Allowable Subject Matter

Claims 5-7 and 10-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD J RAHMAN, whose telephone number is (571) 270-7190. The examiner can normally be reached Monday-Friday, 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Mohammad J Rahman/ Primary Examiner, Art Unit 2487

Prosecution Timeline

Sep 12, 2023: Application Filed
Mar 25, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604001: SYSTEMS AND METHODS FOR BLOCK PARTITIONING AND INTERLEAVED CODING ORDER FOR MULTIVIEW VIDEO CODING (2y 5m to grant; granted Apr 14, 2026)
Patent 12593050: SYSTEMS AND METHODS FOR MULTIPLE BIT RATE CONTENT ENCODING (2y 5m to grant; granted Mar 31, 2026)
Patent 12593028: ENCODER WHICH GENERATES PREDICTION IMAGE TO BE USED TO ENCODE CURRENT BLOCK (2y 5m to grant; granted Mar 31, 2026)
Patent 12587656: INTRA PREDICTION MODE DERIVATION-BASED INTRA PREDICTION METHOD AND DEVICE (2y 5m to grant; granted Mar 24, 2026)
Patent 12587647: IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 90% (+10.7%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 868 resolved cases by this examiner. Grant probability is derived from the career allow rate.
