CTNF 18/278,007

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 2 is objected to because of the following informalities: on line 5 of claim 2, “a plane A point could” is recited; the examiner believes that “point could” should read “point cloud”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 1 recites “comprises” in both lines 3 and 8. Claim 8 recites “comprises” in both lines 2 and 8. It is unclear at which “comprises” the preamble of each claim ends. Therefore, claims 1 and 8, and dependent claims 2-7, 9, and 10, are indefinite. For the purpose of prosecution, the examiner has selected the first “comprises” found in each claim as the end of the preamble, with the information following it treated as claim limitations.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101. The claimed invention is directed to the abstract concept of performing mental steps without significantly more. The claims recite the following abstract concepts, shown in bold, in Claim 1.
A joint calibration method for external parameters of vehicle-mounted laser radars, wherein a preset calibration scene comprises a plane A perpendicular to a Z-axis of a reference radar S, a plane B not perpendicular to the Z-axis of the reference radar S, and a calibration pillar E parallel to the Z-axis of the reference radar S, wherein the reference radar S and a to-be-calibrated radar C are located on a same vehicle, and the plane A, the plane B and the calibration pillar E are all located within a scanning range of the reference radar S and the to-be-calibrated radar C; the joint calibration method comprises: (1) separately acquiring point cloud data of the reference radar S and point cloud data of the to-be-calibrated radar C to obtain corresponding plane A point clouds and corresponding plane B point clouds, and calibrating a rotation matrix by means of the plane A point clouds and the plane B point clouds; (2) rotating the plane A point cloud corresponding to the to-be-calibrated radar C through the rotation matrix, and obtaining a z component of a point cloud calibration translation matrix by means of the rotated plane A point cloud corresponding to the to-be-calibrated radar C and the plane A point cloud corresponding to the reference radar S; and (3) obtaining a calibration pillar E point cloud from point cloud data of the reference radar S, and obtaining an x component and a y component of the point cloud calibration translation matrix by means of the calibration pillar E point cloud and the rotated plane A point cloud corresponding to the to-be-calibrated radar C.

Claim 8.
A joint calibration method for external parameters of vehicle-mounted laser radars, wherein a preset calibration scene comprises a plane A perpendicular to a Z-axis of a reference radar S, a plane B not perpendicular to the Z-axis of the reference radar S, and a calibration pillar E parallel to the Z-axis of the reference radar S, wherein the reference radar S and a to-be-calibrated radar C are located on a same vehicle, and the plane A, the plane B and the calibration pillar E are all located within a scanning range of the reference radar S and the to-be-calibrated radar C; the joint calibration system comprises: a first module configured to acquire point cloud data of the reference radar S and point cloud data of the to-be-calibrated radar C separately to obtain corresponding plane A point clouds and corresponding plane B point clouds, and calibrate a rotation matrix by means of the plane A point clouds and the plane B point clouds; a second module configured to rotate the plane A point cloud corresponding to the to-be-calibrated radar C through the rotation matrix, and obtain a z component of a point cloud calibration translation matrix by means of the rotated plane A point cloud corresponding to the to-be-calibrated radar C and the plane A point cloud corresponding to the reference radar S; and a third module configured to obtain a calibration pillar E point cloud from point cloud data of the reference radar S, and obtain an x component and a y component of the point cloud calibration translation matrix by means of the calibration pillar E point cloud and the rotated plane A point cloud corresponding to the to-be-calibrated radar C.

Under Step 1 of the eligibility analysis, we determine whether the claims are to a statutory category by considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter.
The above claims are considered to fall within a statutory category. Under Step 2A, Prong One, we consider whether the claim recites a judicial exception (abstract idea). In the above claim, the highlighted portion constitutes an abstract idea because, under a broadest reasonable interpretation, it recites limitations that fall into the abstract idea exceptions. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, it falls into the grouping of subject matter that, when recited as such in a claim limitation, covers performing mathematics or mental steps. Next, under Step 2A, Prong Two, we consider whether the claim that recites a judicial exception integrates that exception into a practical application. In this step, we evaluate whether the claim recites additional elements that integrate the exception into a practical application of that exception. This judicial exception is not integrated into a practical application because there is no improvement to another technology or technical field, no improvement to the functioning of the computer itself, no particular machine, and no transformation or reduction of a particular article to a different state or thing. The examiner notes that since the claimed method and system are not tied to a particular machine or apparatus, they do not represent an improvement to another technology or technical field. Similarly, there are no other meaningful limitations linking the use to a particular technological environment. Nor is there anything in the claims that indicates an improvement to the functioning of the computer itself or transforms a particular article to a new state. Finally, under Step 2B, we consider whether the additional elements are sufficient to amount to significantly more than the abstract idea. The additional element of acquiring point cloud data is considered necessary data gathering and is not sufficient to integrate the abstract idea into a practical application.
As recited in MPEP section 2106.05(g), necessary data gathering (i.e., receiving data) is considered extra-solution activity in light of Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because a first module, a second module and a third module are generic computer elements and are not considered significantly more than the abstract idea. As recited in MPEP section 2106.05(b), merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2359-60, 110 USPQ2d 1976, 1984 (2014). See also OIP Techs. v. Amazon.com, 788 F.3d 1359, 1364, 115 USPQ2d 1090, 1093-94. The claims also do not include additional elements that are sufficient to amount to significantly more than the judicial exception because vehicle-mounted laser radars are considered a field of use or technological environment which, when applied to the judicial exception, does not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application. Claims 9 and 10 recite a computer-readable medium, a computer program, and a processor. These claims recite what are considered generic computer elements, which are not sufficient to integrate the abstract idea into a practical application. Claims 2-7 further limit the abstract ideas without integrating the abstract concept into a practical application or including additional limitations that can be considered significantly more than the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (CN 110031824 B), hereinafter Huang.

Regarding Claim 1, Huang teaches wherein a preset calibration scene comprises a plane A perpendicular to a Z-axis of a reference radar S ([0046] “This unified coordinate system can be one of the coordinate systems of the two lidars or a third-party coordinate system. Suppose that point N has coordinates (x0, y0, z0) in the first coordinate system of the first lidar,” where the x and y plane are perpendicular to the z axis), a plane B not perpendicular to the Z-axis of the reference radar S ([0046] “This unified coordinate system can be one of the coordinate systems of the two lidars or a third-party coordinate system. Suppose that point N has coordinates (x0, y0, z0) in the first coordinate system of the first lidar,” where the z plane is not perpendicular to the z axis), and a calibration pillar E parallel to the Z-axis of the reference radar S ([0054] “the reference object is a calibration box with known dimensions, and the four calibration points on the reference object are four non-coplanar corner points of the calibration box, specifically represented by points A, B, C, and D.
It should be understood that once the coordinates of any one of the four calibration points are determined, the coordinates of the other three calibration points can be calculated based on the dimensions of the calibration box.” Where the box is 3D in shape [0063] “point A is the first intersection point among the four calibration points, and points B, C, and D are the three corner points on the top surface of the calibration box.”; [0064] “Optionally, the reference object can be a triangular pyramid, a polyhedron, etc. Once the coordinates of any corner point are known, the coordinates of any corner point of the reference object can be obtained based on the known side length of the reference object. Four points that are not on the same plane are selected from all the corner points as calibration points” where it would be obvious that a pillar is one of the shapes being offered as usable here.), wherein the reference radar S and a to-be-calibrated radar C are located on a same vehicle ([0041] “When multiple lidar sensors are deployed at different locations in an autonomous vehicle (taking a truck as an example) to detect obstacles, joint calibration of the multiple lidar sensors is required to fuse the environmental information acquired by the multiple lidar sensors”); and the plane A, the plane B and the calibration pillar E are all located within a scanning range of the reference radar S and the to-be-calibrated radar C ([0045] “the lidar system 20 includes two 3D lidars, namely a first lidar with 64 lines and a second lidar with 16 lines, and the reference object 30 is located within the scanning range of the two lidars. It should be understood that 16-line and 64-line refer to the laser beams emitted by the lidar within its vertical scanning range.
For example, a 16-line lidar means that the lidar emits 16 laser beams within its vertical scanning range to scan 360° horizontally and acquire information about the surrounding environment” where [0046] “This unified coordinate system can be one of the coordinate systems of the two lidars or a third-party coordinate system. Suppose that point N has coordinates (x0, y0, z0) in the first coordinate system of the first lidar,”); (1) separately acquiring point cloud data of the reference radar S and point cloud data of the to-be-calibrated radar C to obtain corresponding plane A point clouds and corresponding plane B point clouds ([0025] “In this scheme, the coordinates of the four calibration points in the first coordinate system of the lidar are obtained based on the first point cloud and the second point cloud, respectively, and the coordinates of the same four calibration points in the second coordinate system of the lidar are obtained based on the second point cloud. This is not affected by the difference in depth and density between the first point cloud and the second point cloud, which greatly improves the calibration accuracy of lidar joint calibration.”; [0067] “Based on the coordinates of the four calibration points in the first coordinate system and the second coordinate system, calculate the transformation matrix between the first coordinate system and the second coordinate system, and perform joint calibration of the first lidar and the second lidar according to the transformation matrix.”), and calibrating a rotation matrix by means of the plane A point clouds and the plane B point clouds ([0067] “Based on the coordinates of the four calibration points in the first coordinate system and the second coordinate system, calculate the transformation matrix between the first coordinate system and the second coordinate system, and perform joint calibration of the first lidar and the second lidar according to the transformation matrix.”; [0068] “A
rotation matrix is a transformation matrix that performs rotational transformations in Euclidean space. The transformation matrix includes translation and rotation. Three-dimensional transformations are similar to two-dimensional transformations, using homogeneous coordinates to describe the coordinates of points in space and their transformations. The transformation matrix describing a three-dimensional transformation of space is in the form of 4×4”); (2) rotating the plane A point cloud corresponding to the to-be-calibrated radar C through the rotation matrix, and obtaining a z component of a point cloud calibration translation matrix by means of the rotated plane A point cloud corresponding to the to-be-calibrated radar C and the plane A point cloud corresponding to the reference radar S ([0109] “Optionally, when the position of point A remains unchanged, the calibration box is rotated to the second position II, that is, the orientation of the calibration box relative to the first lidar is changed. At this time, the two sides in the first intersecting plane are divided by a vertical plane passing through point C and parallel to the X-axis, that is, point C is the dividing point, and point C is the point in the side point cloud that is closest to the Y-axis. At this time, the first coordinate axis is the Y-axis. The side point cloud is divided with the Y-axis coordinate value of point C as the dividing reference.
The point cloud with the Y-axis coordinate value of the side point cloud that is less than or equal to the dividing reference is taken as the point cloud of the first side, and the point cloud with the Y-axis coordinate value of the side point cloud that is greater than the dividing reference is taken as the point cloud of the second side”); and (3) obtaining a calibration pillar E point cloud from point cloud data of the reference radar S, and obtaining an x component and a y component of the point cloud calibration translation matrix by means of the calibration pillar E point cloud and the rotated plane A point cloud corresponding to the to-be-calibrated radar C ([0111] “Optionally, when the calibration box is located in the first position I, the second lidar scans the second intersecting plane of the calibration box. The two adjacent sides of the second intersecting plane are divided by a vertical plane that passes through point B and is parallel to the X-axis. That is, point B is the dividing point, and the first coordinate axis of the calibration box relative to the second coordinate system is the Y-axis.”).

Regarding Claim 8, Huang teaches a plane A perpendicular to a Z-axis of a reference radar S ([0046] “This unified coordinate system can be one of the coordinate systems of the two lidars or a third-party coordinate system. Suppose that point N has coordinates (x0, y0, z0) in the first coordinate system of the first lidar,” where the x and y plane are perpendicular to the z axis), a plane B not perpendicular to the Z-axis of the reference radar S ([0046] “This unified coordinate system can be one of the coordinate systems of the two lidars or a third-party coordinate system.
Suppose that point N has coordinates (x0, y0, z0) in the first coordinate system of the first lidar,” where the z plane is not perpendicular to the z axis), and a calibration pillar E parallel to the Z-axis of the reference radar S ([0054] “the reference object is a calibration box with known dimensions, and the four calibration points on the reference object are four non-coplanar corner points of the calibration box, specifically represented by points A, B, C, and D. It should be understood that once the coordinates of any one of the four calibration points are determined, the coordinates of the other three calibration points can be calculated based on the dimensions of the calibration box.” Where the box is 3D in shape [0063] “point A is the first intersection point among the four calibration points, and points B, C, and D are the three corner points on the top surface of the calibration box.”; [0064] “Optionally, the reference object can be a triangular pyramid, a polyhedron, etc. Once the coordinates of any corner point are known, the coordinates of any corner point of the reference object can be obtained based on the known side length of the reference object. Four points that are not on the same plane are selected from all the corner points as calibration points” where it would be obvious that a pillar is one of the shapes being offered as usable here.)
, wherein the reference radar S and a to-be-calibrated radar C are located on a same vehicle ([0041] “When multiple lidar sensors are deployed at different locations in an autonomous vehicle (taking a truck as an example) to detect obstacles, joint calibration of the multiple lidar sensors is required to fuse the environmental information acquired by the multiple lidar sensors”); and the plane A, the plane B and the calibration pillar E are all located within a scanning range of the reference radar S and the to-be-calibrated radar C ([0045] “the lidar system 20 includes two 3D lidars, namely a first lidar with 64 lines and a second lidar with 16 lines, and the reference object 30 is located within the scanning range of the two lidars. It should be understood that 16-line and 64-line refer to the laser beams emitted by the lidar within its vertical scanning range. For example, a 16-line lidar means that the lidar emits 16 laser beams within its vertical scanning range to scan 360° horizontally and acquire information about the surrounding environment” where [0046] “This unified coordinate system can be one of the coordinate systems of the two lidars or a third-party coordinate system. Suppose that point N has coordinates (x0, y0, z0) in the first coordinate system of the first lidar,”); a first module ([0135] “In the implementation process, each step or module of the above method can be completed by the integrated logic circuit in the hardware of the processor element or by instructions in the form of software.”, i.e., a module in the IC or the processor.)
configured to acquire point cloud data of the reference radar S and point cloud data of the to-be-calibrated radar C separately to obtain corresponding plane A point clouds and corresponding plane B point clouds ([0025] “In this scheme, the coordinates of the four calibration points in the first coordinate system of the lidar are obtained based on the first point cloud and the second point cloud, respectively, and the coordinates of the same four calibration points in the second coordinate system of the lidar are obtained based on the second point cloud. This is not affected by the difference in depth and density between the first point cloud and the second point cloud, which greatly improves the calibration accuracy of lidar joint calibration.”; [0067] “Based on the coordinates of the four calibration points in the first coordinate system and the second coordinate system, calculate the transformation matrix between the first coordinate system and the second coordinate system, and perform joint calibration of the first lidar and the second lidar according to the transformation matrix.”), and calibrate a rotation matrix by means of the plane A point clouds and the plane B point clouds ([0067] “Based on the coordinates of the four calibration points in the first coordinate system and the second coordinate system, calculate the transformation matrix between the first coordinate system and the second coordinate system, and perform joint calibration of the first lidar and the second lidar according to the transformation matrix.”; [0068] “A rotation matrix is a transformation matrix that performs rotational transformations in Euclidean space. The transformation matrix includes translation and rotation. Three-dimensional transformations are similar to two-dimensional transformations, using homogeneous coordinates to describe the coordinates of points in space and their transformations.
The transformation matrix describing a three-dimensional transformation of space is in the form of 4×4”); a second module ([0135] “In the implementation process, each step or module of the above method can be completed by the integrated logic circuit in the hardware of the processor element or by instructions in the form of software.”, i.e., a module in the IC or the processor.) configured to rotate the plane A point cloud corresponding to the to-be-calibrated radar C through the rotation matrix, and obtain a z component of a point cloud calibration translation matrix by means of the rotated plane A point cloud corresponding to the to-be-calibrated radar C and the plane A point cloud corresponding to the reference radar S ([0109] “Optionally, when the position of point A remains unchanged, the calibration box is rotated to the second position II, that is, the orientation of the calibration box relative to the first lidar is changed. At this time, the two sides in the first intersecting plane are divided by a vertical plane passing through point C and parallel to the X-axis, that is, point C is the dividing point, and point C is the point in the side point cloud that is closest to the Y-axis. At this time, the first coordinate axis is the Y-axis. The side point cloud is divided with the Y-axis coordinate value of point C as the dividing reference. The point cloud with the Y-axis coordinate value of the side point cloud that is less than or equal to the dividing reference is taken as the point cloud of the first side, and the point cloud with the Y-axis coordinate value of the side point cloud that is greater than the dividing reference is taken as the point cloud of the second side”); and a third module ([0135] “In the implementation process, each step or module of the above method can be completed by the integrated logic circuit in the hardware of the processor element or by instructions in the form of software.”, i.e., a module in the IC or the processor.)
configured to obtain a calibration pillar E point cloud from point cloud data of the reference radar S, and obtain an x component and a y component of the point cloud calibration translation matrix by means of the calibration pillar E point cloud and the rotated plane A point cloud corresponding to the to-be-calibrated radar C ([0111] “Optionally, when the calibration box is located in the first position I, the second lidar scans the second intersecting plane of the calibration box. The two adjacent sides of the second intersecting plane are divided by a vertical plane that passes through point B and is parallel to the X-axis. That is, point B is the dividing point, and the first coordinate axis of the calibration box relative to the second coordinate system is the Y-axis.”).

Regarding Claim 9, Huang teaches the limitations of claim 1. Huang further teaches a computer-readable storage medium, having a computer program stored thereon, wherein when the computer program is executed by a processor ([0138] “embodiments of the present invention provide a readable storage medium having a computer program stored thereon, which is executed by a processor to implement the method described in any of the above implementations.”).

Regarding Claim 10, Huang teaches the limitations of claim 1. Huang further teaches a computer device, comprising a memory having a computer program stored thereon, and a processor, wherein when the computer program is executed by the processor ([0139] “The aforementioned readable storage medium can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read only memory (ROM), magnetic storage, flash memory, magnetic disk or optical disk.
A readable storage medium can be any available medium that can be accessed by a general purpose or special-purpose computer.”, where the aforementioned readable storage medium refers to ([0138] “embodiments of the present invention provide a readable storage medium having a computer program stored thereon, which is executed by a processor to implement the method described in any of the above implementations.”).

Examiner’s Note

Regarding Claims 2-7, the closest prior art, Huang and Xue et al. (CN 109270534 B), hereinafter Xue, teach several limitations, as set forth below.

Regarding Claim 2, Huang teaches the limitations of claim 1. Huang further teaches (1.1) obtaining a frame of point cloud data PS of the reference radar S, defining, by window selection, a plane A point cloud PSA from the point cloud data PS ([0113] “Extracting the side point cloud of the reference object from the point cloud whose height matches the height range of the reference object includes:” where window selection of plane A is [0115] “the Z-axis of the first coordinate system is parallel to the height direction of the reference object, and the height coordinate is the Z-axis coordinate value of the point cloud in the height range of the reference object. In the first coordinate system, the Z-coordinate value of the point cloud on the top surface of the reference object is the largest, and the Z coordinate value of the point cloud on the plane where the reference object is placed is the smallest”), defining, by window selection, a plane B point cloud PSB from the point cloud data PS ([0118] “Extract the point cloud whose height coordinates are within the target range from the point cloud whose height matches the height range of the reference object, and use it as the side point cloud of the reference object.” Where [0119] “The target range of the Z coordinate is between the maximum and minimum coordinate values.
Since the Z coordinate value of the point cloud on the top surface of the reference object is the largest and the Z coordinate value of the point cloud on the plane where the reference object is placed is the smallest, the laser point whose Z coordinate value is within the target range must be located on the side of the reference object”); obtaining a frame of point cloud data PC of the to-be-calibrated radar C, defining, by window selection, a plane A point cloud PCA from the point cloud data PC ([0104] “Using the coordinates of the segmentation point in the direction of the first coordinate axis as the segmentation reference, the side point cloud is segmented to obtain the point clouds of two adjacent sides of the reference object.” Where [0107] “the two sides of the first intersecting plane are divided by a vertical plane that passes through point A and is parallel to the Y-axis, that is, point A is the dividing point. Point A is the point in the side point cloud that is closest to the X-axis. Therefore, the first coordinate axis is the X-axis. The point in the side point cloud that is closest to the X-axis is selected as the dividing point, i.e., point A is the dividing point.”), defining, by window selection, a plane B point cloud PCB from the point cloud data PC ([0109] “the two sides in the first intersecting plane are divided by a vertical plane passing through point C and parallel to the X-axis, that is, point C is the dividing point, and point C is the point in the side point cloud that is closest to the Y-axis. At this time, the first coordinate axis is the Y-axis. The side point cloud is divided with the Y-axis coordinate value of point C as the dividing reference.
The point cloud with the Y-axis coordinate value of the side point cloud that is less than or equal to the dividing reference is taken as the point cloud of the first side, and the point cloud with the Y-axis coordinate value of the side point cloud that is greater than the dividing reference is taken as the point cloud of the second side.” Where point C corresponds to plane B in this case). With regards to Claim 2, Xue teaches extracting a centripetal plane normal vector (pg. 7, paragraph 3, “The rotation and translation transformation relation of the system is calibrated by the normal vector θc,i of the plane of the calibration plate and its distance αc,i to the camera coordinate system, representing the pose of the calibration plate in the ith frame image relative to the coordinate system of the camera;” where the centripetal normal vector is used to calculate a rotation matrix that aligns, tilts, or rotates the plane to a new orientation.) However, Huang and Xue fail to disclose the following: (1.2) forming a matrix MS = [VSA, VSB] by the centripetal plane normal vector VSA of the plane A measured by the reference radar S and the centripetal plane normal vector VSB of the plane B measured by the reference radar S; forming a matrix MC = [VCA, VCB] by the centripetal plane normal vector VCA of the plane A measured by the to-be-calibrated radar C and the centripetal plane normal vector VCB of the plane B measured by the to-be-calibrated radar C; defining an overdetermination matrix H = MS * MC^T; and (1.3) obtaining an optimal estimated rotation matrix R according to the overdetermination matrix H. There is no motivation, absent applicant’s own disclosure, to modify Huang and Xue in the manner required by the pending application’s claims. Therefore, Claim 2 is allowable pending the overcoming of the 35 U.S.C. 112(b), 101 and 103 rejections of independent claim 1.
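For context on the (1.2)-(1.3) limitations discussed above: forming an overdetermination matrix H from paired plane normal vectors and then obtaining an "optimal estimated rotation matrix R" from H matches the shape of the well-known SVD-based (Kabsch-style) orientation estimate. The sketch below is illustrative only and is not taken from the application or the cited references; the function name and the choice of the SVD solution are assumptions by the editor.

```python
import numpy as np

def rotation_from_plane_normals(normals_ref, normals_cal):
    """Estimate a rotation R with R @ v_cal ~= v_ref for each paired
    unit plane normal, via the SVD-based (Kabsch-style) solution.
    This is a generic sketch, not the applicant's disclosed method."""
    M_S = np.column_stack(normals_ref)   # 3 x k matrix of reference-radar normals
    M_C = np.column_stack(normals_cal)   # 3 x k matrix of to-be-calibrated normals
    H = M_S @ M_C.T                      # 3 x 3 "overdetermination" matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against an improper (reflection) solution
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

With only the plane A and plane B normals (two linearly independent directions), H has rank 2, but the determinant correction still pins down a unique proper rotation from noise-free data.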
Similarly, no art has been applied to further dependent claims 3-7, and they are allowable pending the overcoming of the 35 U.S.C. 112(b), 101 and 103 rejections of independent claim 1.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma L. Alexander whose telephone number is (571) 270-0323. The examiner can normally be reached Monday-Friday, 8am-5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Catherine T. Rastovski, can be reached at (571) 270-0349. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /EMMA ALEXANDER/Patent Examiner, Art Unit 2857 /Catherine T.
Rastovski/Supervisory Primary Examiner, Art Unit 2857

Application/Control Number: 18/278,007 Art Unit: 2857