Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP2022-134460, filed on 8/25/2022.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/07/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings submitted on 8/07/2023 are in compliance with the provisions of 37 CFR 1.81. Accordingly, the drawings are being considered by the examiner.

Specification

The specification submitted on 8/07/2023 is in compliance with the provisions of 37 CFR 1.71. Accordingly, the specification is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-6, and 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (CN114332341A, "Zhou") in view of Wang et al. (WO2020194650A1, "Wang").

Regarding claim 1, Zhou teaches a foreign body detection system comprising at least one memory storing computer-executable instructions (Zhou, Para [0120], Fig 3, where memory 32 can store information and instructions that are executable by a computer); and at least one processor configured to access the at least one memory and execute the computer-executable instructions to (Zhou, Para [0121], Fig 3, where processor 31 can call and execute the instructions stored on memory 32): acquire a point cloud by controlling a fixed-point three-dimensional LiDAR scanner in such a way that the fixed-point three-dimensional LiDAR scanner performs scanning in a scanning range for scanning at least a monitoring target and generates the point cloud (Zhou, Para [0130] and [0132], Fig 4, where the first and second point clouds are reconstructed from the third and fourth light signals respectively, which, as mentioned in Para [0094], are the first and second images of a depth 3D camera); and align the point cloud and known three-dimensional shape data about the monitoring target with each other (Zhou, Para [0130] and [0132], Fig 4, where the first and second point clouds are reconstructed from the third and fourth light signals respectively, which, as mentioned in Para [0094], are the first and second images of a depth 3D camera). However, Zhou does not teach detecting a foreign body in contact with the monitoring target, based on the point cloud and the aligned three-dimensional shape data.

On the other hand, Wang teaches looking at pixels that exceed a threshold from a scanned point cloud to determine foreign object data (Wang, Para [0025], Fig 3, where the point cloud extraction unit 155 looks at pixels exceeding a predetermined threshold in the significance map to determine a foreign object from voxel data). Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified the foreign body detection system of Zhou in view of Wang by applying the use of a foreign object detection unit that uses clusters of voxel data from point cloud data to identify a foreign object. See MPEP 2141.III KSR Rationale D.
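As a purely illustrative aside, and not part of the claims, the cited references, or the record, the alignment limitation mapped above (registering a scanned point cloud against known three-dimensional shape data) could be sketched as a rigid Kabsch/SVD registration under the simplifying assumption that the two point sets are already in one-to-one correspondence; a fielded system would more likely use an iterative method such as ICP, and every function name and value below is hypothetical.

```python
# Minimal sketch only: rigid alignment of a scanned point cloud to known
# reference shape data via the Kabsch (SVD) method, assuming the two sets
# are already in correspondence. Names and values are hypothetical.
import numpy as np

def align_point_cloud(scan: np.ndarray, reference: np.ndarray):
    """Return rotation R and translation t that best map `scan` onto `reference`.

    Both arrays are (N, 3) and assumed to be in one-to-one correspondence.
    """
    scan_centroid = scan.mean(axis=0)
    ref_centroid = reference.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (scan - scan_centroid).T @ (reference - ref_centroid)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ref_centroid - R @ scan_centroid
    return R, t

# Example: align a noisy, shifted copy of a reference surface patch.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 10.0, size=(500, 3))
scan = reference + np.array([0.5, -0.2, 0.1]) + rng.normal(0.0, 0.01, size=(500, 3))
R, t = align_point_cloud(scan, reference)
aligned = scan @ R.T + t   # aligned now lies on the reference, up to noise
```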
Regarding claim 2, Zhou in view of Wang teaches the foreign body detection system according to claim 1, wherein the at least one processor is further configured to execute the instructions to: obtain, for each point included in the point cloud, a deviation amount from the monitoring target defined by the three-dimensional shape data aligned with the point cloud, and determine that the foreign body is present at the point when the deviation amount is equal to or more than a predetermined value (Wang, Para [0025], Fig 3, where the point cloud extraction unit 155 looks at pixels exceeding a predetermined threshold in the significance map to determine a foreign object from voxel data).

Regarding claim 3, Zhou in view of Wang teaches the foreign body detection system according to claim 1, wherein the at least one processor is further configured to execute the instructions to: acquire a first point cloud by controlling the fixed-point three-dimensional LiDAR scanner in such a way that the fixed-point three-dimensional LiDAR scanner performs scanning at a first scanning density (Zhou, Para [0129], Fig 4, where the first point cloud density is lower than the second point cloud density); acquire a second point cloud by controlling the fixed-point three-dimensional LiDAR scanner in such a way that the fixed-point three-dimensional LiDAR scanner performs scanning at a second scanning density being a density higher than the first scanning density (Zhou, Para [0129], Fig 4, where the first point cloud density is lower than the second point cloud density); align the first point cloud and the known three-dimensional shape data about the monitoring target with each other (Zhou, Para [0130] and [0132], Fig 4, where the first and second point clouds are reconstructed from the third and fourth light signals respectively, which, as mentioned in Para [0094], are the first and second images of a depth 3D camera); and detect a foreign body in contact with the monitoring target, based on the second point cloud and the aligned three-dimensional shape data (Wang, Para [0025], Fig 3, where the point cloud extraction unit 155 looks at pixels exceeding a predetermined threshold in the significance map to determine a foreign object from voxel data).

Regarding claim 5, Zhou in view of Wang teaches the foreign body detection system according to claim 3, wherein the at least one processor is further configured to execute the instructions to: update the three-dimensional shape data, based on the second point cloud (Zhou, Para [0154], Fig 4, where in step S1022 the second point cloud data can be mapped onto the preset reference surface to obtain a two-dimensional image, which, in combination with the depth information as disclosed in Para [0155], provides data in three dimensions).
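For illustration only, the deviation-amount limitation of claim 2 mapped above (a per-point distance from the aligned reference shape, compared against a predetermined value) could be sketched as follows; the brute-force distance computation, the threshold, and all names are hypothetical and are not drawn from the claims or the cited references.

```python
# Minimal sketch only: per-point deviation of an aligned scan from reference
# shape data, flagging points whose nearest-reference distance is equal to or
# more than a predetermined value. Brute-force distances keep the example
# dependency-free; a KD-tree would be used on realistic cloud sizes.
import numpy as np

def deviation_amounts(aligned_scan: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """For each scan point, distance to its nearest reference point (both (N, 3))."""
    diffs = aligned_scan[:, None, :] - reference[None, :, :]   # (N_scan, N_ref, 3)
    return np.linalg.norm(diffs, axis=2).min(axis=1)

def foreign_body_points(aligned_scan, reference, threshold=0.05):
    """Return the scan points whose deviation meets or exceeds `threshold`."""
    dev = deviation_amounts(aligned_scan, reference)
    return aligned_scan[dev >= threshold]

# Example: a flat reference surface with one small object resting on it.
rng = np.random.default_rng(1)
reference = np.column_stack([rng.uniform(0.0, 10.0, (1000, 2)), np.zeros(1000)])
scan = reference.copy()
scan[:30, 2] += 0.3        # 30 points raised above the surface (the "foreign body")
candidates = foreign_body_points(scan, reference, threshold=0.05)
print(len(candidates))     # -> 30
```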
Regarding claim 6, Zhou in view of Wang teaches the foreign body detection system according to claim 3, wherein the at least one processor is further configured to execute the instructions to: align the second point cloud and the three-dimensional shape data with each other (Zhou, Para [0130] and [0132], Fig 4, where the first and second point clouds are reconstructed from the third and fourth light signals respectively, which, as mentioned in Para [0094], are the first and second images of a depth 3D camera).

Regarding claim 8, Zhou in view of Wang teaches the foreign body detection system according to claim 1, wherein the at least one processor is further configured to execute the instructions to: obtain, for each point included in the point cloud, a deviation amount from the monitoring target defined by the three-dimensional shape data aligned with the point cloud, estimate the point indicating a foreign body when the deviation amount is equal to or more than a predetermined value, and generate a foreign body candidate point cloud being a group of foreign body candidate points as estimated points (Wang, Para [0025], Fig 3, where the point cloud extraction unit 155 looks at pixels exceeding a predetermined threshold in the significance map to determine a foreign object from voxel data, which is then registered in a foreign object map as disclosed in Para [0026]); divide the monitoring target into a lattice pattern in a plan view (Zhou, Para [0103], Fig 2, where the first light signal can be projected with a grid pattern), and then count, for each of a plurality of minute ranges generated by division, the foreign body candidate point included in the minute range (Wang, Para [0026], Fig 3, where the foreign object unit 159 takes a multitude of voxels in a cluster to determine the number of foreign objects in a range); and determine, for each of the plurality of minute ranges, that a foreign body is present within the minute range when a count number of the foreign body candidate point exceeds a predetermined value (Wang, Para [0025], Fig 3, where the point cloud extraction unit 155 looks at pixels exceeding a predetermined threshold in the significance map to determine a foreign object from voxel data, which is then registered in a foreign object map as disclosed in Para [0026]).
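As another illustrative aside, the lattice-counting limitation of claim 8 mapped above (divide the plan view into minute ranges, count candidate points per range, and declare a foreign body where the count exceeds a predetermined value) could be sketched as a simple grid histogram; the cell size, count threshold, and all names are hypothetical and are not taken from the claims or the cited references.

```python
# Minimal sketch only: divide the monitoring area into a plan-view grid,
# count foreign-body candidate points per cell, and flag cells whose count
# exceeds a predetermined value. Cell size and threshold are hypothetical.
import numpy as np

def flag_grid_cells(candidates: np.ndarray, cell_size=1.0, min_count=10):
    """Map (N, 3) candidate points to plan-view (ix, iy) cells and flag dense cells."""
    cells = np.floor(candidates[:, :2] / cell_size).astype(int)
    unique_cells, counts = np.unique(cells, axis=0, return_counts=True)
    return {(int(c[0]), int(c[1])): int(n)
            for c, n in zip(unique_cells, counts) if n > min_count}

# Example: 40 candidate points clustered in one cell, a few scattered elsewhere.
rng = np.random.default_rng(2)
cluster = np.column_stack([rng.uniform(3.0, 3.5, (40, 2)), rng.uniform(0.1, 0.3, 40)])
noise = np.column_stack([rng.uniform(0.0, 10.0, (5, 2)), rng.uniform(0.1, 0.3, 5)])
flagged = flag_grid_cells(np.vstack([cluster, noise]), cell_size=1.0, min_count=10)
print(flagged)   # e.g. {(3, 3): 40} -- only the cell containing the cluster is flagged
```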
Regarding claim 9, Zhou teaches a foreign body detection device comprising at least one memory storing computer-executable instructions (Zhou, Para [0120], Fig 3, where memory 32 can store information and instructions that are executable by a computer); and at least one processor configured to access the at least one memory and execute the computer-executable instructions to (Zhou, Para [0121], Fig 3, where processor 31 can call and execute the instructions stored on memory 32): acquire a point cloud by controlling a fixed-point three-dimensional LiDAR scanner in such a way that the fixed-point three-dimensional LiDAR scanner performs scanning in a scanning range for scanning at least a monitoring target and generates the point cloud (Zhou, Para [0130] and [0132], Fig 4, where the first and second point clouds are reconstructed from the third and fourth light signals respectively, which, as mentioned in Para [0094], are the first and second images of a depth 3D camera); and align the point cloud and known three-dimensional shape data about the monitoring target with each other (Zhou, Para [0130] and [0132], Fig 4, where the first and second point clouds are reconstructed from the third and fourth light signals respectively, which, as mentioned in Para [0094], are the first and second images of a depth 3D camera). However, Zhou does not teach detecting a foreign body in contact with the monitoring target, based on the point cloud and the aligned three-dimensional shape data. On the other hand, Wang teaches looking at pixels that exceed a threshold from a scanned point cloud to determine foreign object data (Wang, Para [0025], Fig 3, where the point cloud extraction unit 155 looks at pixels exceeding a predetermined threshold in the significance map to determine a foreign object from voxel data). Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified the foreign body detection system of Zhou in view of Wang by applying the use of a foreign object detection unit that uses clusters of voxel data from point cloud data to identify a foreign object. See MPEP 2141.III KSR Rationale D.

Regarding claim 10, Zhou teaches a computer-implemented foreign body detection method performed by a computer, the foreign body detection method comprising: acquiring a point cloud by controlling a fixed-point three-dimensional LiDAR scanner in such a way that the fixed-point three-dimensional LiDAR scanner performs scanning in a scanning range for scanning at least a monitoring target and generates the point cloud (Zhou, Para [0130] and [0132], Fig 4, where the first and second point clouds are reconstructed from the third and fourth light signals respectively, which, as mentioned in Para [0094], are the first and second images of a depth 3D camera, accessible by a computer as disclosed in Para [0120]); and aligning the point cloud and known three-dimensional shape data about the monitoring target with each other (Zhou, Para [0130] and [0132], Fig 4, where the first and second point clouds are reconstructed from the third and fourth light signals respectively, which, as mentioned in Para [0094], are the first and second images of a depth 3D camera, accessible by a computer as disclosed in Para [0120]). However, Zhou does not teach detecting a foreign body in contact with the monitoring target, based on the point cloud and the aligned three-dimensional shape data. On the other hand, Wang teaches looking at pixels that exceed a threshold from a scanned point cloud to determine foreign object data (Wang, Para [0025], Fig 3, where the point cloud extraction unit 155 looks at pixels exceeding a predetermined threshold in the significance map to determine a foreign object from voxel data, accessible by a computer as disclosed in Para [0120]). Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified the foreign body detection system of Zhou in view of Wang by applying the use of a foreign object detection unit that uses clusters of voxel data from point cloud data to identify a foreign object. See MPEP 2141.III KSR Rationale D.
Regarding claim 11, Zhou in view of Wang teaches a non-transitory computer readable medium storing a program for causing a computer to execute the computer-implemented foreign body detection method according to claim 10 (Zhou, Para [0225], where instructions can be stored on a computer-readable medium 101, or a computer-readable storage medium as disclosed in Para [0227]).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Wang and Grauer et al. (JP 2015527761 A, "Grauer").

Regarding claim 4, Zhou in view of Wang teaches the foreign body detection system according to claim 3, wherein the at least one processor is further configured to execute the instructions. However, Zhou in view of Wang does not teach: acquire the first point cloud by controlling the fixed-point three-dimensional LiDAR scanner in such a way that the fixed-point three-dimensional LiDAR scanner performs scanning at the first scanning density in a first scanning range for scanning at least the monitoring target; and acquire the second point cloud by controlling the fixed-point three-dimensional LiDAR scanner in such a way that the fixed-point three-dimensional LiDAR scanner performs scanning at the second scanning density in a second scanning range for scanning at least the monitoring target, the second scanning range being narrower than the first scanning range. On the other hand, Grauer teaches the use of multiple fields of view when scanning for an object at different depths, covering both narrower and wider scanning ranges (Grauer, Para [0044], Fig 18, where the wide FOI 140 and the narrow FOI disclosed correspond to the first and second point clouds of Zhou, Para [0130] and [0132], Fig 4, respectively, when scanning for a target object). Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified the foreign body detection system of Zhou in view of Wang and Grauer by applying the wider scanning range for the first point cloud and the narrower scanning range for the second point cloud to observe objects in a variety of conditions. See MPEP 2141.III KSR Rationale G.
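For illustration only, the coarse-to-fine scheme at issue in claim 4 (a wide, low-density first scan followed by a narrower, higher-density second scan around regions of interest) could be sketched as below; the ScanRequest structure, the angular-step parameter, and all values are hypothetical and are not drawn from the claims or the cited references.

```python
# Minimal sketch only: derive a narrower, denser second scan from coarse
# first-pass results, in the spirit of a coarse-to-fine scanning scheme.
# The scanner-control dataclass and parameter names are hypothetical.
from dataclasses import dataclass

@dataclass
class ScanRequest:
    x_range: tuple            # (min, max) of the plan-view scan window, in metres
    y_range: tuple
    angular_step_deg: float   # smaller step -> higher point-cloud density

def plan_second_scan(flagged_cells, cell_size=1.0, margin=0.5, fine_step=0.05):
    """Build a second, narrower scan window covering the flagged grid cells."""
    xs = [c[0] for c in flagged_cells]
    ys = [c[1] for c in flagged_cells]
    return ScanRequest(
        x_range=(min(xs) * cell_size - margin, (max(xs) + 1) * cell_size + margin),
        y_range=(min(ys) * cell_size - margin, (max(ys) + 1) * cell_size + margin),
        angular_step_deg=fine_step,   # denser than the coarse first pass
    )

# Example: coarse pass over the full area, fine pass around one flagged cell.
first_pass = ScanRequest(x_range=(0.0, 10.0), y_range=(0.0, 10.0), angular_step_deg=0.5)
second_pass = plan_second_scan([(3, 3)])
# second_pass covers roughly x, y in [2.5, 4.5] at 10x the angular density of the first pass.
```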
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Wang and Steine (JP 2022510345 A, "Steine").

Regarding claim 7, Zhou in view of Wang teaches the foreign body detection system according to claim 1. However, Zhou in view of Wang does not teach wherein the monitoring target includes a runway or a taxiway. On the other hand, Steine teaches the use of a detection device on a runway or taxiway for the safe landing of aircraft at airports (Steine, Para [0047], Fig 1, where detection unit 20 can monitor the terrain around device 1, which includes a runway or taxiway as disclosed in Para [0043]). Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified the foreign body detection system of Zhou in view of Wang and Steine by using the foreign body detection system on a runway or taxiway for the safe landing of aircraft at airports. See MPEP 2141.III KSR Rationale D.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAKI HAWKINS, whose telephone number is (571) 272-6595. The examiner can normally be reached Monday-Friday, 7:30am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, YUQING XIAO, can be reached at (571) 270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZAKI KEHINDE HAWKINS/
Examiner, Art Unit 3645

/YUQING XIAO/
Supervisory Patent Examiner, Art Unit 3645