Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Applicant presents Claims 1-19 for examination. The Office rejects Claims 1-19 as detailed below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The Office rejects Claim 19 and any corresponding dependent claims under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim explicitly recites “A [computer] program ….” This is software per se, which is non-statutory subject matter.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Armstrong-Crews et al. - U.S. Pub. 20220146676.

As for Claim 1, Armstrong-Crews teaches an information processing apparatus configured to synchronize, on a basis of speed information included in respective pieces of ranging information regarding moving objects corresponding to each other measured by a plurality of imaging apparatuses with use of light subjected to frequency continuous modulation, respective clock times of the plurality of imaging apparatuses (Fig.
4A, Vehicle 402 with two FMCW Lidars both tracking Return Point 422 of Object 410, ¶62|4: “Depicted in FIG. 4A is AV 402 that has multiple lidar sensors (two are shown for specificity), such as a first sensor 406 and a second sensor 407, which can be any type of a coherent (or a combination of a coherent and incoherent) lidar devices capable of sensing the distance to a reflecting surface and the radial velocity of the reflecting surface of an object in the driving environment. The sensors 406 and 407 can performs scanning of the driving environment and generate return points corresponding to various objects.” Further, (¶63|1) “In some implementations, a processing logic [i.e., an information processing apparatus] of the sensing system (e.g., sensing system 120) can synchronize the sensing frames of sensor 406 and sensor 407 so that the sensing signals are output at the same instances of time, e.g., at t, t + ∆t, t + 2∆t, t + 3∆t, etc. In other implementations the sensing frames can be staggered (for example, to reduce possible interference or to improve temporal resolution) so that one sensor outputs signals at times t, t + ∆t, t + 2∆t, t + 3∆t, whereas the other sensor outputs sensing signals at times t + ∆t/2, t + 3∆t/2, t + 5∆t/2, and so on. Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times. A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410.
”)

As for Claim 2, which depends on Claim 1, Armstrong-Crews teaches a viewpoint transformation processor configured to transform the speed information regarding the moving objects corresponding to each other into speed information as viewed from a same viewpoint on a basis of relative attitudes of the plurality of imaging apparatuses; and a clock-time corrector configured to synchronize the respective clock times of the plurality of imaging apparatuses on a basis of changes with clock time of the speed information as viewed from the same viewpoint regarding the moving objects corresponding to each other (¶63|1: “In some implementations, a processing logic [i.e., an information processing apparatus] of the sensing system (e.g., sensing system 120) can synchronize the sensing frames of sensor 406 and sensor 407 so that the sensing signals are output at the same instances of time, e.g., at t, t + ∆t, t + 2∆t, t + 3∆t, etc. In other implementations the sensing frames can be staggered (for example, to reduce possible interference or to improve temporal resolution) so that one sensor outputs signals at times t, t + ∆t, t + 2∆t, t + 3∆t, whereas the other sensor outputs sensing signals at times t + ∆t/2, t + 3∆t/2, t + 5∆t/2, and so on. Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times.
A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410.”)

As for Claim 3, which depends on Claim 2, Armstrong-Crews teaches wherein the speed information regarding the moving objects includes representative speed norms of regions including the moving objects (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times. A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410.”)

As for Claim 4, which depends on Claim 2, Armstrong-Crews teaches wherein the viewpoint transformation processor is configured to detect a correspondence relation between the moving objects on a basis of the relative attitudes of the plurality of imaging apparatuses and distances from the imaging apparatuses to the moving objects (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times.
A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410.”)

As for Claim 5, which depends on Claim 2, Armstrong-Crews teaches wherein the clock-time corrector is configured to correct an offset in clock time or an offset in cycle of each of the plurality of imaging apparatuses to cause the changes with clock time of the speed information as viewed from the same viewpoint regarding the moving objects corresponding to each other to be same (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times. A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410.”)

As for Claim 6, which depends on Claim 2, Armstrong-Crews teaches further comprising: a relative attitude detector configured to detect the relative attitudes of the plurality of imaging apparatuses on a basis of respective captured images of motionless objects captured by the plurality of imaging apparatuses, the motionless objects corresponding to each other (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times.
A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410 [object 410 can be moving at any speed or no speed, and even “motionless” objects can have a relative speed to a moving vehicle].”)

As for Claim 7, which depends on Claim 6, Armstrong-Crews teaches wherein the relative attitude detector is configured to detect a correspondence relation between the motionless objects on a basis of a correspondence relation between feature points of the motionless objects included in the captured images (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times. A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410 [object 410 can be moving at any speed or no speed, and even “motionless” objects can have a relative speed to a moving vehicle].”)

As for Claim 8, which depends on Claim 7, Armstrong-Crews teaches wherein the relative attitude detector is configured to detect the relative attitudes of the plurality of imaging apparatuses on a basis of distances from the respective imaging apparatuses of the motionless objects corresponding to each other to the feature points and a correspondence relation between positions of the feature points in the respective captured images (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other
sensor(s) even at the same times. A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410 [object 410 can be moving at any speed or no speed, and even “motionless” objects can have a relative speed to a moving vehicle].”)

As for Claim 9, which depends on Claim 6, Armstrong-Crews teaches wherein the moving objects and the motionless objects are determined by image recognition of objects included in the captured images with use of a machine learning model (¶68|1: “In some implementations, evaluation metrics, such as weights a (given to radial distance mismatches), b (given to lateral distance mismatches), c (given to radial velocity mismatches), h (given to lateral velocity mismatches), and biases, such as d (against large translational velocities) and g (against large radii of rotations), or other metrics used in the evaluation measures, can be determined using a machine learning model.”)

As for Claim 10, which depends on Claim 2, Armstrong-Crews teaches further comprising: a relative attitude detector configured to detect the relative attitudes of the plurality of imaging apparatuses on a basis of estimated self-locations of the plurality of imaging apparatuses (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times.
A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410.”)

As for Claim 11, which depends on Claim 2, Armstrong-Crews teaches wherein the ranging information includes depth images where information regarding distances from the imaging apparatuses to the moving objects is projected on captured images acquired by the imaging apparatuses (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times. A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410 [object 410 can be moving at any speed or no speed, and even “motionless” objects can have a relative speed to a moving vehicle].”)

As for Claim 12, which depends on Claim 11, Armstrong-Crews teaches wherein the ranging information includes speed images where information regarding speeds of the moving objects in directions of straight lines connecting the moving objects and the imaging apparatuses is projected on the captured images (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times.
A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410 [object 410 can be moving at any speed or no speed, and even “motionless” objects can have a relative speed to a moving vehicle].”)

As for Claim 13, which depends on Claim 1, Armstrong-Crews teaches further comprising: a model integrator configured to integrate the respective pieces of ranging information measured by the plurality of imaging apparatuses to each other in clock-time synchronization with each other (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times. A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410.”)

As for Claim 14, which depends on Claim 1, Armstrong-Crews teaches wherein the ranging information includes point cloud data including the speed information (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times.
A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410.”)

As for Claim 15, which depends on Claim 1, Armstrong-Crews teaches wherein the light subjected to frequency continuous modulation includes infrared light or near infrared light (¶25|10: “For example, ‘optical’ sensing can utilize a range of light visible to a human eye (e.g., the 380 to 700 nm wavelength range), the UV range (below 380 nm), the infrared range (above 700 nm), the radio frequency range (above 1 m), etc. In implementations, ‘optical’ and ‘light’ can include any other suitable range of the electromagnetic spectrum.”)

As for Claim 16, which depends on Claim 1, Armstrong-Crews teaches wherein the plurality of imaging apparatuses includes respective imaging elements configured to acquire captured images of the moving objects and respective ranging sensors configured to acquire the ranging information regarding the moving objects (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times.
A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410 [object 410 can be moving at any speed or no speed, and even “motionless” objects can have a relative speed to a moving vehicle].”)

As for Claim 17, which depends on Claim 16, Armstrong-Crews teaches wherein the ranging sensors are configured to acquire the ranging information regarding the moving objects when the moving objects appear in the captured images (¶63|10: “Each sensor can detect its respective point cloud which can be-due to different positioning and timing [i.e., speed information] of the sensing frames-somewhat different from the point cloud of the other sensor(s) even at the same times. A processing logic of the perception system (e.g., perception system 132) can identify, for each point of the first sensor cloud R1, the closest point of the second sensor cloud R2 and associate the two points with the same reflecting part of the object 410 [object 410 can be moving at any speed or no speed, and even “motionless” objects can have a relative speed to a moving vehicle].”)

Claims 18-19 recite substantially the same subject matter as Claim 1 and stand rejected on the same basis accordingly.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLINT THATCHER, whose telephone number is (571) 270-3588. The examiner can normally be reached Mon-Fri 9am-5:30pm ET and generally keeps a daily 2:30pm timeslot open for interviews. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant may call the examiner to set up a time or use the USPTO Automated Interview Request (AIR) system at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yuqing Xiao, can be reached at (571) 270-3603.

Though not relied on, the Office considers the additional prior art listed in the Notice of References Cited form (PTO-892) pertinent to Applicant's disclosure.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Clint Thatcher/
Examiner, Art Unit 3645

/YUQING XIAO/
Supervisory Patent Examiner, Art Unit 3645