DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 08/02/2024 and 01/16/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Status of Claims
This action is in reply to the application filed on 08/02/2024.
Claims 1-10 are currently pending and have been examined.
Claims 1-10 are currently rejected.
This action is made NON-FINAL.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: “predictive navigation system 100”. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 2-9 recite “a predicted navigation system” in their respective preambles. “A predicted navigation system” is also recited in claim 1. It is unclear whether the applicant intends to claim two different navigation systems or to further limit the same navigation system. To overcome this rejection, the examiner suggests amending the recitation of “a predicted navigation system” in each of the dependent claims to “the predicted navigation system”. For purposes of examination, the examiner interprets all of these recitations as referring to the same predicted navigation system.
Claim 8 recites “the image frame rate,” “the frame rate,” and “the platform sensor”. There is insufficient antecedent basis for these limitations.
Claim 9 recites “a pilot,” which is also recited in claim 1. It is unclear whether the applicant intends to claim two different pilots or to further limit the same pilot. To overcome this rejection, the examiner suggests amending the recitation of “a pilot” in claim 9 to “the pilot”. For purposes of examination, the examiner interprets it as the same pilot.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Hedman et al. (US 6,157,875), hereinafter Hedman, in view of Farris et al. (US 2025/0251242), hereinafter Farris, and O’Leary et al. (US 2025/0074595), hereinafter O’Leary.
Regarding claim 1:
Hedman teaches:
A method (an image guided system and method [col 1, lines 66-67]) for generating a predictive navigation system (navigation means for guiding the weapon to the aimpoint marked on the image template. [col 2, lines 18-20]) comprising:
defining, based on a mission tasking (a pilot utilizes the aimpoint selection device 25 to identify the target aimpoint, which is subsequently marked on the digital image as described further below. The aimpoint may alternatively be selected well in advance by a mission planner, who then physically tags the aimpoint on the image from image detector 15 [col 4, lines 51-56]), a mission route (The navigational direction can be pre-planned or can be determined during flight by the pilot prior to weapon launch [col 5, lines 9-11]);
identifying a plurality of mission planning images (Means for selecting an aimpoint for the target in the digital image are provided with the invention, and preferably comprise an aimpoint selection device 25 such as a pointing device [col 4, lines 48-51]), wherein the plurality of mission planning images correspond to a predetermined mission route (the image template generating software includes program means for carrying out the operations of marking a selected aimpoint onto the digital image from image sensor 15, adding GPS coordinates for the aimpoint from GPS sensor 30 to the digital image, and generating an image template from the digital image, the aimpoint marked on the digital image, and the GPS coordinates added to the digital image. The image template generated by the programming utilizes key geographical features of the digital image which are most easily recognizable, together with the aimpoint and the GPS coordinates for the aimpoint. Preferably, the image template also includes flight orientation data for the aircraft at the time of weapon launch [col 5, lines 32-44]);
identifying a plurality of geospatial data corresponding to the plurality of mission planning images (the image template generating software includes program means for carrying out the operations of marking a selected aimpoint onto the digital image from image sensor 15, adding GPS coordinates for the aimpoint from GPS sensor 30 to the digital image, and generating an image template from the digital image, the aimpoint marked on the digital image, and the GPS coordinates added to the digital image [col 5, lines 32-38]);
providing to a model (At step 125, an image template 130 is generated by template generation software associated with mission planner processor 40. The template generation software processes the digitized image of the target area from step 105 and step 120, the flight orientation data from step 110, and the selected aimpoint and corresponding GPS coordinate from step 115, to create image template 130 [col 7, lines 14-20]):
a mission plan including a [plurality] of location markers that correspond to locations along the mission route (At step 115, the aimpoint is selected and the positional coordinates of the aimpoint are determined [col 6, lines 57-58]);
the plurality of mission planning images (At step 105 a three-dimensional or two-dimensional image of the target area is generated or acquired from one of a plurality of sources such as photographs, maps, synthetic aperture radar image, or an infrared image, which are generated by image sensor 15 or another source. The image may be generated on-board, or prior to flight [col 6, lines 42-48]), and
the plurality of geospatial data (The GPS Detector 30 can be used to determine the location of the aircraft, the target area generally, as well as the aimpoint [col 5, lines 2-4]);
operating the model to generate a plurality of predicted [time sequenced] and geo-sequenced images (At step 125, an image template 130 is generated by template generation software associated with mission planner processor 40. The template generation software processes the digitized image of the target area from step 105 and step 120, the flight orientation data from step 110, and the selected aimpoint and corresponding GPS coordinate from step 115, to create image template 130 [col 7, lines 14-20]),
wherein the plurality of [time sequenced] and geo-sequenced images depicts landscapes along the mission plan corresponding to the mission plan (The image detection algorithms evaluate and select specific features such as road edges, building edges, trees, streams and other physical characteristics to generate the image template 130 [col 7, lines 24-27]);
providing, to a platform control system, the plurality of predicted [time sequenced] and geo-sequenced images (At step 135, the image template 130 generated at step 125 is downloaded to the weapon or IGB from mission planner processor 40 via data link 50. [col 7, lines 30-34]),
wherein the plurality of predicted time sequenced and geo-sequenced images are provided to the platform control system before a mission (At step 135, the image template 130 generated at step 125 is downloaded to the weapon or IGB from mission planner processor 40 via data link 50. This step may be carried out in flight just prior to launch or prior to flight in cases where mission planner processor 40 is external to the aircraft [col 7, lines 30-34]); and
Hedman does not explicitly teach; however, Farris teaches:
a mission plan including a plurality of location markers that correspond to locations along the mission route (Tiles 101 may include various topographical features such as man-made structures 151 and natural formations 152 [0029]);
operating the model to generate a plurality of predicted time sequenced and geo-sequenced images (In an implementation, LiDAR CNN 531 and EO CNN 532 are trained on training data 545 which includes EO image data 546 and LiDAR image data 547. Training data 545 includes image data including identifiable landmarks on images of terrain [0069]),
wherein the plurality of time sequenced and geo-sequenced images depicts landscapes along the mission plan corresponding to the mission plan (The 2D point cloud image is ingested by the LiDAR-trained CNN to detect landmarks in the image. When a landmark is identified, the model outputs latitude and longitude information of the landmark. The location data can then be used to extrapolate the latitude and longitude of the aircraft when the LiDAR image was taken [0020]);
providing, to a platform control system, the plurality of predicted time sequenced and geo-sequenced images (In an implementation, LiDAR CNN 531 and EO CNN 532 are trained on training data 545 which includes EO image data 546 and LiDAR image data 547. Training data 545 includes image data including identifiable landmarks on images of terrain [0069]),
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hedman to include the teachings of Farris with a reasonable expectation of success. Both references are in the same field of endeavor of aeronautical navigation. Farris additionally teaches the benefits of “while EO images may be sufficient for landmark identification while flying in clear weather during the day, during nighttime, thermal imaging sensors may produce more useful imaging data than the EO sensors. Similarly, where Light Detection and Ranging (LiDAR) sensing may be degraded in rainy conditions, a longer wavelength imaging modality such as radar may produce more useful imaging data than LiDAR sensors. By processing image data of multiple modalities of terrain over which an aircraft is flying to ascertain the aircraft's location and direction of travel, the aircraft can continually receive reliable location data for navigation. Thus, aircraft can fly missions using autonomous navigation in low-visibility environments or in areas where Global Positioning System (GPS) signals are unavailable for navigation [Farris, 0015]”.
Hedman in view of Farris does not explicitly teach; however, O’Leary teaches:
presenting, during the mission, the predicted time sequenced and geo-sequenced images to a pilot in real mission time (One method of enabling the display methods described above may be to provide multiple aircraft sensors that cover the entire sphere around the airplane. Images (data) from these sensors may be “stitched” together to form a single, spherical image that may be viewed from the inside. A selected portion of this view may be provided to device display 136. Building on the prior discussion of multiple viewpoints, more than one spherical image can be created. Different spherical views can be paired to provide stereoscopic views. Some spherical views may be created synthetically by interpolation or extrapolation from data from aircraft sensors. These synthetic views may be from a selected or variably selected viewpoint (spherical center). In addition to providing external aircraft data by input device 104 to device display 136 [0034]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hedman in view of Farris to include the teachings of O’Leary with a reasonable expectation of success. Both references are in the same field of endeavor of detecting the external environment of an aircraft. O’Leary additionally teaches the benefits of “display device 136 may be placed in lieu or in addition to flight deck windshields. In this embodiment, a pilot may use display device 136 to navigate aircraft. In some embodiments, display device 136 may contain a transparent display, wherein pilot may use display device 136 both as a window and as a display. In some embodiments, display device 136 may provide a pilot with information surrounding the aircraft such that a pilot may make informed decisions during aerial flight [O’Leary, 0044]”.
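For illustration only, the claim 1 data flow as mapped above (a mission plan with location markers, planning images, and geospatial data provided to a model that outputs predicted time sequenced and geo-sequenced images) can be sketched as follows in Python. All names and types are hypothetical and are not drawn from the application or the cited references.

# Hypothetical sketch of the claim 1 data flow; all names are illustrative
# and are not drawn from the application or the cited references.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Image = List[List[float]]  # placeholder raster type

@dataclass
class LocationMarker:
    lat: float    # degrees
    lon: float    # degrees
    eta_s: float  # expected arrival time along the route, seconds from start

@dataclass
class MissionPlan:
    markers: List[LocationMarker]   # location markers along the mission route
    planning_images: List[Image]    # plurality of mission planning images
    geospatial_data: List[dict]     # geospatial data tied to each image

def generate_predicted_sequence(
    plan: MissionPlan,
    model: Callable[[MissionPlan, LocationMarker], Image],
) -> List[Tuple[float, Tuple[float, float], Image]]:
    """Run the model once per location marker, tagging each predicted image
    with its ETA and coordinates so the sequence is both time sequenced and
    geo-sequenced and can be replayed to the pilot in real mission time."""
    return [(m.eta_s, (m.lat, m.lon), model(plan, m)) for m in plan.markers]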
Regarding claim 2:
Hedman in view of Farris and O’Leary teaches all the limitations of claim 1, upon which this claim is dependent.
Farris further teaches:
wherein operating the model to generate a plurality of predicted time sequenced and geo-sequenced images further includes:
(a) generating a provisional set of predicted time sequenced and geo-sequenced images (Upon receiving the 3D point cloud data, LiDAR image processor 521 processes the data to produce a 2D point cloud of data from the 3D point cloud data. To produce the 2D point cloud data, LiDAR image processor 521 may geo-rectify or orthorectify the data to remove distortions from data, then project the point cloud to a ground plane. LiDAR image processor 521 then transmits the 2D point cloud data to LiDAR CNN 531. LiDAR image processor 521 also determines the altitude of the aircraft based on ranging information embodied in the image data from LiDAR sensor 511 [0072]);
(b) reviewing the provisional set of predicted time sequenced and geo-sequenced images to determine if the provisional set depicts the planned mission region to an acceptable degree (Based on location information from the LiDAR imaging and the EO imaging, the flight control system onboard aircraft 110 computes a location of aircraft 110 by weighting the location information of the two modalities according to the respective confidence levels. For example, the flight control system may extrapolate a location of the aircraft from the landmark location information of each landmark identified in the LiDAR imaging and EO imaging and generating a composite location by aggregating (e.g., averaging) the extrapolated locations weighted according to the respective confidence metrics [0036]);
(c) if the provisional set is determined not to depict the planned mission region to an acceptable degree, rerunning the model with different weightings to generate another provisional set of predicted time sequenced and geo-sequenced images (The computing device determines a location of the aircraft based on the landmark locations identified in the LiDAR imaging data and other imaging data (step 205). In an implementation, the computing device receives the location information from the CNNs in the form of latitude and longitude. The latitude and longitude of a final or composite location of the aircraft are computed as weighted averages of the latitudes and longitudes of the landmark locations. The weighting for computing the weighted averages is based on the confidence metrics determined by the respective CNNs [0043]; The aircraft location determined based on the physical sensor data may also be used to refine the output of the CNNs to improve accuracy. [0044]); and
(d) repeating steps (a) to (c) until the provisional set of predicted time sequenced and geo-sequenced imaged are determined to depict the planned mission region to an acceptable degree (In various implementations, the computing device continually acquires imaging data and processes the data to get up-to-date location information. As its present position is determined, the computing device may execute a location verification system to check or confirm the location ascertained based on the output of the CNNs. For example, the verification system may continually calculate latitude and longitude using gyroscopic, compass, IMU, and/or accelerometer data to remove false position determinations from the convolutional neural network. The aircraft location determined based on the physical sensor data may also be used to refine the output of the CNNs to improve accuracy [0044]).
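For illustration of the confidence-weighted aggregation quoted from Farris at [0036] and [0043], the following minimal Python sketch computes a composite location as a weighted average of landmark-derived estimates. The function and variable names are hypothetical; this is not Farris's actual implementation.

# Illustrative sketch only: confidence-weighted composite location in the
# spirit of Farris [0036]/[0043]. Names are hypothetical.
from typing import List, Tuple

def composite_location(
    estimates: List[Tuple[float, float, float]],  # (lat, lon, confidence)
) -> Tuple[float, float]:
    """Aggregate per-landmark (lat, lon) extrapolations into one fix by
    weighting each by its CNN confidence metric. Note: naive averaging of
    longitudes is inadequate near the antimeridian."""
    total = sum(conf for _, _, conf in estimates)
    if total == 0.0:
        raise ValueError("no confident landmark estimates")
    lat = sum(la * conf for la, _, conf in estimates) / total
    lon = sum(lo * conf for _, lo, conf in estimates) / total
    return lat, lon

# Example: one LiDAR-derived and one EO-derived fix with confidences 0.9/0.6:
# composite_location([(34.70, -86.65, 0.9), (34.71, -86.64, 0.6)])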
Regarding claim 3:
Hedman in view of Farris and O’Leary teaches all the limitations of claim 1, upon which this claim is dependent.
Farris further teaches:
wherein the model is a neural network that uses a plurality of nodes to generate the plurality of predicted time sequenced and geo-sequenced images (the aircraft navigation system processes the image to detect identifiable landmarks using a trained convolutional neural network (CNN) [0016]).
Regarding claim 4:
Hedman in view of Farris and O’Leary teaches all the limitations of claim 1, upon which this claim is dependent.
Hedman further teaches:
presenting, during the mission, the predicted time sequenced and geo-sequenced images to the pilot (At step 235, IGB processor 55 compares and correlates the image template 130 with each seeker image obtained in step 225 and processed in step 230. If a satisfactory correlation between the image template and a seeker image, step 240 below is carried out. If no correlation of the image template and the seeker image is made, step 220 is repeated wherein the image template is again scaled and rotated, and then step 235 is carried out again with the next sequential seeker image being compared to the image template [col 7, line 62 – col 8, line 3]; examiner notes that while Hedman teaches an autopilot system, it would be obvious to take the data generated by Hedman and display it as taught by O’Leary.); and
O’Leary further teaches:
presenting images captured in real time, during the mission, using a platform sensor to the pilot (display device 136 may be placed in lieu or in addition to flight deck windshields. In this embodiment, a pilot may use display device 136 to navigate aircraft. In some embodiments, display device 136 may contain a transparent display, wherein pilot may use display device 136 both as a window and as a display. In some embodiments, display device 136 may provide a pilot with information surrounding the aircraft such that a pilot may make informed decisions during aerial flight [0044]).
Regarding claim 5:
Hedman in view of Farris and O’Leary teaches all the limitations of claim 3, upon which this claim is dependent.
Hedman further teaches:
comparing the predicted time sequenced and geo-sequenced images to the images captured in real time, during the mission, using the platform sensor (At step 235, IGB processor 55 compares and correlates the image template 130 with each seeker image obtained in step 225 and processed in step 230. If a satisfactory correlation between the image template and a seeker image, step 240 below is carried out. If no correlation of the image template and the seeker image is made, step 220 is repeated wherein the image template is again scaled and rotated, and then step 235 is carried out again with the next sequential seeker image being compared to the image template [col 7, line 62 – col 8, line 3]); and
Farris further teaches:
sending an alert to the pilot if the predicted time sequenced and geo-sequenced images do not match the real time images (the independent verification system may detect significant difference between the aircraft's presently identified location and previously identified location, the verification system will flag the data as false [0024]; examiner notes that while Farris teaches an autopilot system, it would be obvious to take the data generated by Farris and display it to a pilot as taught by O’Leary.).
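For illustration of the compare-and-alert behavior mapped above, a minimal Python sketch follows. Normalized cross-correlation is used only as one plausible similarity measure; the threshold, names, and alert mechanism are hypothetical and not drawn from Hedman, Farris, or O’Leary.

# Illustrative sketch only: compare a predicted image against the live
# platform-sensor frame and alert the pilot on mismatch.
import numpy as np

def images_match(predicted: np.ndarray, captured: np.ndarray,
                 threshold: float = 0.8) -> bool:
    """Zero-mean normalized cross-correlation of two same-size frames."""
    p = predicted.astype(float) - predicted.mean()
    c = captured.astype(float) - captured.mean()
    denom = float(np.linalg.norm(p) * np.linalg.norm(c))
    if denom == 0.0:
        return False  # blank frame(s): treat as non-matching
    return float((p * c).sum()) / denom >= threshold

def check_frame(predicted: np.ndarray, captured: np.ndarray, alert) -> None:
    if not images_match(predicted, captured):
        alert("predicted imagery does not match sensor imagery")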
Regarding claim 6:
Hedman in view of Farris and O’Leary teaches all the limitations of claim 3, upon which this claim is dependent.
Hedman further teaches:
comparing the predicted time sequenced and geo-sequenced images to the images captured in real time, during the mission, using the platform sensor (At step 235, IGB processor 55 compares and correlates the image template 130 with each seeker image obtained in step 225 and processed in step 230. If a satisfactory correlation between the image template and a seeker image, step 240 below is carried out. If no correlation of the image template and the seeker image is made, step 220 is repeated wherein the image template is again scaled and rotated, and then step 235 is carried out again with the next sequential seeker image being compared to the image template [col 7, line 62 – col 8, line 3]); and
Farris further teaches:
rerouting the platform, during the mission, using the platform control system, to reduce errors between the predicted time sequenced and geo-sequenced images and the images captured using the platform sensor (Navigation system 530 may also correct the position data for the angle of the sensor, the aircraft velocity, and/or other factors which may impact the accuracy of the determinations [0075]).
Regarding claim 7:
Hedman in view of Farris and O’Leary teaches all the limitations of claim 1, upon which this claim is dependent.
Hedman further teaches:
sending the predicted time sequenced and geo-sequenced images to an inertial navigation system (At step 235, IGB processor 55 compares and correlates the image template 130 with each seeker image obtained in step 225 and processed in step 230 [col 7, lines 62-64]); and
reducing inertial sensor errors in the inertial navigation system using the predicted time sequenced and geo-sequenced images (Once a satisfactory correlation is made between the image template 130 and seeker image, step 240 is carried out in which IGB processor 55 updates the positional coordinates of the aimpoint of the IGB by using inertial navigation system 65 to calculate a setoff distance in inertial space. The setoff distance is based on or reference to the GPS navigation coordinates used in 210 and/or INS navigation coordinates used in step 215 [col 8, lines 3-11]).
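For illustration only, reducing inertial sensor error with an image-derived position fix, loosely in the spirit of Hedman's setoff-distance update [col 8, lines 3-11], can be sketched as below in Python. The function name and gain are hypothetical; this is not Hedman's actual computation.

# Illustrative sketch only: blend the inertial estimate toward an
# image-derived fix; the residual (fix - INS) is the inertial error
# being reduced. The blending gain is hypothetical.
from typing import Tuple

def corrected_ins_position(
    ins_pos: Tuple[float, float],
    image_fix: Tuple[float, float],
    gain: float = 0.5,
) -> Tuple[float, float]:
    return tuple(i + gain * (f - i) for i, f in zip(ins_pos, image_fix))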
Regarding claim 9:
Hedman in view of Farris and O’Leary teaches all the limitations of claim 1, upon which this claim is dependent.
O’Leary further teaches:
presenting, during the mission, the predicted time sequenced and geo-sequenced images to a pilot in real mission time using a multi-axis viewing system (In some embodiments, display device 136 may be located within a cockpit of aircraft. In some embodiments, display device 136 may be placed in lieu or in addition to flight deck windshields. In this embodiment, a pilot may use display device 136 to navigate aircraft. In some embodiments, display device 136 may contain a transparent display, wherein pilot may use display device 136 both as a window and as a display. In some embodiments, display device 136 may provide a pilot with information surrounding the aircraft such that a pilot may make informed decisions during aerial flight [0044]).
Regarding claim 10:
Hedman teaches:
A mission control system (fig. 1, image guided weapon system 10) comprising:
a processor (fig. 1, mission planner processor 40) configured to receive from a model (At step 125, an image template 130 is generated by template generation software associated with mission planner processor 40. The template generation software processes the digitized image of the target area from step 105 and step 120, the flight orientation data from step 110, and the selected aimpoint and corresponding GPS coordinate from step 115, to create image template 130 [col 7, lines 14-20]) a plurality of predicted time sequenced and geo-sequenced images that depict landscapes along a mission plan (the image template generating software includes program means for carrying out the operations of marking a selected aimpoint onto the digital image from image sensor 15, adding GPS coordinates for the aimpoint from GPS sensor 30 to the digital image, and generating an image template from the digital image, the aimpoint marked on the digital image, and the GPS coordinates added to the digital image [col 5, lines 32-38]) corresponding to a mission plan (a pilot utilizes the aimpoint selection device 25 to identify the target aimpoint, which is subsequently marked on the digital image as described further below. The aimpoint may alternatively be selected well in advance by a mission planner, who then physically tags the aimpoint on the image from image detector 15 [col 4, lines 51-56]);
Hedman does not explicitly teach; however, Farris teaches:
a plurality of predicted time sequenced and geo-sequenced images that depict landscapes along a mission plan (The 2D point cloud image is ingested by the LiDAR-trained CNN to detect landmarks in the image. When a landmark is identified, the model outputs latitude and longitude information of the landmark. The location data can then be used to extrapolate the latitude and longitude of the aircraft when the LiDAR image was taken [0020]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hedman to include the teachings of Farris with a reasonable expectation of success. Both references are in the same field of endeavor of aeronautical navigation. Farris additionally teaches the benefits of “while EO images may be sufficient for landmark identification while flying in clear weather during the day, during nighttime, thermal imaging sensors may produce more useful imaging data than the EO sensors. Similarly, where Light Detection and Ranging (LiDAR) sensing may be degraded in rainy conditions, a longer wavelength imaging modality such as radar may produce more useful imaging data than LiDAR sensors. By processing image data of multiple modalities of terrain over which an aircraft is flying to ascertain the aircraft's location and direction of travel, the aircraft can continually receive reliable location data for navigation. Thus, aircraft can fly missions using autonomous navigation in low-visibility environments or in areas where Global Positioning System (GPS) signals are unavailable for navigation [Farris, 0015]”.
Hedman in view of Farris does not explicitly teach; however, O’Leary teaches:
wherein the processor is further configured to present, during the mission, the predicted time sequenced and geo-sequenced images to a pilot of a vehicle in real mission time to permit the pilot of the vehicle to navigate the vehicle during execution of the mission (One method of enabling the display methods described above may be to provide multiple aircraft sensors that cover the entire sphere around the airplane. Images (data) from these sensors may be “stitched” together to form a single, spherical image that may be viewed from the inside. A selected portion of this view may be provided to device display 136. Building on the prior discussion of multiple viewpoints, more than one spherical image can be created. Different spherical views can be paired to provide stereoscopic views. Some spherical views may be created synthetically by interpolation or extrapolation from data from aircraft sensors. These synthetic views may be from a selected or variably selected viewpoint (spherical center). In addition to providing external aircraft data by input device 104 to device display 136 [0034]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hedman in view of Farris to include the teachings of O’Leary with a reasonable expectation of success. Both references are in the same field of endeavor of detecting the external environment of an aircraft. O’Leary additionally teaches the benefits of “display device 136 may be placed in lieu or in addition to flight deck windshields. In this embodiment, a pilot may use display device 136 to navigate aircraft. In some embodiments, display device 136 may contain a transparent display, wherein pilot may use display device 136 both as a window and as a display. In some embodiments, display device 136 may provide a pilot with information surrounding the aircraft such that a pilot may make informed decisions during aerial flight [O’Leary, 0044]”.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Hedman et al. (US 6,157,875), hereinafter Hedman, in view of Farris et al. (US 2025/0251242), hereinafter Farris, and O’Leary et al. (US 2025/0074595), hereinafter O’Leary, and in further view of Wang et al. (CN 112213244), hereinafter Wang.
Regarding claim 8:
Hedman in view of Farris and O’Leary teaches all the limitations of claim 1, upon which this claim is dependent.
Hedman in view of Farris and O’Leary does not explicitly teach; however, Wang teaches:
adjusting, during the mission, the image frame rate of the predicted time sequenced and geo-sequenced images to match the frame rate of the images captured using the platform sensor (the visible light video and infrared video of the invention are based on the same fixed camera source but both frame rate and size are different, designing interpolation size matching method and frame rate matching method to make two video collected by the binocular camera are used in the same moving target segmentation algorithm [page 7]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hedman in view of Farris and O’Leary to include the teachings of Wang with a reasonable expectation of success. This is the application of a known solution to achieve a predictable result, which would have been obvious to one of ordinary skill in the art.
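For illustration of the frame-rate matching taught by Wang, the following minimal Python sketch resamples a predicted image sequence to a sensor's frame rate by nearest-timestamp selection. Names are hypothetical, and Wang's interpolation-based matching method is not reproduced here.

# Illustrative sketch only: resample the predicted image sequence so its
# effective frame rate matches the platform sensor's. Names are hypothetical.
from bisect import bisect_left
from typing import Any, List, Tuple

def match_frame_rate(
    predicted: List[Tuple[float, Any]],  # (timestamp_s, image), time-sorted
    sensor_rate_hz: float,
    duration_s: float,
) -> List[Any]:
    """Emit one predicted image per sensor frame period, choosing the
    predicted frame whose timestamp is nearest each sensor timestamp
    (nearest-neighbor resampling; interpolation could be used instead)."""
    times = [t for t, _ in predicted]
    frames = []
    for k in range(int(duration_s * sensor_rate_hz)):
        t = k / sensor_rate_hz
        i = bisect_left(times, t)
        if i == 0:
            frames.append(predicted[0][1])
        elif i == len(times):
            frames.append(predicted[-1][1])
        else:
            prev_t, prev_img = predicted[i - 1]
            next_t, next_img = predicted[i]
            frames.append(next_img if (next_t - t) < (t - prev_t) else prev_img)
    return frames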
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Cristobal (US 11,410,322) discloses a device and method that perform Simultaneous Localization and Mapping (SLAM). The device includes at least one processor configured to perform the SLAM method, which includes the following operations. Preprocess, in a first processing stage, a received data sequence including multiple images recorded by a camera and sensor readings from multiple sensors in order to obtain a frame sequence. Each frame of the frame sequence includes a visual feature set related to one of the images at a determined time instance and sensor readings from that time instance. Sequentially process, in a second processing stage, each frame of the frame sequence based on the visual feature set and the sensor readings included in that frame in order to generate a sequence mapping graph. Merge, in a third processing stage, the sequence mapping graph with at least one other graph, in order to generate or update a full graph.
Ma (US 10,054,445) discloses an aerial vehicle navigated using vision-aided navigation that classifies regions of acquired still image frames as featureless or feature-rich, thereby avoiding expending time and computational resources attempting to extract and match false features from the featureless regions. The classification may be performed by computing a texture metric, as by testing widths of peaks of the autocorrelation function of a region against a threshold, which may be an adaptive threshold, or by using a model that has been trained using a machine learning method applied to a training dataset comprising training images of featureless regions and feature-rich regions. Such a machine learning method can use a support vector machine. The resultant matched feature observations can be data-fused with other sensor data to correct a navigation solution based on GPS and/or IMU data.
Whitely (US 2022/0377261) discloses thermal imaging odometry and navigation systems and related techniques for improving the operational flexibility of autonomous/unmanned vehicles. A thermal imaging odometry system includes a thermal imaging module configured to be coupled to an unmanned vehicle, a ranging sensor system fixed to the thermal imaging module, and a logic device. The thermal imaging module provides thermal imagery of a scene in view of the unmanned vehicle and centered about an optical axis of the thermal imaging module, where the optical axis is fixed relative to an orientation of the unmanned vehicle. The ranging sensor system provides ranging sensor data indicating a standoff distance between the thermal imaging module and a surface intersecting the optical axis of the thermal imaging module. The logic device receives thermal images of the scene and corresponding ranging sensor data and determines an estimated relative velocity of the unmanned vehicle.
De Bock (US 2018/0354641) discloses systems and methods comprising receiving one or more mission objectives for an aircraft mission, and condition data, at a mission execution module; generating, via the mission execution module, a mission plan executable to address at least one of the one or more mission objectives via manipulation of a power-thermal management system (PTMS); receiving the generated mission plan at the PTMS directly from the mission execution module; and automatically executing the generated mission plan to operate an aircraft.
Teng (CN 116518981) discloses an aircraft visual navigation method based on deep-learning matching and Kalman filtering. An aerial real-time image and continually updated aircraft auxiliary parameters are obtained; rough matching is performed first, and the candidate matching images obtained from rough matching are then input to a precise-matching network to extract high-dimensional features. A fast matching algorithm uses the high-dimensional features to find, on a satellite reference image, the position accurately matching the aerial real-time image, yielding the position coordinates of a plurality of homonymous points in the aerial real-time image. The position and attitude of the aircraft, i.e., the visual navigation result, are then calculated from these coordinates. Inertial navigation accumulates navigation error according to the aircraft auxiliary parameters; when the navigation error exceeds a preset threshold, a Kalman filter corrects the aircraft auxiliary parameters according to the visual navigation result before inertial navigation resumes. The method can achieve high-precision navigation in all-day, all-weather conditions.
Lou (CN 111238488) discloses a heterogeneous-image-matching method for precise aircraft localization, comprising the steps of obtaining inertial sensor information, correction, rough matching, fine matching, and aircraft position calculation. The method can determine the absolute position of the aircraft more accurately.
Conte (An Integrated UAV Navigation System Based on Aerial Image Matching - NPL) discloses the possibility of using geo-referenced satellite or aerial images to augment an Unmanned Aerial Vehicle (UAV) navigation system in case of GPS failure. A vision based navigation system which combines inertial sensors, visual odometer and registration of a UAV on-board video to a given geo-referenced aerial image has been developed and tested on real flight-test data. The experimental results show that it is possible to extract useful position information from aerial imagery even when the UAV is flying at low altitude. It is shown that such information can be used in an automated way to compensate the drift of the UAV state estimation which occurs when only inertial sensors and visual odometer are used.
Kim (Aerial Map-Based Navigation Using Semantic Segmentation and Pattern Matching - NPL) discloses a novel approach to map-based navigation for unmanned aircraft. The proposed system attempts label-to-label matching, not image-to-image matching, between aerial images and a map database. Ground objects are labelled by deep learning approaches, and the configuration of the objects is used to find the corresponding location in the map database. The use of the deep learning technique as a tool for extracting high-level features reduces the image-based localization problem to a pattern matching problem. The paper proposes a pattern matching algorithm that does not require altitude information or a camera model to estimate the absolute horizontal position. A feasibility analysis with simulated images shows that the proposed map-based navigation can be realized with the proposed pattern matching algorithm and can provide positions given the labelled objects.
Sim (Integrated Position Estimation Using Aerial Image Sequences - NPL) discloses an integrated system for navigation parameter estimation using sequential aerial images, where navigation parameters represent the position and velocity information of an aircraft for autonomous navigation. The proposed integrated system is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation recursively computes the current position of an aircraft by accumulating relative displacement estimates extracted from two successive aerial images. Simple accumulation of parameter values decreases the reliability of the extracted parameter estimates as an aircraft goes on navigating, resulting in a large position error. Therefore, absolute position estimation is required to compensate for the position error generated in relative position estimation. Absolute position estimation algorithms by image matching and digital elevation model (DEM) matching are presented. In image matching, a robust-oriented Hausdorff measure (ROHM) is employed, whereas in DEM matching the algorithm using multiple image pairs is used. Experiments with four real aerial image sequences show the effectiveness of the proposed integrated position estimation algorithm.
Vinogradov (Parallel Implementation Of Machine Vision For Aircraft Navigation - NPL) discloses machine-vision-based navigation for low-altitude air operations. An approach to navigation based on utilizing time-invariant features of aerial images, such as linear features of artificial or natural origin (roads, areal feature contours), is presented. Although there are many works in the area of remote sensing on linear feature detection, only a few consider “live” on-board usage for service needs. Theoretical aspects of using different navigation features are considered, an algorithm and processing results are presented, and recommendations for future research are proposed.
Yeromina (Method of reference image selection to provide high-speed aircraft navigation under conditions of rapid change of flight trajectory - NPL) discloses an approach to selecting the Reference Image (RI) used to determine the spatial position of Unmanned Aerial Vehicles (UAVs) equipped with a Correlation-Extreme Navigation System (CENS). The selection is made from a set of images available on board, motivated by the need to increase the speed of secondary processing of information by the CENS under high flight speeds, intensive UAV maneuvering, and multi-spectral sensors of differing resolution. The paper presents an improved model of the Decisive Function (DF) formation process for a set of reference images and uses it to formulate the problem of developing a method and algorithm for rational RI selection in the CENS secondary processing system. An iterative method and algorithm are developed that select the RI from a multidimensional matrix representation of the RI set for different altitudes, with subsequent selection by angular parameters. The method's effectiveness was confirmed by simulations using the brightness distribution of a typical fragment of the Sighting Surface (SS) image, which studied the influence of observation conditions and sensor resolution on the difference between fragments of the Current Image (CI) and the reference image. It is shown that the computational complexity of image matching in the CENS can be reduced by tens of times without loss of accuracy in determining the spatial position of a UAV; for the example considered, the developed algorithm reduces the computational complexity by 210 times.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott R Jagolinzer whose telephone number is (571)272-4180. The examiner can normally be reached M-Th 8AM - 4PM Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace, can be reached at (571)272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Scott R. Jagolinzer
Examiner
Art Unit 3665
/S.R.J./Examiner, Art Unit 3665 /CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665