Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of Claims
The following is a final Office action in response to the communication filed on 01/05/2026.
Claims 1-12 and 14-21 are pending and have been examined.
Claim 13 is canceled.
Claim 21 is new.
Claims 1-12 and 14-21 are rejected.
Information Disclosure Statements
The information disclosure statements submitted on 12/17/2025, 12/18/2025, and 01/06/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Response to Arguments
Regarding the Claim Objection: Claim 13 has been canceled. Accordingly, the objection has been withdrawn.
Regarding the claim rejections under 35 U.S.C. § 103: Applicant’s arguments and corresponding amendments (see pages 5-7), filed on 01/05/2026, were persuasive with regard to overcoming the previously cited rejections. However, a new ground of rejection has been made upon reviewing references provided by the Applicant in the Information Disclosure Statement filed on 12/17/2025, after the mailing of the non-final rejection on 11/04/2025. Because Au et al., “Challenges and Opportunities of Computer Vision Application in Aircraft Landing Gear,” 2022 IEEE Aerospace Conference, March 5, 2022, 10 pages, is used to create the new ground of rejection, this action has been made final. The new ground of rejection is presented for Applicant’s consideration in the Claim Rejections - 35 U.S.C. § 103 section below.
Claim Objections
Claim 1 is objected to because of the following informalities:
Claim 1, Line 1 reads, “An sensing system,” when it should read, “[[An]] A sensing system,” as previously presented in the preliminary amendment filed on 05/14/2025.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 7-12, 14, 16-18, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Gu (US 2025/0131836 A1, hereinafter Gu) in view of Franzini et al. (US 2025/0111794 A1, hereinafter Franzini), further in view of Au et al. (“Challenges and Opportunities of Computer Vision Application in Aircraft Landing Gear,” 2022 IEEE Aerospace Conference, March 5, 2022, 10 pages).
Claim 1 Discloses: (Currently Amended)
“An sensing system comprising:”
Gu teaches, (Abstract, Lines 1-2) “The present invention is directed to systems and methods for monitoring activities in an aviation environment.”
“a vehicle; and a computing device connected to a first sensor and a second sensor, wherein the first sensor and the second sensor are each mounted on the vehicle,[[;]]”
Gu teaches, (Paragraph [0111], Lines 1-11) “Referring particularly to FIGS. 1, 2 and 4, there are provided about 10 or more monitoring units 22 which each include one of each of the least two sensors 26, 28, 30 and which are provided in multiple locations throughout the aviation environment near and at airport including runway, taxiway, apron, ramp areas, passenger boarding bridges, ground service vehicles, ground support vehicles, ground crew, airport building structures including gates. Monitoring units 22 are mounted on aircraft 16 and/or in locations on ground service vehicles and/or ground support vehicles 18 and equipment, ground personnel 20.”
“wherein the computing device fuses data from the first sensor and the second sensor into a unified detection report,”
Gu teaches capability, (Paragraph [0120], Lines 3-5) “to combine the two type of sensors' information by data fusion methods,” and that, (Paragraph [0131], Lines 3-5) “processed LiDAR sensor information which can result in a detection confidence score which can be associated with the sensor information,” and after the, (Paragraph [0132], Lines 1-2 & Paragraph [0133], Lines 2-8) “system 2 has detected, identified and/or classified the objects in the aviation environment … the system 2 is configured to combine the outputs of the processed sensor information from the previous steps/stages to measure, calculate and provide an output of the estimation or prediction of one or more objects' physical properties such as position, acceleration, speed and/or travel direction of any object(s) motion.”
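For illustration only, and not as a characterization of Gu’s actual implementation, the following minimal sketch shows one way two sensors’ outputs could be combined into a single unified report entry, weighted by the per-sensor detection confidence scores Gu describes; all names and values are hypothetical.

```python
# Minimal illustrative sketch (hypothetical values and names, not Gu's
# actual method): fuse two sensors' position estimates into one unified
# detection report entry, weighting each by its detection confidence score.

def fuse_detections(estimates):
    """estimates: list of (position_m, confidence) pairs, one per sensor."""
    total = sum(conf for _, conf in estimates)
    position = sum(pos * conf for pos, conf in estimates) / total
    return {"position_m": round(position, 2), "confidence": total / len(estimates)}

# LiDAR reports an object at 42.0 m (confidence 0.9); the camera at 43.0 m (0.6):
print(fuse_detections([(42.0, 0.9), (43.0, 0.6)]))
# -> {'position_m': 42.4, 'confidence': 0.75}
```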
Gu does not explicitly teach complete automation of a taxiing operation of the aircraft in response to the information in the unified detection report.
However, under the broadest reasonable interpretation, Gu does teach an example of partial automation: generating an alert comprising suggested corrective or evasive actions for the aircraft and providing it to a human operator in response to the unified detection report.
Gu teaches, (Paragraph [0133]) “In a final processing stage … The system 2 is able to assess the properties of the compared objects with predetermined safe operation criteria and to generate an alert (in step 106, see FIG. 2) when the system 2 has determined that the predetermined safe operation criteria may or has been violated or otherwise deviated therefrom, and to provide suggested corrective or evasive actions to a number of users in the aviation environment, particularly near and at airports,” and provides an example wherein, (Paragraph [0158], Lines 5-7) “the system 2 is configured to send an alert to a pilot, ground personnel 20 to slow/stop and/or conduct collision avoidance measures.”
Gu is not explicit in teaching that the appropriate taxiing operations are carried out completely automatically. However, Gu does teach the following.
Gu additionally teaches, (Paragraph [0117]) “The terms “artificial intelligence” and “intelligent algorithms” are used herein to refer to and encompass systems of data processing and analysis that are conducted by computers capable of harvesting large amounts of possible input data, including images and other information from monitoring and sensing devices, that may be processed, analysed, and categorized based on a set of rules and then may be communicated so that appropriate action may be taken, whether automatically by a system receiving the processed and analysed data or manually by at least one human operator such as Air Traffic Control officer, pilot and emergency response team.”
Franzini does explicitly teach implementing an automated taxiing operation in the context of an aircraft on a runway.
Franzini teaches, (Abstract, Lines 1-2) “A collision avoidance system for aggregating and processing data when an aircraft is on or near the ground,” and that, (Paragraph [0090], Lines 1-3) “The method 200 leverages data fusion techniques to aggregate and correlate the detections of multiple sensing systems,” as well as that, (Paragraph [0025], Lines 6-10) “the data from the non-cooperative sensing systems may be used to discriminate obstacles that are on path (e.g., on the taxiway or runway) from ones that are out of path, and thus distinguish between obstacles representing a threat against ones which do not pose any danger to the aircraft.”
Franzini additionally teaches, (Paragraph [0103], Lines 2-5) “The system 300 is also configured to be used by a downstream system for controlling movement of the aircraft on the ground, such as in autonomous taxiing operations,” and that, (Paragraph [0086], Lines 4-7) “The alerts and indications include graphical items displayed on the HMIs 142 or data that are used by a guidance system for avoiding collisions with the detected objects.”
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the sensor data fusion system of Gu, which provides corrective actions to a human operator in the context of aircraft taxiing, such as providing a collision avoidance measure, with the system of Franzini, which can explicitly implement automated control steps in the context of aircraft taxiing after interpreting fused sensor data in order to implement collision avoidance with an obstacle, in order to yield predictable results.
Combining the references would yield the well-known convenience benefits of implementing automated control systems, as well as the safety benefits of obstacle detection and collision avoidance in the context of an aircraft taxiing operation. As Franzini describes, (Paragraph [0007]) “It is an aim to provide an improved system and method for detecting obstacles that may present a danger to an aircraft,” and that, (paragraph [0031]) “The output systems may also comprise ownship guidance systems (e.g., comprising taxi guidance systems), which are capable of providing automated control for movement of the aircraft on the aerodrome surface. As used herein, the term “ownship” may refer to one's own aircraft, e.g., the aircraft comprising a collision avoidance system.”
“, and wherein the computing device assigns the first sensor to the second sensor after computing an auction price for a detection of the first sensor.”
Gu and Franzini do not explicitly teach assigning the first sensor to the second sensor after computing an auction price for a detection of the first sensor.
Gu, however, notably teaches that, (Paragraph [0126]) “It will be understood that the person skilled in the art would be able to conduct data fusion (e.g. sensor calibration, time-syncing) by a variety of methods or algorithms,” a plurality of which are listed in Table 2, Stages 1-5, for example, a joint probabilistic data association (JPDA) tracker.
Based upon the preceding establishment of the level of ordinary skill in the art, it would have been obvious to such a person to arrive at the claimed invention in light of Au.
Au is directed to, (Title) “Challenges and Opportunities of Computer Vision Applications in Aircraft Landing Gear,” and teaches a scenario wherein, (Page 6, Right Column, 2nd Paragraph, Lines 1-5) “combining camera with LiDAR or radar, data fusion can also be achieved. As a technique used often in more autonomy applications, fusion incorporates data from different sensors, usual driven by their unique features, in order to produce a more insightful, emergent observation of the detected object.”
Au is relevant to the Applicant’s disclosure because it discloses that sensor confidence algorithms are well known in the field of fusing a camera and LiDAR together to provide vision information for an aircraft, with Au specifically implementing a voting algorithm.
Au teaches, (Page 6, Right Column, 2nd and 3rd Paragraphs) “in the highest level, often only the ultimate decisions from the sensors are extracted and then involved in a voting process to give an overall result … One major benefit of data fusion that is perhaps more apparent in the high level voting implementation in addition to providing collaborative insights not possible to obtain otherwise, is that extra layers of redundancy can be achieved.” The described voting process is not the same as an explicit auction fusion algorithm, but it is a well-known technique which solves an identical assignment problem.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement a simple substitution wherein the sensor fusion algorithms contemplated in Gu are replaced with an auction fusion algorithm, in order to yield predictable results.
The disclosures of Gu and Au differ from that of the claimed invention in that they use different algorithms to assign the sensors to one another.
However, auction fusion algorithms are known in the art as being both common and capable of solving sensor assignment problems amongst autonomous fleet vehicles in the field of robotics to bid for vision assignments; see at least Zhang (US 2022/0250654 A1), which discloses, (Paragraph [0004]) “a vehicle of a connected fleet, or a computing system of the vehicle, can be selected to act as an auction system (e.g., an auctioneer) and send an announcement offering various perception tasks to the fleet for bidding.”
Therefore, a person of ordinary skill in the art, as characterized by Gu, would have been capable of applying a different fusion technique to the familiar problem of autonomous vehicle perception, and implementing an auction fusion algorithm would have yielded predictable results.
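For clarity of the record, and not as a characterization of any cited reference’s actual implementation, the claimed assignment can be illustrated with a minimal auction-style sketch in which detections from the first sensor bid for detections from the second sensor, and the auction prices computed for the detections determine the assignment; all affinity values below are hypothetical.

```python
# Minimal illustrative sketch of an auction-based assignment between two
# sensors' detections (hypothetical affinities; not drawn from Gu, Franzini,
# Au, or Zhang). Sensor-1 detections bid for sensor-2 detections; the auction
# price computed for each detection determines the final assignment.

def auction_assign(affinity, eps=0.01):
    """affinity[i][j]: match quality of sensor-1 detection i with sensor-2 detection j."""
    n = len(affinity)
    prices = [0.0] * n            # auction price per sensor-2 detection
    owner = [None] * n            # sensor-1 detection assigned to each sensor-2 detection
    unassigned = list(range(n))   # sensor-1 detections still bidding
    while unassigned:
        i = unassigned.pop()
        values = [affinity[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=lambda j: values[j])
        best = values[j_best]
        second = max((v for j, v in enumerate(values) if j != j_best), default=best)
        prices[j_best] += best - second + eps   # raise the price by the bid margin plus eps
        if owner[j_best] is not None:
            unassigned.append(owner[j_best])    # previously assigned detection is outbid
        owner[j_best] = i
    return owner, prices

# Two LiDAR detections bidding for two camera detections:
owner, prices = auction_assign([[0.9, 0.2], [0.3, 0.8]])
print(owner)   # [0, 1]: each sensor-2 detection paired with its best sensor-1 match
```

The eps increment in the sketch is the standard device that guarantees the auction terminates with a near-optimal assignment.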
Claim 2 Discloses: (Original)
“The sensing system of claim 1, wherein the first sensor is a LiDAR sensor.”
Gu teaches, (Paragraph [0035], Lines 1-3) “The at least two sensors can include light detection and ranging sensors (LiDAR), or other types of ranging sensors, and camera sensors.”
Claim 3 Discloses: (Original)
“The sensing system of claim 2, wherein the second sensor is an optic sensor.”
Gu teaches, (Paragraph [0035], Lines 1-3) “The at least two sensors can include light detection and ranging sensors (LiDAR), or other types of ranging sensors, and camera sensors.”
Claim 4 Discloses: (Original)
“The sensing system of claim 1, further comprising a third sensor connected to the computing device and included in the unified detection report.”
Gu teaches, (Paragraph [0111], Lines 1-11) “Referring particularly to FIGS. 1, 2 and 4, there are provided about 10 or more monitoring units 22 which each include one of each of the least two sensors 26, 28, 30 and which are provided in multiple locations throughout the aviation environment near and at airport including runway, taxiway, apron, ramp areas, passenger boarding bridges, ground service vehicles, ground support vehicles, ground crew, airport building structures including gates. Monitoring units 22 are mounted on aircraft 16 and/or in locations on ground service vehicles and/or ground support vehicles 18 and equipment, ground personnel 20.” Examiner is mapping sensor (30) as the third sensor limitation presented in claim 4.
[Gu, Fig. 2 reproduced as media_image1.png (greyscale)]
Figure 2 portrays that all three sensors are connected to the computing device and fed into the data processing system. The data processing system then generates outputs as a unified detection report based upon all three sensors.
Gu teaches that the, (Paragraph [0133], Lines 2-8) “system 2 has detected, identified and/or classified the objects in the aviation environment … the system 2 is configured to combine the outputs of the processed sensor information from the previous steps/stages to measure, calculate and provide an output of the estimation or prediction of one or more objects' physical properties such as position, acceleration, speed and/or travel direction of any object(s) motion.”
Claim 5 Discloses: (Original)
“The sensing system of claim 1, wherein the vehicle is an aircraft.”
Gu teaches, (Paragraph [0003], Lines 7-8) “capability to detect, identify, and track movements of aircraft.”
Claim 7 Discloses: (Original)
“The sensing system of claim 1, wherein the computing device automates portions of the taxiing operation by sending instructions to a human operator of the vehicle.”
Gu teaches, (Paragraph [0133]) “In a final processing stage … The system 2 is able to assess the properties of the compared objects with predetermined safe operation criteria and to generate an alert (in step 106, see FIG. 2) when the system 2 has determined that the predetermined safe operation criteria may or has been violated or otherwise deviated therefrom, and to provide suggested corrective or evasive actions to a number of users in the aviation environment, particularly near and at airports,” and provides an example wherein, (Paragraph [0158], Lines 5-7) “the system 2 is configured to send an alert to a pilot, ground personnel 20 to slow/stop and/or conduct collision avoidance measures.”
Claim 8 Discloses: (Currently Amended)
“A method comprising:”
Gu teaches, (Abstract, Lines 1-2) “The present invention is directed to systems and methods for monitoring activities in an aviation environment.”
“detecting, with a computing device, a first field of view with a first sensor and a second field of view with a second sensor, each sensor mounted on a vehicle;”
Gu teaches, (Paragraph [0040]) “The at least two sensors can be housed in a monitoring unit and the monitoring unit is one of a plurality of spaced-apart monitoring units. One or more of said plurality of said monitoring units can be mounted at one or more locations throughout the aviation environment near and at an airport including runway, taxiway, apron, ramp areas, passenger boarding bridges, ground service vehicles, ground support vehicles, ground crew, airport building structures including gates, and aircraft.”
“fusing, with the computing device, the first field of view with the second field of view to form a unified detection report;”
Gu teaches capability, (Paragraph [0120], Lines 3-5) “to combine the two type of sensors' information by data fusion methods,” and that, (Paragraph [0131], Lines 3-5) “processed LiDAR sensor information which can result in a detection confidence score which can be associated with the sensor information,” and after the, (Paragraph [0132], Lines 1-2 & Paragraph [0133], Lines 2-8) “system 2 has detected, identified and/or classified the objects in the aviation environment … the system 2 is configured to combine the outputs of the processed sensor information from the previous steps/stages to measure, calculate and provide an output of the estimation or prediction of one or more objects' physical properties such as position, acceleration, speed and/or travel direction of any object(s) motion.”
“and automating, with the computing device, portions of a taxiing operation in response to information in the unified detection report,”
Gu does not explicitly teach complete automation of a taxiing operation of the aircraft in response to the information in the unified detection report.
However, under the broadest reasonable interpretation, Gu does teach an example of partial automation: generating an alert comprising suggested corrective or evasive actions for the aircraft and providing it to a human operator in response to the unified detection report.
Gu teaches, (Paragraph [0133]) “In a final processing stage … The system 2 is able to assess the properties of the compared objects with predetermined safe operation criteria and to generate an alert (in step 106, see FIG. 2) when the system 2 has determined that the predetermined safe operation criteria may or has been violated or otherwise deviated therefrom, and to provide suggested corrective or evasive actions to a number of users in the aviation environment, particularly near and at airports,” and provides an example wherein, (Paragraph [0158], Lines 5-7) “the system 2 is configured to send an alert to a pilot, ground personnel 20 to slow/stop and/or conduct collision avoidance measures.”
Gu additionally teaches, (Paragraph [0117]) “The terms “artificial intelligence” and “intelligent algorithms” are used herein to refer to and encompass systems of data processing and analysis that are conducted by computers capable of harvesting large amounts of possible input data, including images and other information from monitoring and sensing devices, that may be processed, analysed, and categorized based on a set of rules and then may be communicated so that appropriate action may be taken, whether automatically by a system receiving the processed and analysed data or manually by at least one human operator such as Air Traffic Control officer, pilot and emergency response team.”
Franzini does teach implementing a completely automated taxiing operation in the context of an aircraft on a runway.
Franzini teaches, (Abstract, Lines 1-2) “A collision avoidance system for aggregating and processing data when an aircraft is on or near the ground,” and that, (Paragraph [0090], Lines 1-3) “The method 200 leverages data fusion techniques to aggregate and correlate the detections of multiple sensing systems,” as well as that, (Paragraph [0025], Lines 6-10) “the data from the non-cooperative sensing systems may be used to discriminate obstacles that are on path (e.g., on the taxiway or runway) from ones that are out of path, and thus distinguish between obstacles representing a threat against ones which do not pose any danger to the aircraft.”
Franzini additionally teaches, (Paragraph [0103], Lines 2-5) “The system 300 is also configured to be used by a downstream system for controlling movement of the aircraft on the ground, such as in autonomous taxiing operations,” and that, (Paragraph [0086], Lines 4-7) “The alerts and indications include graphical items displayed on the HMIs 142 or data that are used by a guidance system for avoiding collisions with the detected objects.”
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the sensor data fusion system of Gu, which provides corrective actions to a human operator in the context of aircraft taxiing, such as providing a collision avoidance measure, with the system of Franzini, which can explicitly implement automated control steps in the context of aircraft taxiing after interpreting fused sensor data in order to implement collision avoidance with an obstacle, in order to yield predictable results.
Combining the references would yield the well-known convenience benefits of implementing automated control systems, as well as the safety benefits of obstacle detection and collision avoidance in the context of an aircraft taxiing operation. As Franzini describes, (Paragraph [0007]) “It is an aim to provide an improved system and method for detecting obstacles that may present a danger to an aircraft,” and that, (paragraph [0031]) “The output systems may also comprise ownship guidance systems (e.g., comprising taxi guidance systems), which are capable of providing automated control for movement of the aircraft on the aerodrome surface. As used herein, the term “ownship” may refer to one's own aircraft, e.g., the aircraft comprising a collision avoidance system.”
“wherein the computing device assigns the first sensor to the second sensor after computing an auction price for a detection of the first sensor.”
Gu and Franzini do not explicitly teach assigning the first sensor to the second sensor after computing an auction price for a detection of the first sensor.
Gu, however, notably teaches that, (Paragraph [0126]) “It will be understood that the person skilled in the art would be able to conduct data fusion (e.g. sensor calibration, time-syncing) by a variety of methods or algorithms,” a plurality of which are listed in Table 2, Stages 1-5, for example, a joint probabilistic data association (JPDA) tracker.
Based upon the preceding establishment of the level of ordinary skill in the art, it would have been obvious to such a person to arrive at the claimed invention in light of Au.
Au is directed to, (Title) “Challenges and Opportunities of Computer Vision Applications in Aircraft Landing Gear,” and teaches a scenario wherein, (Page 6, Right Column, 2nd Paragraph, Lines 1-5) “combining camera with LiDAR or radar, data fusion can also be achieved. As a technique used often in more autonomy applications, fusion incorporates data from different sensors, usual driven by their unique features, in order to produce a more insightful, emergent observation of the detected object.”
Au is relevant to the Applicant’s disclosure because it discloses that sensor confidence algorithms are well known in the field of fusing a camera and LiDAR together to provide vision information for an aircraft, with Au specifically implementing a voting algorithm.
Au teaches, (Page 6, Right Column, 2nd and 3rd Paragraphs) “in the highest level, often only the ultimate decisions from the sensors are extracted and then involved in a voting process to give an overall result … One major benefit of data fusion that is perhaps more apparent in the high level voting implementation in addition to providing collaborative insights not possible to obtain otherwise, is that extra layers of redundancy can be achieved.” The described voting process is not the same as an explicit auction fusion algorithm, but it is a well-known technique which solves an identical assignment problem.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement a simple substitution wherein the sensor fusion algorithms contemplated in Gu are replaced with an auction fusion algorithm, in order to yield predictable results.
The disclosures of Gu and Au differ from that of the claimed invention in that they use different algorithms to assign the sensors.
However, auction fusion algorithms are known in the art as being both common and capable of solving linear assignment problems amongst autonomous fleet vehicles/robotics to bid for vision assignments; see at least Zhang (US 2022/0250654 A1), which discloses, (Paragraph [0004]) “a vehicle of a connected fleet, or a computing system of the vehicle, can be selected to act as an auction system (e.g., an auctioneer) and send an announcement offering various perception tasks to the fleet for bidding.”
Therefore, a person of ordinary skill in the art, as characterized by Gu, would have been capable of applying a different fusion technique to the familiar problem of autonomous vehicle perception, and implementing an auction fusion algorithm would have yielded predictable results.
Claim 9 Discloses: (Original)
“The method of claim 8, wherein the computing device conducts signal processing on the first field of view and the second field of view independently prior to a fusion engine of the computing device fusing the respective fields of view.”
Gu teaches, (Paragraph [0111], Lines 1-11) “Referring particularly to FIGS. 1, 2 and 4, there are provided about 10 or more monitoring units 22 which each include one of each of the least two sensors 26, 28, 30 and which are provided in multiple locations throughout the aviation environment near and at airport including runway, taxiway, apron, ramp areas, passenger boarding bridges, ground service vehicles, ground support vehicles, ground crew, airport building structures including gates. Monitoring units 22 are mounted on aircraft 16 and/or in locations on ground service vehicles and/or ground support vehicles 18 and equipment, ground personnel 20.”
Gu additionally teaches, (Paragraph [0112], Lines 20-23) “the two sensor types 26, 28 work together to provide the information in both normal and challenging light conditions such as fog, low light, sun glare smoke-filled, and the like, within the sensor's field of view,” and that, (Paragraph [0118]) “Referring to FIGS. 1 and 2, the processing system 4 is further configured to process the sensor information including the following example steps of a method 100 and data processing step 104 for safe operation assessment in an aviation environment which is summarised in Table 2.”
[Gu, Table 2 reproduced as media_image2.png (greyscale)]
Table 2 portrays that sensor calibration in Step A and time synchronization in Step B, both of which are examples of sensor signal processing, occur for each sensor before data fusion occurs in Step C.
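A minimal sketch of this ordering, using hypothetical values and function names rather than Gu’s actual processing, is as follows: each sensor stream is calibrated (Step A) and time-synchronized (Step B) independently before any cross-sensor fusion (Step C).

```python
# Minimal sketch (hypothetical API and values, not Gu's actual system):
# per-sensor calibration (Step A) and time synchronization (Step B) are
# applied independently to each stream before data fusion (Step C).

def calibrate(readings, offset):                 # Step A, per sensor
    return [r - offset for r in readings]

def time_sync(readings, t0, dt):                 # Step B, per sensor
    return {round(t0 + k * dt, 3): r for k, r in enumerate(readings)}

def fuse(streams):                               # Step C, only after A and B
    common = set.intersection(*(set(s) for s in streams))
    return {t: [s[t] for s in streams] for t in sorted(common)}

lidar = time_sync(calibrate([10.2, 10.4], offset=0.2), t0=0.0, dt=0.1)
camera = time_sync(calibrate([9.9, 10.1], offset=-0.1), t0=0.0, dt=0.1)
print(fuse([lidar, camera]))   # unified report: {0.0: [10.0, 10.0], 0.1: [10.2, 10.2]}
```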
Claim 10 Discloses: (Original)
“The method of claim 8, wherein the computing device classifies at least one object with the information of the unified detection report.”
Gu teaches, (Paragraph [0129]) “In … Stage 3, as illustrated in Table 2, the processing system 4 is configured to detect and/or identify and classify objects in the aviation environment.”
Claim 11 Discloses: (Original)
“The method of claim 10, wherein the at least one object is classified as a dynamic object by the computing device.”
Gu teaches, (Paragraph [0045]) “The at least one object can be a moving or stationary object in the at least one location in the aviation environment including aircraft, ground support vehicles, ground crew, runway, taxiway, apron ramp areas, passenger boarding bridges, ground service vehicles, ground support vehicles, ground crew, airport building structures including gates, and the operating environment near and/or on the runway,” and that, (Paragraph [0057], Lines 1-4) “The processing system is preferably configured to associate the range information and the identity and/or classification information from the sensors to identify the at least one object in the field of view of the sensors.”
Claim 12 Discloses: (Original)
“The method of claim 10, wherein the at least one object is tracked over time by the computing device during the automation of the portions of the taxiing operation.”
Gu teaches, (Paragraph [0057], Lines 1-8) “The processing system is preferably configured to associate the range information and the identity and/or classification information from the sensors to identify the at least one object in the field of view of the sensors. The processing system may be configured to associate the at least one identified object with time information, thereby provides measurement and/or tracking at least one physical property of the at least one identified object over time.”
Gu additionally teaches, (Paragraph [0133]) “In a final processing stage … The system 2 is able to assess the properties of the compared objects with predetermined safe operation criteria and to generate an alert (in step 106, see FIG. 2) when the system 2 has determined that the predetermined safe operation criteria may or has been violated or otherwise deviated therefrom, and to provide suggested corrective or evasive actions to a number of users in the aviation environment, particularly near and at airports,” and provides an example wherein, (Paragraph [0158], Lines 5-7) “the system 2 is configured to send an alert to a pilot, ground personnel 20 to slow/stop and/or conduct collision avoidance measures,” which could be triggered by, for example, predicting the, (Table 4, Column 2, Lines 40-48) “likelihood of collision by monitoring distance between aircraft and other aircraft/terrain/ground vehicle/person/object and present and predicted aircraft tracks (e.g. path, travel direction, velocity). Alerts If likelihood of collision within next 20 seconds is high and persists for more than 2 seconds, generate alerts.”
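The alert criterion quoted from Gu’s Table 4 can be sketched numerically as follows, using hypothetical track values rather than Gu’s actual code: an alert issues when the predicted time to collision stays under the 20-second horizon for more than 2 seconds.

```python
# Illustrative numeric sketch of the alert rule quoted from Gu's Table 4
# (hypothetical track values): alert if a predicted collision within the
# next 20 seconds persists for more than 2 seconds.

HORIZON_S = 20.0   # look-ahead window for a predicted collision
PERSIST_S = 2.0    # how long the condition must persist before alerting

def time_to_collision(distance_m, closing_speed_mps):
    return distance_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def should_alert(samples, dt):
    """samples: (distance_m, closing_speed_mps) pairs taken every dt seconds."""
    run = 0.0
    for distance, speed in samples:
        run = run + dt if time_to_collision(distance, speed) < HORIZON_S else 0.0
        if run > PERSIST_S:
            return True
    return False

# A ground vehicle closing at 5 m/s from 80 m, sampled every 0.5 s:
print(should_alert([(80.0 - 2.5 * k, 5.0) for k in range(10)], dt=0.5))  # True
```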
Claim 14 Discloses: (Currently Amended)
“A method comprising:”
Gu teaches, (Abstract, Lines 1-2) “The present invention is directed to systems and methods for monitoring activities in an aviation environment.”
“detecting, with a computing device, a first field of view with a first sensor and a second field of view with a second sensor, each sensor mounted on an aircraft;”
Gu teaches, (Paragraph [0040]) “The at least two sensors can be housed in a monitoring unit and the monitoring unit is one of a plurality of spaced-apart monitoring units. One or more of said plurality of said monitoring units can be mounted at one or more locations throughout the aviation environment near and at an airport including runway, taxiway, apron, ramp areas, passenger boarding bridges, ground service vehicles, ground support vehicles, ground crew, airport building structures including gates, and aircraft.”
Gu additionally teaches, (Paragraph [0111], Lines 1-11) “Referring particularly to FIGS. 1, 2 and 4, there are provided about 10 or more monitoring units 22 which each include one of each of the least two sensors 26, 28, 30 and which are provided in multiple locations throughout the aviation environment near and at airport including runway, taxiway, apron, ramp areas, passenger boarding bridges, ground service vehicles, ground support vehicles, ground crew, airport building structures including gates. Monitoring units 22 are mounted on aircraft 16 and/or in locations on ground service vehicles and/or ground support vehicles 18 and equipment, ground personnel 20,” and that, (Paragraph [0057], Lines 1-4) “The processing system is preferably configured to associate the range information and the identity and/or classification information from the sensors to identify the at least one object in the field of view of the sensors,” as well as, (Paragraph [0112], Lines 20-23) “the two sensor types 26, 28 work together to provide the information in both normal and challenging light conditions such as fog, low light, sun glare smoke-filled, and the like, within the sensor's field of view.”
“fusing, with the computing device, the first field of view with the second field of view to form a unified detection report;”
Gu teaches capability, (Paragraph [0120], Lines 3-5) “to combine the two type of sensors' information by data fusion methods,” and that, (Paragraph [0131], Lines 3-5) “processed LiDAR sensor information which can result in a detection confidence score which can be associated with the sensor information,” and after the, (Paragraph [0132], Lines 1-2 & Paragraph [0133], Lines 2-8) “system 2 has detected, identified and/or classified the objects in the aviation environment … the system 2 is configured to combine the outputs of the processed sensor information from the previous steps/stages to measure, calculate and provide an output of the estimation or prediction of one or more objects' physical properties such as position, acceleration, speed and/or travel direction of any object(s) motion.”
[Gu, Table 2 reproduced as media_image2.png (greyscale)]
“generating, with the computing device, an automation task in response to information in the unified detection report;”
Gu teaches, (Paragraph [0133]) “In a final processing stage … The system 2 is able to assess the properties of the compared objects with predetermined safe operation criteria and to generate an alert (in step 106, see FIG. 2) when the system 2 has determined that the predetermined safe operation criteria may or has been violated or otherwise deviated therefrom, and to provide suggested corrective or evasive actions to a number of users in the aviation environment, particularly near and at airports,” and provides an example wherein, (Paragraph [0158], Lines 5-7) “the system 2 is configured to send an alert to a pilot, ground personnel 20 to slow/stop and/or conduct collision avoidance measures.”
“determining, with the computing device, a destination for the automation task;”
Gu teaches, (Paragraph [0146], Lines 1-8) “the system 2 is configured to determine that the comparison shows that risk of runway excursion is medium or high, i.e. runway excursion may occur in the next 15 seconds, or in the next 5 seconds, and the system is further configured to transmit at least one alert to at least one user accordingly. The user(s) could include aviation traffic control (ATC), pilots, emergency response team, and the like.”
Gu additionally teaches, (Paragraph [0147], Lines 1-14) “as illustrated in FIG. 3, the system 2 in step 214 is configured to suggest corrective or mitigation actions if necessary, i.e. if it has been determined that the risk of runway excursion is not low but is medium or high to at least one appropriate user. For example, for take-off, the system 2 is configured to send an alert to at least one pilot of the aircraft to adjust power settings to accelerate the take-off or to abort the take-off. Accordingly, for landing, the system 2 can send an alert to at least one pilot to adjust power settings to slow down (e.g. reverse thrust), deploy spoiler and/or increase tyre braking, or conduct go around or touch and go. Similarly, for a runway veer-off, the alert could be sent to at least one pilot to steer the aircraft back to centreline from an off-centreline location.”
“and automating, with the computing device, portions of a taxiing operation by executing the automation task via the destination”
Gu does not explicitly teach complete automation of a taxiing operation of the aircraft in response to the information in the unified detection report.
However, under the broadest reasonable interpretation, Gu does teach an example of partial automation: generating an alert comprising suggested corrective or evasive actions for the aircraft and providing it to a human operator in response to the unified detection report.
Franzini does teach implementing a completely automated taxiing operation in the context of an aircraft on a runway.
Franzini teaches, (Abstract, Lines 1-2) “A collision avoidance system for aggregating and processing data when an aircraft is on or near the ground,” and that, (Paragraph [0090], Lines 1-3) “The method 200 leverages data fusion techniques to aggregate and correlate the detections of multiple sensing systems,” as well as that, (Paragraph [0025], Lines 6-10) “the data from the non-cooperative sensing systems may be used to discriminate obstacles that are on path (e.g., on the taxiway or runway) from ones that are out of path, and thus distinguish between obstacles representing a threat against ones which do not pose any danger to the aircraft.”
Franzini additionally teaches, (Paragraph [0103], Lines 2-5) “The system 300 is also configured to be used by a downstream system for controlling movement of the aircraft on the ground, such as in autonomous taxiing operations,” and that, (Paragraph [0086], Lines 4-7) “The alerts and indications include graphical items displayed on the HMIs 142 or data that are used by a guidance system for avoiding collisions with the detected objects.”
Franzini additionally teaches, (Paragraph [0024], Lines 4-8) “The support systems may also comprise taxi navigation and management systems which are capable of providing information about one or more of the aircraft's position, taxi route and the trajectory of other vehicles.”
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the sensor data fusion system of Gu, which provides corrective actions to a human operator in the context of aircraft taxiing, such as providing a collision avoidance measure on the way to a destination, with the system of Franzini, which can explicitly implement automated control steps in the context of aircraft taxiing after interpreting fused sensor data in order to implement collision avoidance with an obstacle, in order to yield predictable results.
Combining the references would yield the well-known convenience benefits of implementing automated control systems, as well as the safety benefits of obstacle detection and collision avoidance in the context of an aircraft taxiing operation. As Franzini describes, (Paragraph [0007]) “It is an aim to provide an improved system and method for detecting obstacles that may present a danger to an aircraft,” and that, (paragraph [0031]) “The output systems may also comprise ownship guidance systems (e.g., comprising taxi guidance systems), which are capable of providing automated control for movement of the aircraft on the aerodrome surface. As used herein, the term “ownship” may refer to one's own aircraft, e.g., the aircraft comprising a collision avoidance system.”
“, wherein the computing device assigns the first sensor to the second sensor after computing an auction price for a detection of the first sensor.”
Gu and Franzini do not explicitly teach assigning the first sensor to the second sensor after computing an auction price for a detection of the first sensor.
Gu, however, notably teaches that, (Paragraph [0126]) “It will be understood that the person skilled in the art would be able to conduct data fusion (e.g. sensor calibration, time-syncing) by a variety of methods or algorithms,” a plurality of which are listed in Table 2, Stages 1-5, for example, a joint probabilistic data association (JPDA) tracker.
Based upon the preceding establishment of the level of ordinary skill in the art, it would have been obvious to such a person to arrive at the claimed invention in light of Au.
Au is directed to, (Title) “Challenges and Opportunities of Computer Vision Applications in Aircraft Landing Gear,” and teaches a scenario wherein, (Page 6, Right Column, 2nd Paragraph, Lines 1-5) “combining camera with LiDAR or radar, data fusion can also be achieved. As a technique used often in more autonomy applications, fusion incorporates data from different sensors, usual driven by their unique features, in order to produce a more insightful, emergent observation of the detected object.”
Au is relevant to the Applicant’s disclosure because it discloses that sensor confidence algorithms are well known in the field of fusing a camera and LiDAR together to provide vision information for an aircraft, with Au specifically implementing a voting algorithm.
Au teaches, (Page 6, Right Column, 2nd and 3rd Paragraphs) “in the highest level, often only the ultimate decisions from the sensors are extracted and then involved in a voting process to give an overall result … One major benefit of data fusion that is perhaps more apparent in the high level voting implementation in addition to providing collaborative insights not possible to obtain otherwise, is that extra layers of redundancy can be achieved.” The described voting process is not the same as an explicit auction fusion algorithm, but it is a well-known technique which solves an identical assignment problem.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement a simple substitution wherein the sensor fusion algorithms contemplated in Gu are replaced with an auction fusion algorithm, in order to yield predictable results.
The disclosures of Gu and Au differ from that of the claimed invention in that they use different algorithms to assign the sensors to one another.
However, auction fusion algorithms are known in the art as being both common and capable of solving assignment problems amongst autonomous fleet vehicles/robotics to bid for vision assignments; see at least Zhang (US 2022/0250654 A1), which discloses, (Paragraph [0004]) “a vehicle of a connected fleet, or a computing system of the vehicle, can be selected to act as an auction system (e.g., an auctioneer) and send an announcement offering various perception tasks to the fleet for bidding.”
Therefore, a person of ordinary skill in the art, as characterized by Gu, would have been capable of applying a different fusion technique to the familiar problem of autonomous vehicle perception, and implementing an auction fusion algorithm would have yielded predictable results.
Claim 16 Discloses: (Original)
“The method of claim 14, wherein the automation task is a deviation in a predetermined taxiing speed.”
Gu teaches, (Paragraph [0141], Lines 13-16) “the system 2 is configured to track aircraft position 33 along a tracked path 37, to determine the predicted path 38, based on the measured and predicted aircraft position and speed.”
Gu additionally teaches, (Paragraph [0133]) “In a final processing stage … The system 2 is able to assess the properties of the compared objects with predetermined safe operation criteria and to generate an alert (in step 106, see FIG. 2) when the system 2 has determined that the predetermined safe operation criteria may or has been violated or otherwise deviated therefrom, and to provide suggested corrective or evasive actions to a number of users in the aviation environment, particularly near and at airports,” and provides an example wherein, (Paragraph [0158], Lines 5-7) “the system 2 is configured to send an alert to a pilot, ground personnel 20 to slow/stop and/or conduct collision avoidance measures.”
Claim 17 Discloses: (Reverted to Original)
“The method of claim 14, wherein the destination is a manual operator of the aircraft.”
Gu teaches, (Paragraph [0147], Lines 1-14) “as illustrated in FIG. 3, the system 2 in step 214 is configured to suggest corrective or mitigation actions if necessary, i.e. if it has been determined that the risk of runway excursion is not low but is medium or high to at least one appropriate user. For example, for take-off, the system 2 is configured to send an alert to at least one pilot of the aircraft to adjust power settings to accelerate the take-off or to abort the take-off. Accordingly, for landing, the system 2 can send an alert to at least one pilot to adjust power settings to slow down (e.g. reverse thrust), deploy spoiler and/or increase tyre braking, or conduct go around or touch and go. Similarly, for a runway veer-off, the alert could be sent to at least one pilot to steer the aircraft back to centreline from an off-centreline location.”
Claim 18 Discloses: (Reverted to Original)
“The method of claim 14, wherein the computing device communicates the automation task to the manual operator via a visual message.”
Gu teaches, (Paragraph [0169], Lines 1-3) “as illustrated in FIG. 9 in step 414, the system 2 is configured to suggest corrective or mitigation actions if necessary,” and, (Paragraph [0171]) “can provide real-time monitoring of aviation activities, detection of unsafe aviation activity and generation of alerts, which can be displayed on at least one standalone screen or can be integrated with existing systems located in at least a cockpit of said aircraft, air traffic control towers/centres, ground control locations and airport emergency response team locations. The display format may include 3-D map and panoramic view.”
Claim 20 Discloses: (Reverted to Original)
“The method of claim 14, wherein the automation of the portions of the taxiing operation is conducted without involvement of a manual operator of the aircraft.”
Gu does not explicitly teach complete automation of the taxiing operations. However, Gu does teach the following.
Gu teaches, (Paragraph [0117]) “The terms “artificial intelligence” and “intelligent algorithms” are used herein to refer to and encompass systems of data processing and analysis that are conducted by computers capable of harvesting large amounts of possible input data, including images and other information from monitoring and sensing devices, that may be processed, analysed, and categorized based on a set of rules and then may be communicated so that appropriate action may be taken, whether automatically by a system receiving the processed and analysed data or manually by at least one human operator such as Air Traffic Control officer, pilot and emergency response team.”
Franzini does explicitly teach implementing an automated taxiing operation in the context of an aircraft on a runway.
Franzini teaches, (Paragraph [0103], Lines 2-5) “The system 300 is also configured to be used by a downstream system for controlling movement of the aircraft on the ground, such as in autonomous taxiing operations,” and that, (Paragraph [0018]) “In some examples, the collision avoidance system may support either manned or unmanned aircraft. For example, the system may support a pilot and/or flight crew on a manned aircraft, or the system may be used by a downstream system for controlling movement of the aircraft on the ground, such as in autonomous taxiing operations.”
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the sensor data fusion system of Gu, which provides corrective actions to a human operator in the context of aircraft taxiing, such as providing a collision avoidance measure, with the system of Franzini, which can explicitly implement automated control steps in the context of aircraft taxiing after interpreting fused sensor data in order to implement collision avoidance with an obstacle, in order to yield predictable results.
Combining the references would yield the well-known convenience benefits of implementing automated control systems, as well as the safety benefits of obstacle detection and collision avoidance in the context of an aircraft taxiing operation. As Franzini describes, (Paragraph [0007]) “It is an aim to provide an improved system and method for detecting obstacles that may present a danger to an aircraft,” and that, (paragraph [0031]) “The output systems may also comprise ownship guidance systems (e.g., comprising taxi guidance systems), which are capable of providing automated control for movement of the aircraft on the aerodrome surface. As used herein, the term “ownship” may refer to one's own aircraft, e.g., the aircraft comprising a collision avoidance system.”
Claim 21 Discloses: (New)
“The sensing system of claim 1, further comprising a third sensor, wherein the second sensor is a first optic sensor having a first optical configuration and the third sensor is a second optic sensor having a second optical configuration.”
Gu teaches, (Paragraph [0111], Lines 1-11) “Referring particularly to FIGS. 1, 2 and 4, there are provided about 10 or more monitoring units 22 which each include one of each of the least two sensors 26, 28, 30 and which are provided in multiple locations throughout the aviation environment near and at airport including runway, taxiway, apron, ramp areas, passenger boarding bridges, ground service vehicles, ground support vehicles, ground crew, airport building structures including gates. Monitoring units 22 are mounted on aircraft 16 and/or in locations on ground service vehicles and/or ground support vehicles 18 and equipment, ground personnel 20.” Examiner is mapping the LiDAR sensor (26) as the second sensor, which is a first optic sensor, and the camera sensor (28) as the third sensor, which is a second optic sensor, as portrayed in Fig. 2 and as presented in claim 21.
[Gu, Fig. 2 reproduced as media_image1.png (greyscale)]
Gu additionally teaches, (Paragraph [0112], Lines 28-32) “Other sensor types 30 may be provided in the monitoring unit 22 and/or information acquired using other sensor types may be provided for the purposes of enhancing the system 2 or providing redundancies.”
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Gu in view of Franzini, further in view of Au, further in view of Cros et al. (US 2016/0005319 A1, hereinafter Cros).
Claim 6 Discloses: (Original)
“The sensing system of claim 1, wherein the first sensor is an optic sensor with a first lens and the second sensor is an optic sensor with a second lens, the first lens differing from the second lens.”
Gu does not teach an explicit embodiment wherein the first sensor is an optic sensor with a first lens and the second sensor is an optic sensor with a second lens, the first lens differing from the second lens.
However, Gu does teach an embodiment wherein each sensor is an optic sensor.
Gu teaches, (Paragraph [0035], Lines 1-3) “The at least two sensors can … include camera sensors.”
Franzini does not teach the limitations of claim 6, but does teach data fusion for an aircraft taxiing operation in the context of multiple sensors with overlapping fields of view.
Franzini teaches, (Paragraph [0090]) “The method 200 leverages data fusion techniques to aggregate and correlate the detections of multiple sensing systems with the prior knowledge of the environment. For example, when the fields of view or coverage areas of two or more sensing systems overlap, the detections of objects in the overlapping regions are correlated to identify a set of unique objects in the region. This provides accurate information regarding the position and velocity of each object, leveraging the availability of multiple detections and associated measurements for the same object.”
Au does not explicitly teach the first lens differing from the second lens as claimed.
Cros does teach an aircraft taxiing operation in which multiple cameras are used to generate coverage in an overlapping view, at least one camera comprising a wide angle lens in comparison to the rest.
Cros teaches, (Abstract, Lines 1-3) “A method for assisting the piloting of an aircraft on the ground comprises the steps of obtaining a panoramic view of at least one area adjacent to the aircraft,” and that, (Paragraph [0014], Lines 1-2) “According to a variant, the method comprises a step of detection of an obstacle from the panoramic view.”
Cros additionally teaches, (Paragraph [0051], Lines 1-6) “According to another embodiment, the image acquisition device 38 comprises several cameras distributed over different areas of the fuselage 10. According to a first case, the image acquisition device 38 comprises image processing in order to reconstitute a panoramic view from the views 40 captured by the different cameras,” and that, (Paragraph [0053]) “According to another embodiment, the image acquisition device 38 comprises at least one camera with a wide angle lens.”
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the aircraft taxiing vision system of Gu, which may comprise two different optical sensors capable of identifying obstacles, the ability taught by Franzini to overlap the fields of view of different optical sensors in order to identify potential obstacles in an aircraft taxiing operation, and the sensor fusion system of Au, with the particular embodiment taught by Cros of a multiple-camera system used in an aircraft taxiing operation having at least one camera comprising a wide angle lens to improve the field of view of the overlapping images that are combined into a panoramic view which can also identify obstacles, in order to yield predictable results.
Combining the references would yield the improved field-of-view benefits of having at least one wide angle lens as part of a panoramic imaging system for obstacle detection in an aircraft taxiing operation. As Cros describes, (Paragraph [0038]) “The windscreen 22 provides the pilot with a more or less restricted field of view of the environment outside of the aircraft,” and that, (Paragraph [0049]) “Advantageously, the panoramic view shows a part of the taxiway close to at least one landing gear of the aircraft,” allowing a recognition application, (Paragraph [0087], Lines 7-9) “to recognize a possible obstacle and to emit a signal if this possible obstacle is positioned in a given area.” A wide angle lens directly contributes to the advantageous panoramic field of view.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Gu in view of Franzini, further in view of Au, further in view of Shyman et al. (US 2023/0076554 A1, hereinafter Shyman).
Claim 15 Discloses: (Original)
“The method of claim 14, wherein the automation task is a deviation in a predetermined taxiing route between a runway and a designated aircraft parking region.”
Gu, Franzini, and Au do not teach an automation task wherein the task is a deviation in a predetermined taxiing route between a runway and a designated aircraft parking region.
Shyman does teach an automation task wherein the task is a deviation in a predetermined taxiing route between a runway and a designated aircraft parking region.
Shyman teaches, (Paragraph [0025]) “FIG. 1 illustrates an example environment that includes 1) a partial airport 100 (e.g., an airport runway 102, taxiway 104, and other roads/surfaces)… systems/devices (e.g., a path planning system 112) that generate a taxiing path plan for the aircraft 108. The aircraft 108 may include an aircraft control system (e.g., a taxiing control system 114) that controls the aircraft 108 according to the generated taxiing path plan, such that the aircraft 108 may automatically traverse the generated taxiing path plan without additional operator intervention. In some implementations, the taxiing path plan may include a list of coordinates and other data for the aircraft autopilot (e.g., auto -taxiing features of the autopilot system),” and that, (Paragraph [0026], Lines 12-15) “Example starting/destination locations for a taxiing path plan may include, but are not limited to, a hangar, parking, other aircraft storage, cargo pickup/drop-off locations, and/or passenger pickup/drop-off locations.”
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the systems capable of implementing aircraft automation tasks of Gu, Franzini, and Au, with the system of Shyman which can explicitly automate a taxiing aircraft to park in a designated area, in order to yield predictable results.
Combining the references would yield the well-known convenience benefits of automation in an aircraft taxiing operation, which frequently includes, for example, the parking of an aircraft in a hangar in the day-to-day operations of an airport. As Shyman describes, (Paragraph [0099], Lines 1-9) “As described herein, the functionality associated with the taxiing control system 1400 and the path planning system 112 may be implemented for manned or unmanned aircraft. In the case of a manned aircraft (e.g., a piloted aircraft), the path planning and automatic taxiing features may be used as an alternative form of aircraft control for the onboard operator/pilot. As another example, an onboard operator/pilot may use the path planning and automatic taxiing features as a convenience feature for taxiing.”
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Gu in view of Franzini, further in view of Au, further in view of Conway (US 2021/0181764 A1, hereinafter Conway).
Claim 19 Discloses: (Original)
“The method of claim 14, wherein the destination is an automation circuit of the computing device.”
Gu does not explicitly teach a potential destination being an automation circuit of the computing device. However, Gu does teach that its system, which is capable of taking appropriate actions automatically, may serve as a potential destination.
Gu teaches, (Paragraph [0117]) “The terms “artificial intelligence” and “intelligent algorithms” are used herein to refer to and encompass systems of data processing and analysis that are conducted by computers capable of harvesting large amounts of possible input data, including images and other information from monitoring and sensing devices, that may be processed, analysed, and categorized based on a set of rules and then may be communicated so that appropriate action may be taken, whether automatically by a system receiving the processed and analysed data or manually by at least one human operator such as Air Traffic Control officer, pilot and emergency response team.”
Franzini and Au do not explicitly teach an automation circuit.
Conway does explicitly teach an automation circuit.
Conway teaches, (Abstract, Lines 1-4) “A method, computing system and computer program product are provided to identify the alignment of an aircraft with an incorrect surface, such as a taxiway or a runway that is closed or unsuited for current conditions,” and that, (Paragraph [0036], Lines 1-10) “the computing system 20, such as the processing circuitry 22, receives information regarding various operational parameters of the aircraft, such as from the aircraft, e.g., from a flight management computer, an autopilot system or the like. These operational parameters may include, for example, the speed of the aircraft, the rate of descent of the aircraft, the trajectory of the aircraft, e.g., the accuracy with which the trajectory of the aircraft must be aligned with the central axis of a runway, etc,” as well as, (Paragraph [0032], Lines 19-25) “In an instance in which the flight of the aircraft is controlled or assisted in an automated manner, such as by flight management computer, an autopilot system or the like, the computing system 20, such as the processing circuitry 22, may be configured to interact via the communication interface 26 with the flight management computer.”
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the automatic reception of data as taught by Gu with the explicit reception via an automation circuit of an aircraft traveling on a taxiway as taught by Conway, in order to yield predictable results.
Combining the references would yield the benefits of utilizing a processing circuit, which is an extremely well-known system in the art capable of executing instructions, as the receiving destination for data in the context of an aircraft traveling on a taxiway. This structure allows potential errors in operational parameters to be detected, allowing automated corrections to be made. As Conway describes, (Paragraph [0036], Lines 10-16) “The computing system 20, such as the processing circuitry 22, of this example embodiment is configured to compares the operational parameters, including the trajectory of the aircraft, with the predefined procedures of the airport in order to determine whether the operational parameters of the aircraft are consistent with and satisfy the predefined procedures of the airport,” and that, (Paragraph [0038], Lines 1-4) “Based upon the alert, the trajectory of the aircraft may be modified, such as by the pilot, by the flight management computer or autopilot system, in response to direction from an air traffic controller or the like.”
RELEVANT, BUT NOT CITED PRIOR ART
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gabriel et al. (US 2020/0118450 A1) teaches, (Paragraph [0061], Lines 10-12) “The aircraft 100 may include a plurality of sensors 618 that generate sensor data. For example, the aircraft 100 may include one or more radar systems 620, one or more electro-optical (E/O) cameras 622, one or more infrared (IR) cameras 624, and/or light detection and ranging systems (Lidar) 626. The Lidar systems 626 may measure distance to a target by illuminating the target with laser light and measuring the reflected light with a sensor. The radar systems 620 and cameras 622, 624 may detect other aircraft. Additionally, the sensors 618 (e.g., Lidar and cameras) may determine whether the runway is clear when approaching for a landing.”
Zhang (US 2022/0250654 A1) teaches, (Paragraph [0004], Lines 1-12) “a vehicle of a connected fleet, or a computing system of the vehicle, can be selected to act as an auction system (e.g., an auctioneer) and send an announcement offering various perception tasks to the fleet for bidding. The tasks can include providing information such as the bidder's position and orientation, the sender's sensor data and field-of-view information, and object detection lists (e.g., position, orientation, and motion information for detected objects). The vehicles of the fleet can determine whether they have the communication and computational capabilities to perform the tasks and, if so, the auction system can accept a bid to perform one or more tasks.”
Wang et al. (Bidding Protocols for Deploying Mobile Sensors) teaches, (Abstract) “Constructing a sensor network with a mix of mobile and static sensors can achieve a balance between sensor coverage and sensor cost. In this paper, we design two bidding protocols to guide the movement of mobile sensors in such sensor networks to increase the coverage to a desirable level. In the protocols, static sensors detect coverage holes locally by using Voronoi diagrams and bid mobile sensors to move. Mobile sensors accept the highest bids and heal the largest holes. Simulation results show that our protocols achieve suitable trade-off between coverage and sensor cost.”
Applicant's submission of an information disclosure statement under 37 CFR 1.97(c) with the fee set forth in 37 CFR 1.17(p) on 12/17/2025 prompted the new ground of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 609.04(b). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER V. GENTILE whose telephone number is (703)756-1501. The examiner can normally be reached Monday - Friday 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kito R. Robinson can be reached at (571)270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER V GENTILE/Examiner, Art Unit 3664
/KITO R ROBINSON/Supervisory Patent Examiner, Art Unit 3664