DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-3, 7, 9-13, 17, 19-26, and 33-34 are pending and examined below. This action is in response to the claims filed 2/4/26.
Response to Amendment
Applicant’s arguments regarding the 35 U.S.C. § 103 rejections (see Applicant Remarks filed 2/4/26) are persuasive in view of the amendments filed 2/4/26.
However, upon further consideration, new grounds of rejection are made in view of Altman (US 2021/0114616) below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 7, 9-11, 13, 17, and 19-26 are rejected under 35 U.S.C. 103 as being unpatentable over Corban et al. (US 2018/0290748) in view of Altman (US 2021/0114616).
Regarding claims 1 and 11, Corban discloses an autonomous aircraft system including a system/method for remote operation of a vehicle by a human operator, comprising (Abstract and ¶18):
the vehicle comprising a fly-by-wire actuation of a control, wherein the fly-by-wire actuation is implemented by an on-board processor that is configured to (¶15-18 - the mission computer can be located on-board the sUAS for generating guidance commands):
i) receive a command from the human operator located at a control station via a bidirectional wireless communications system (¶14-15 and ¶43 – Secure, high-bandwidth communications between the drone and the operator station can be achieved using miniature, disposable ad hoc radio network nodes intelligently released from the aircraft as it progresses through the space for receiving manual control instructions from an operator at the operator station corresponding to the recited control station),
ii) interpret the command into a desired vehicle state (¶14-15 - onboard mission computer generates control signals from the manual user commands corresponding to the recited interpret the command into a desired vehicle state),
iii) calculate a position for one or more elements of the control interface based at least in part on the desired vehicle state (¶14-15 - Guidance commands are generated by the mission computer and transmitted to the autopilot 104 to autonomously maneuver the sUAS corresponding to the recited calculate a position for one or more elements of the control interface based at least in part on the desired vehicle state);
the bidirectional wireless communications system configured to transmit the command from the control station to the vehicle, and receive data related to the vehicle's state and an environment from the vehicle (¶14 - The communication suite 105 can transmit and receive data to and from the ground station through a tether 106 or wirelessly),
wherein the bidirectional wireless communications system comprises a combination of a plurality of links including (a) a satellites network communication link, and (b) at least one of a direct radio frequency communication link and a terrestrial wireless communication link (¶18 and ¶43 – communications network includes GPS corresponding to the recited satellite network communications link, radio receivers and transceivers corresponding to the recited direct radio frequency communication, and Wi-Fi/4G communications corresponding to the recited terrestrial wireless communication link where bidirectional wireless communication may occur with the command station and/or satellites via miniature, disposable ad hoc radio network nodes corresponding to the recited combination of a plurality of links), and
configured to duplicate critical telemetry data and broadcast over the plurality of links (¶25 – all pictures, videos, maps, and reports are archived on the ground station's high-capacity data storage device and made readily accessible through the ANT app on any mobile device with a Wi-Fi connection to the ground station, where the data being archived and live streamed corresponds to the recited duplicating critical telemetry data, which is then available to all connected devices, corresponding to the recited broadcast over the plurality of links); and
a human interface device located at the control station remote from the vehicle, wherein the human interface device is configured to display a live virtual view constructed based at least in part on image data received from the vehicle (¶14 - The live video stream is received by the ground station and displayed on a virtual reality, or augmented reality headset connected to the ground station).
While Corban does disclose utilizing a plurality of sensors from a plurality of drones combined into a single usable map, it does not explicitly disclose utilizing a bidirectional satellite communication link, a multiplexing gateway, or specifics regarding outbound/inbound data through the multiplexing gateway as claimed. However, Altman discloses a wireless multiple-link vehicular communications system including a bidirectional wireless communications system comprising a combination of a plurality of links including (a) a satellites network communication link (¶35 - a satellite communications transceiver corresponding to the recited bidirectional satellite network communication link)
and comprising a multiplexing communication gateway (¶54 – on-vehicle multiplexing of data to combine before transmitting the data corresponding to the recited multiplexing gateway) configured to route data by
a) duplicating outbound critical telemetry data over the plurality of links and broadcasting over the plurality of links (¶27, ¶40, and ¶70 – replicate mode corresponding to the recited duplicating outbound data including telemetry data and broadcasting the data), and
b) switching inbound data from the plurality of links based at least in part on a priority or a latency of the plurality of links (¶40, ¶54, ¶70-75, and ¶95 – switching mode corresponding to the recited switching between links as part of a downlink stream into the vehicle, corresponding to the recited inbound data, because this particular link has a low latency at that time period, which may be important for that specific vehicle-related application, corresponding to the recited latency of the plurality of links, including utilizing priority status of the data).
The combination of the augmented reality drone control and mapping system of Corban with the multiplexing/multi-link data transmission systems of Altman fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the augmented reality drone control and mapping system of Corban with the multiplexing/multi-link data transmission systems of Altman in order to reduce the transmitted bandwidth and thus increase reliability and decrease latency; or in order to perform on-vehicle processing that may otherwise be a resource constraint on the delivery networks or on the end-node processors (e.g., by performing or initiating Mobile Computing at the Edge or Mobile Edge Computing, MEC); or to create a coherent, time and geo-location accurate and synchronized image (or video, or map, or other multimedia reflection of what the relevant sensors sense of the inside or outside reality or events) from all relevant sensors, thus overcoming time or geo-location synchronization issues that many times result from un-synchronized transmission and delivery end-to-end (Altman - ¶54).
Regarding claims 3 and 13, Corban further discloses wherein the vehicle comprises one or more processors to process sensor data collected by sensors onboard the vehicle, wherein the sensor data is processed by a machine learning algorithm trained model and wherein the processed sensor data comprises an object identified by the machine learning algorithm trained model (¶14-16 and ¶38-41 – sensor configuration corresponding to the recited sensors onboard the vehicle utilizes machine vision for mapping and obstacle avoidance corresponding to the recited object identified by the machine learning algorithm trained model processed in the avionics suite onboard the vehicle).
Regarding claims 7 and 17, Corban further discloses wherein the control station is stationary or mobile (Fig. 2 – the ground control station is a box with a handle and therefore can be either stationary or mobile).
Regarding claims 9 and 19, Corban further discloses wherein the live virtual view is adaptively displayed according to a measurement of a movement of the human operator's head and/or eyes (¶17-20 – live video streams are viewed through a VR headset for head-movement-tracking-based controls. The claim element "and/or" only requires one of the alternatives to be present to disclose the elements as claimed).
Regarding claims 10 and 20, Corban further discloses wherein the live virtual view is 720 degree (¶20 – employment of one or more 360 degree cameras ensuring 100% view corresponding to the recited 720 degree live virtual view).
Regarding claim 21, Corban further discloses wherein a plurality of the vehicles are operated by a network of human operators, comprising (¶26 and ¶32 - multiple drones, autonomously collaborating based on human operators):
wherein a computer system located at the control station is configured to aggregate the data received from the plurality of vehicles and display information to the network of human operators via a plurality of the human interface devices (¶32 and ¶36 - a base unit (containing multiple drones) autonomously collaborating to map an area corresponding to the recited aggregate data from the plurality of vehicles and display information to the network of human operators).
Regarding claim 22, Corban further discloses wherein the information is processed from data collected by complementary sensors located onboard different vehicles (¶36 – maps are generated from the collaboration of multiple drones corresponding to the recited data collected by complementary sensors onboard different vehicles).
Regarding claim 23, Corban further discloses wherein at least one human operator is selected from the network of human operators and dynamically assigned to operate a vehicle from the plurality of vehicles (¶32 and ¶36 – human operators associated with a newly selected drone corresponding to the recited at least one human operator selected from the network of human operators and dynamically assigned to operate a vehicle).
Regarding claim 24, Corban further discloses wherein the respective command is generated using a machine learning algorithm trained model based on the data aggregated from the plurality of vehicles (¶38 and ¶50 – obstacle avoidance utilizing machine vision corresponding to the recited command is generated using a machine learning algorithm trained model utilizing local maps which are shared with nearby aircraft corresponding to the recited based on the data aggregated from the plurality of vehicles).
Regarding claim 25, Corban further discloses wherein a command for controlling a first vehicle from the plurality of vehicles is generated based at least in part on a behavior of a second vehicle from the plurality of vehicles (¶32 – lead drone nominates a task for another drone to map the branch not taken by it corresponding to the recited command for controlling a first vehicle from the plurality of vehicles is generated at least in part on a behavior of a second vehicle).
Regarding claim 26, Corban further discloses wherein at least one of the plurality of human interface devices is configured to display the information and receive input from an active user from the network of human operators for controlling a respective vehicle (¶13-14 - The live video stream is received by the ground station and displayed on a virtual reality, or augmented reality headset connected to the ground control station for sending commands to the vehicle) and
at least one of the plurality of human interface devices is configured to only display the information to a passive user from the network of human operators (¶38 – high resolution video can be reviewed or analyzed corresponding to the recited passive user display only).
Claims 2, 12, and 33-34 are rejected under 35 U.S.C. 103 as being unpatentable over Corban et al. (US 2018/0290748) in view of Altman (US 2021/0114616), as applied to claims 1 and 11 above, further in view of Builta (US 2020/0064867).
Regarding claims 2 and 12, Corban further discloses the vehicle is a helicopter (¶13 – VTOL corresponding to the recited helicopter) and
Corban does not disclose the use of a swashplate-based rotor control system; however, Builta discloses a flight control system for a tiltrotor aircraft including wherein the vehicle comprises a swashplate-based rotor control system configured to translate an input into a pitch control of a main rotor blade (¶40 - In response to a control input for a change in lateral velocity, such as a pilot pushing sideways on the cyclic control, FCS 36 commands the lateral cyclic swashplate controls for directing thrust vectors 49A, 49B of rotors 37A, 37B in a lateral direction).
The combination of the autonomous flight control system of Corban in view of Altman with the utilization of swashplate control in a passenger tiltrotor aircraft of Builta fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the autonomous flight control system of Corban in view of Altman with the utilization of swashplate control in a passenger tiltrotor aircraft of Builta in order to enhance the accuracy of control of the aircraft (Builta - ¶45).
Regarding claims 33 and 34, Corban does not disclose that the vehicle can accommodate a human occupant; however, Builta further discloses the vehicle has a size or dimensions to accommodate a human occupant (¶46 and Fig. 11).
The combination of the autonomous flight control system of Corban in view of Altman with the utilization of swashplate control in a passenger tiltrotor aircraft of Builta fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the autonomous flight control system of Corban in view of Altman with the utilization of swashplate control in a passenger tiltrotor aircraft of Builta in order to enable and enhance automatic launch and recovery of passengers from dangerous conditions (Builta - ¶45).
Additional References Cited
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Nguyen et al. (US 2001/0043574) discloses a two-way satellite system including utilizing a satellite multiplexing gateway for traffic to be transmitted on the uplink (¶58).
discloses a dynamic wireless multiplexing switching hub for two way communications including utilizing a multiplexor gateway modem to allow for full duplex communication (Col 5).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Matthew J Reda whose telephone number is (408)918-7573. The examiner can normally be reached Monday - Friday 7-4 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry can be reached at (571) 272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW J. REDA/ Primary Examiner, Art Unit 3665