DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to the application filed on December 10th, 2025. Claims 1-2, 4-5, 7-9, 11-12, 14-19, and 21-25 are presently pending and are presented for examination.
Priority
Acknowledgment is made of applicant’s claim for priority to provisional application No. 63/502,309, filed May 15, 2023.
Response to Amendment
In response to applicant’s response filed on December 10th, 2025, Examiner withdraws the previous claim objection; withdraws the previous 112(f) claim interpretations; withdraws the previous 112(b) claim rejection; withdraws the previous 35 U.S.C. 101 rejection; and withdraws the previous 35 U.S.C. 102 and 103 prior art rejections.
Response to Arguments
Applicant’s arguments filed December 10th, 2025 have been fully considered.
Applicant argues, on page 9 of Applicant’s remarks, that amended claim 1 overcomes the claim objection. Examiner agrees and the previous claim objection is withdrawn.
Applicant argues with respect to claims 11 and 17 that the amended claims recite sufficient structural features, as described in the specification, to overcome the 112(f) claim interpretation. Examiner agrees and the previous 112(f) claim interpretation is withdrawn.
Applicant argues with respect to claim 18 that the amended claims recite sufficient amendments to overcome the 112(b) claim rejection. Examiner agrees and the previous 112(b) claim rejection is withdrawn.
Applicant argues with respect to claims 1, 3-8, 10-13, and 15-20 that the amended claims recite sufficient amendments to overcome the 101 claim rejection. Examiner agrees and the previous 101 claim rejection is withdrawn.
Regarding the arguments provided for the rejection of claim 1, as put forth on pages 10 and 11 of applicant’s remarks, the applicant’s arguments have been fully considered. Applicant argues claim 1, “Movert does not teach or suggest checking parameters of a spacecraft attitude or an approach distance, a velocity deviation, or an amount of propellant used as recited in amended claim 1. Instead, Movert teaches performing a safety check on a predicted near future path…Applicant respectfully requests reconsideration and withdrawal of the rejection of claim 1 as amended” (remarks pg. 10-11).
As to point (e), Examiner partially agrees. In view of Applicant’s amendment and arguments, Examiner agrees that the amendment overcomes the prior art previously applied. The newly added limitations are not taught by Movert. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of “A Hybrid Closed-Loop Guidance Strategy for Low-Thrust Spacecraft Enabled by Neural Networks” (hereinafter, “LaFarge”).
Regarding the arguments provided for the rejection of claim 11, as put forth on pages 11 and 12 of applicant’s remarks, the applicant’s arguments have been fully considered. Applicant argues claim 11, “Two important differences between amended claim 11 and the vehicle path prediction of Movert are: 1) navigation of the spacecraft is scheduled (and hence modeled) for an entire space mission prior to the start of the space mission, and 2) between navigation functions being performed, the spacecraft coasts for durations that are relatively long (e.g., days to months) when compared to driving an automobile. Amended claim 11 recites "scheduling operations for controlling a timing of functions of a space mission" and "commanding the spacecraft to coast until a next function of the space mission according to the scheduling", which are not applicable to automotive driving and therefore not taught or suggested by Movert” (remarks pg. 11-12).
As to point (f), Examiner partially agrees. In view of Applicant’s amendment and arguments, Examiner agrees that the amendment overcomes the prior art previously applied. The newly added limitations are not taught by Movert. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of “A Hybrid Closed-Loop Guidance Strategy for Low-Thrust Spacecraft Enabled by Neural Networks” (hereinafter, “LaFarge”).
Regarding the arguments provided for the rejection of claim 17, as put forth on pages 12 and 13 of applicant’s remarks, the applicant’s arguments have been fully considered. Applicant argues claim 17, “Similar to amended claim 11, claim 17 now recites "the spacecraft coasts while a corrective action is taken by the autonomous control executive", which is not taught or suggested by Movert. Additionally, amended claim 17 recites "controlling a timing of functions at predetermined times for a duration of an entire space mission", which is also not taught or suggested by Movert” (remarks pg. 12-13).
As to point (g), Examiner partially agrees. In view of Applicant’s amendment and arguments, Examiner agrees that the amendment overcomes the prior art previously applied. The newly added limitations are not taught by Movert. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of “A Hybrid Closed-Loop Guidance Strategy for Low-Thrust Spacecraft Enabled by Neural Networks” (hereinafter, “LaFarge”).
Regarding the arguments provided for the rejection of claims 2 and 4-5, as put forth on page 13 of applicant’s remarks, the applicant’s arguments have been fully considered. Applicant argues with respect to claims 2 and 4-5, “Claims 2 and 4-5 depend from claim 1 and should be allowable for at least the reasons stated above with respect to amended claim 1. Claim 2 is amended to recite "the navigation control comprises instructions for the spacecraft to coast when no spacecraft maneuver is needed". Similarly, claim 5 is amended to recite "commanding the spacecraft to coast while taking the corrective action". Support is found in at least paragraph [0026] of the Application as filed. Movert does not teach or suggest commanding a spacecraft to coast.” (remarks pg. 13).
As to point (h), see point (e).
Regarding the arguments provided for the rejection of claim 12, as put forth on page 13 of applicant’s remarks, the applicant’s arguments have been fully considered. Applicant argues claim 12, “Claim 12 depends from claim 11 and should be allowable for at least the reasons stated above with respect to amended claim 11. Additionally, claim 12 is amended to recite that "the maneuver design output state comprises a plurality of neural network outputs to be executed as a series of steps prior to a next navigation state", which is not taught or suggested by Movert” (remarks pg. 13).
As to point (i), see point (f).
Regarding the arguments provided for the rejection of claim 19, as put forth on page 13 of applicant’s remarks, the applicant’s arguments have been fully considered. Applicant argues claim 19, “Claims 19 depends from claim 17 and should be allowable for at least the reasons stated above with respect to amended claim 17” (remarks pg. 13).
As to point (j), see point (g).
Regarding the arguments provided for the rejection of claims 7-9, as put forth on page 14 of applicant’s remarks, the applicant’s arguments have been fully considered. Applicant argues, “Claims 7-9 depend from claim 1 and should be allowable for at least the reasons stated above with respect to amended claim 1. Santoni also does not [teach] or suggest "checking that parameters are within a predetermined range, the parameters comprising one or more of an approach distance, a spacecraft attitude, a velocity deviation, and an amount of propellant used", as recited in amended claim 1. Therefore, Santoni cannot compensate for the deficiencies of Movert with respect to amended claim 1 whether taken alone or in combination” (remarks pg. 14).
As to point (k), see point (e).
Regarding the arguments provided for the rejection of claims 15-16, as put forth on page 14 of applicant’s remarks, the applicant’s arguments have been fully considered. Applicant argues, “Claims 15 and 16 depend from claim 11 and should be allowable for at least the reasons stated above with respect to amended claim 11. Santoni also does not [teach] or suggest "scheduling operations for controlling a timing of functions of a space mission" and "commanding the spacecraft to coast until a next function of the space mission according to scheduling the operations", as recited in amended claim 11. Therefore, Santoni cannot compensate for the deficiencies of Movert with respect to amended claim 11 whether taken alone or in combination” (remarks pg. 14).
As to point (l), see point (f).
Regarding the arguments provided for the rejection of claim 14, as put forth on pages 14 and 15 of applicant’s remarks, the applicant’s arguments have been fully considered. Applicant argues, “Claim 14 depends from claim 11 and should be allowable for at least the reasons stated above with respect to amended claim 11. Drexler also does not [teach] or suggest "scheduling operations for controlling a timing of functions of a space mission" and "commanding the spacecraft to coast until a next function of the space mission according to scheduling the operations", as recited in amended claim 11. Therefore, Drexler cannot compensate for the deficiencies of Movert with respect to amended claim 11 whether taken alone or in combination” (remarks pg. 15).
As to point (m), see point (f).
Regarding the arguments provided for the rejection of claim 18, as put forth on page 15 of applicant’s remarks, the applicant’s arguments have been fully considered. Applicant argues, “Claim 18 depends from claim 17 and should be allowable for at least the reasons stated above with respect to amended claim 17. Harvey also does not [teach] or suggest "the spacecraft coasts while a corrective action is taken by the autonomous control executive" and "controlling a timing of functions at predetermined times for a duration of an entire space mission", as recited in amended claim 17. Therefore, Harvey cannot compensate for the deficiencies of Movert with respect to amended claim 17 whether taken alone or in combination” (remarks pg. 15).
As to point (n), see point (g).
Claim Objections
Claims 7, 8, 14, and 24 are objected to because of the following informalities:
Claim 7 recites “determined to be with the predetermined bounds”. Examiner believes it should recite “determined to be within the predetermined bounds”.
Claim 8 recites “taking a corrective action”. Examiner believes it should recite “taking the corrective action”.
Claim 14 recites “perform a safety maneuver”. Examiner believes it should recite “perform the safety maneuver”.
Claim 24 recites “and the a smaller neural network”. Examiner believes it should recite “and the smaller neural network”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 2, the phrase "comprises instructions for the spacecraft to coast when no spacecraft maneuver is needed" renders the claim indefinite because it is unclear what is being defined as a spacecraft maneuver. Under broadest reasonable interpretation, Examiner understands a spacecraft maneuver to be any movement of the spacecraft which would include coasting. In view of the specification and for the purpose of prior art examination, Examiner is interpreting a spacecraft maneuver to be defined as a change to the spacecraft’s velocity.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 2, 4, 11, 12, 17, 19, 21, 22, 23, and 25 are rejected under 35 U.S.C. 102(a)(1) as anticipated by “A Hybrid Closed-Loop Guidance Strategy for Low-Thrust Spacecraft Enabled by Neural Networks” (hereinafter, “LaFarge”).
Regarding claim 1 LaFarge discloses a safety check method for checking a neural network output state onboard a spacecraft (see at least Fig. 7), the method comprising:
providing a neural network model to a disk storage of a spacecraft computer, wherein the spacecraft computer is configured to execute the neural network model onboard the spacecraft for navigating the spacecraft (see at least [Page 10]; “Guidance, Navigation, and control (GN&C) comprise three main components for autonomous flight systems. In this paradigm, guidance is tasked with planning with a suitable trajectory that satisfies mission criteria”);
calculating, via the neural network model, a neural network output based on a current navigation state of the spacecraft, wherein the neural network output comprises a navigation control for the spacecraft (see at least Fig. 7 and [Page 10]; “The NN-driven perturbation recovery process is summarized in Figure 7. The neural network is trained to suggest thrust segments with delta t durations. Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction. Integrating forward delta t in time yields a new state as an input to the network,” the thrusting magnitude and direction correspond to the neural network outputs);
propagating the current navigation state with the neural network output to a next target epoch to determine a next navigation state of the spacecraft (see at least [Page 10-11]; “The NN-driven perturbation recovery process is summarized in Figure 7. The neural network is trained to suggest thrust segments with delta t durations. Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction. Integrating forward delta t in time yields a new state as an input to the network…the final output of the neural network guidance step is a trajectory consisting of dynamical states, control variables, and integration times, resulting in n discrete arcs”);
evaluating whether the neural network output and the next navigation state are within predetermined bounds based on a training tube comprising position and velocity deviations around a nominal path (Examiner’s Note: Applicant defines a training tube as the following at paragraph [0033] of the specification; “the training tube refers to position and velocity errors around the nominal path.”) (see at least [Page 10]; “Alternatively, NN guidance failure occurs when the relative deviation from the reference trajectory exceeds pre-defined thresholds in position or velocity: 8000 km and 35 m/s, respectively. When deviation occurs, NN guidance terminates to avoid further deviation,” the reference trajectory corresponds to Applicant’s nominal path);
checking that parameters are within predetermined ranges, the parameters comprising one or more of an approach distance, a spacecraft attitude, a velocity deviation, and an amount of propellant used (see at least [Page 10]; “Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction. Integrating forward delta t in time yields a new state as an input to the network. This iterative process continues until one of several termination criteria are met. First, the algorithm terminates if the suggested thrust (fi) or the combined effect of averaging the suggested control segments (favg) results in a nearly coasting arc, i.e., fi < fmin or favg < fmin, where fmin is visualized in Figure 3(b),” it would be obvious to one of ordinary skill in the art that the thrust magnitude over time would yield a velocity deviation based on those values, therefore the thrust amount and magnitude are an obvious variant to the velocity deviation, additionally [Page 10]; “Alternatively, NN guidance failure occurs when the relative deviation from the reference trajectory exceeds pre-defined thresholds in position or velocity: 8000 km and 35 m/s, respectively”);
incrementing to a next scheduled function when the neural network output and the next navigation state are within predetermined bounds and the parameters are within the predetermined ranges (see at least [Page 10]; “Integrating forward delta t in time yields a new state as an input to the network. This iterative process continues until one of several termination criteria are met…NN guidance failure occurs when the relative deviation from the reference trajectory exceeds pre-defined thresholds in position or velocity…when deviation occurs, NN guidance terminates to avoid further deviation,” the iteration corresponds to incrementing to a next scheduled function; this only occurs if a termination condition has not been met. The termination condition corresponds to the navigation state not being within the predetermined bounds); and
taking a corrective action when the neural network output and the next navigation state are determined to be outside the predetermined bounds or one of the parameters is outside a respective one of the predetermined ranges (see at least [Page 10]; “Alternatively, NN guidance failure occurs when the relative deviation from the reference trajectory exceeds pre-defined thresholds in position or velocity: 8000 km and 35 m/s, respectively. When deviation occurs, NN guidance terminates to avoid further deviation,” the termination of the neural network guidance corresponds to the corrective action and it occurs when there is deviation from a defined range (parameters are outside of predetermined bounds/ranges)).
Regarding claim 2 LaFarge discloses all of the limitations of claim 1. Additionally, LaFarge discloses wherein the navigation control comprises instructions for the spacecraft to coast when no spacecraft maneuver is needed (see at least [Page 13]; “In the hybrid guidance model, if the thrust combination process yields a small average magnitude, i.e., favg < fmin, the system assumes that coasting is appropriate,” if the thrust is less than a threshold it is determined that thrust is not needed and the spacecraft coasts).
Regarding claim 4 LaFarge discloses all of the limitations of claim 1. Additionally, LaFarge discloses that, when the neural network output and the next navigation state are determined to be outside the predetermined bounds or one of the parameters is outside a respective one of the predetermined ranges, the method does not increment to a next scheduled function (see at least [Page 10]; “Integrating forward delta t in time yields a new state as an input to the network. This iterative process continues until one of several termination criteria are met…NN guidance failure occurs when the relative deviation from the reference trajectory exceeds pre-defined thresholds in position or velocity…when deviation occurs, NN guidance terminates to avoid further deviation,” the iteration corresponds to incrementing to a next scheduled function; this only occurs if a termination condition has not been met. The termination condition corresponds to the navigation state not being within the predetermined bounds).
Regarding claim 5 LaFarge discloses all of the limitations of claim 1. Additionally, LaFarge discloses commanding the spacecraft to coast while taking the corrective action (see at least [Page 10]; “First, the algorithm terminates if the suggested thrust (fi) or the combined effect of averaging the suggested control segments (favg) results in a nearly coasting arc, i.e., fi < fmin or favg < fmin, where fmin is visualized in Figure 3(b),” the termination corresponds to the corrective action, and [Page 13]; “if the thrust combination process yields a small average magnitude, i.e., favg < fmin, the system assumes that coasting is appropriate,” if the thrust is outside the predetermined range, which corresponds to the parameters not being within the bounds as thrust is an obvious variant of the velocity, the spacecraft is commanded to coast while the algorithm is terminated).
Regarding claim 11 LaFarge discloses a method for navigating a spacecraft with safety checking of a maneuver design output state (see at least Fig. 7), the method comprising:
storing software applications comprising an autonomous control executive, a navigation app, a maneuver design app, and a safety check app to a disk storage of a spacecraft computer, wherein the autonomous control executive is configured to execute the navigation app, the maneuver design app, and the safety check app onboard the spacecraft (see at least Fig. 6, [Page 2]; “Neural Networks (NNs) are of particular recent interest for potential onboard activities. A neural network is a class of nonlinear statistical models that are frequently employed in machine learning classification and regression tasks.10 The capability to implement a neural network on a flight computer is under active development, and new hardware technologies will render machine learning approaches more accessible and productive for onboard use.4,” and [Page 10]; “Guidance, Navigation, and control (GN&C) comprise three main components for autonomous flight systems. In this paradigm, guidance is tasked with planning with a suitable trajectory that satisfies mission criteria,” it would be obvious to one of ordinary skill in the art that the features of the guidance control flight system would be implemented using software applications);
scheduling operations for controlling a timing of functions of a space mission via the autonomous control executive, comprising scheduling the navigation app to provide a plurality of navigation state estimates, scheduling the maneuver design app to provide one or more spacecraft maneuver design, and scheduling the safety check app to check the one or more spacecraft maneuver designs (see at least [Page 10]; “The NN-driven perturbation recovery process is summarized in Figure 7. The neural network is trained to suggest thrust segments with delta t durations. Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction. Integrating forward delta t in time yields a new state as an input to the network,” the flight computer is scheduled to compute new states at each iterated delta t duration; at each delta t duration a new state is iterated, checked for safety, and output);
receiving navigation state inputs via the navigation app and outputting an updated navigation state estimate based upon the navigation state inputs (see at least Fig. 7 and [Page 10]; “The NN-driven perturbation recovery process is summarized in Figure 7. The neural network is trained to suggest thrust segments with delta t durations. Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction. Integrating forward delta t in time yields a new state as an input to the network,” the new state yielded corresponds to the updated navigation state estimate);
receiving, via the maneuver design app, the updated navigation state estimate and outputting a maneuver design output state (see at least fig. 7; after the neural network determines the values, the values are output to a safety check in the following step);
performing a safety check via the safety check app, wherein the safety check app receives the maneuver design output state and determines whether the maneuver design output state is within predetermined bounds (see at least [Page 10]; “Alternatively, NN guidance failure occurs when the relative deviation from the reference trajectory exceeds pre-defined thresholds in position or velocity: 8000 km and 35 m/s, respectively. When deviation occurs, NN guidance terminates to avoid further deviation,” the reference trajectory corresponds to Applicant’s nominal path);
when the maneuver design output state is within the predetermined bounds, commanding the spacecraft to perform a spacecraft maneuver (see at least fig. 6 and 7; when the magnitude is checked and determined to be within the bounds, the trajectory is passed to pre-processing, or pre-processing is skipped and the correction is output directly, [Page 12]; “omit pre-processing and pass the neural network-generated path directly to the corrections algorithm” the corrections algorithm provides the trajectory to control the spacecraft as shown in fig. 6); and
when the maneuver design output state is determined to be outside the predetermined bounds, commanding the spacecraft to perform a safety maneuver (see at least [Page 10]; “Alternatively, NN guidance failure occurs when the relative deviation from the reference trajectory exceeds pre-defined thresholds in position or velocity: 8000 km and 35 m/s, respectively. When deviation occurs, NN guidance terminates to avoid further deviation,” the termination of the neural network guidance corresponds to the safety maneuver and it occurs when there is deviation above a threshold (parameters are outside of predetermined bounds/ranges)); and
commanding the spacecraft to coast until a next function of the space mission…via the autonomous control executive (see at least [Page 13]; “In the hybrid guidance model, if the thrust combination process yields a small average magnitude, i.e., favg < fmin, the system assumes that coasting is appropriate,” if the spacecraft is outside the predetermined bounds for the thrust, a coast is performed).
Regarding claim 12 LaFarge discloses all of the limitations of claim 11. Additionally, LaFarge discloses wherein the maneuver design output state comprises a plurality of neural network outputs to be executed as a series of steps prior to a next navigation state (see at least Fig. 7 and [Page 10]; “The NN-driven perturbation recovery process is summarized in Figure 7. The neural network is trained to suggest thrust segments with delta t durations. Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction. Integrating forward delta t in time yields a new state as an input to the network,” each iteration of thrust segments, yields a new neural network output).
Regarding claim 17 LaFarge discloses a control architecture configured for performing a safety check of a maneuver design for navigation of a spacecraft (see at least Fig. 6 and 7), the control architecture comprising:
an autonomous control executive, a navigation app, a maneuver design app, and a safety check app each comprising a software application stored in a disk storage of a spacecraft computer, wherein the spacecraft computer is configured to execute the autonomous control executive, the navigation app, the maneuver design app, and the safety check app onboard the spacecraft (see at least Fig. 6, [Page 2]; “Neural Networks (NNs) are of particular recent interest for potential onboard activities. A neural network is a class of nonlinear statistical models that are frequently employed in machine learning classification and regression tasks.10 The capability to implement a neural network on a flight computer is under active development, and new hardware technologies will render machine learning approaches more accessible and productive for onboard use.4,” and [Page 10]; “Guidance, Navigation, and control (GN&C) comprise three main components for autonomous flight systems. In this paradigm, guidance is tasked with planning with a suitable trajectory that satisfies mission criteria,” it would be obvious to one of ordinary skill in the art that the features of the guidance control flight system would be implemented using software applications);
wherein the autonomous control executive provides dedicated operation scheduling for the navigation app, the maneuver design app, and the safety check app for controlling a timing of functions at predetermined times for a duration of an entire space mission (see at least [Page 10]; “The NN-driven perturbation recovery process is summarized in Figure 7. The neural network is trained to suggest thrust segments with delta t durations. Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction. Integrating forward delta t in time yields a new state as an input to the network,” the flight computer is scheduled to compute new states at each iterated delta t duration; at each delta t duration a new state is iterated, checked for safety, and output);
wherein the navigation app is configured to determine a navigation update based on a navigation state estimate of the spacecraft at each of the predetermined times (see at least Fig. 7 and [Page 10]; “The NN-driven perturbation recovery process is summarized in Figure 7. The neural network is trained to suggest thrust segments with delta t durations. Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction. Integrating forward delta t in time yields a new state as an input to the network,” the new state yielded corresponds to the updated navigation state estimate);
wherein the maneuver design app is configured to determine a maneuver design based on the navigation update at each of the predetermined times (see at least Fig. 7; after the neural network determines the thrust magnitude and direction based on the updated navigation state at each delta t step, the resulting values are output to the safety check in the following step);
wherein the safety check app is configured to perform a safety check that determines whether the maneuver design is within predetermined bounds at each of the predetermined times (see at least [Page 10]; “Alternatively, NN guidance failure occurs when the relative deviation from the reference trajectory exceeds pre-defined thresholds in position or velocity: 8000 km and 35 m/s, respectively. When deviation occurs, NN guidance terminates to avoid further deviation,” the reference trajectory corresponds to Applicant’s nominal path); and
wherein when the maneuver design is determined to be within the predetermined bounds a spacecraft maneuver is executed (see at least figs. 6 and 7; if the magnitude, when checked, is determined to be within the bounds, the trajectory is passed to pre-processing, or pre-processing is skipped and the correction is output directly, [Page 12]; “omit pre-processing and pass the neural network-generated path directly to the corrections algorithm,” the corrections algorithm provides the trajectory to control the spacecraft as shown in fig. 6), and
when the maneuver design is determined to be outside the predetermined bounds, the spacecraft coasts while corrective action is taken by the autonomous control executive (see at least [Page 10]; “First, the algorithm terminates if the suggested thrust (fi) or the combined effect of averaging the suggested control segments (favg) results in a nearly coasting arc, i.e., fi < fmin or favg < fmin, where fmin is visualized in Figure 3(b),” and [Page 13]; “In the hybrid guidance model, if the thrust combination process yields a small average magnitude, i.e., favg < fmin, the system assumes that coasting is appropriate,” if the thrust is less than the threshold the algorithm is terminated, which corresponds to the corrective action, and the spacecraft is caused to coast).
Regarding claim 19, LaFarge discloses all of the limitations of claim 17. Additionally, LaFarge discloses wherein the navigation app comprises a neural network model, and the navigation app is configured to determine a neural network model output based on a current navigation state of the spacecraft (see at least [Page 10-11]; “The NN-driven perturbation recovery process is summarized in Figure 7. The neural network is trained to suggest thrust segments with delta t durations. Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction. Integrating forward delta t in time yields a new state as an input to the network…the final output of the neural network guidance step is a trajectory consisting of dynamical states, control variables, and integration times, resulting in n discrete arcs”).
Regarding claim 21, LaFarge discloses all of the limitations of claim 1. Additionally, LaFarge discloses wherein the neural network model is configured for performing a minimum-fuel transfer, a GEO station keeping orbit, an Earth-Moon halo orbit station keeping, a trajectory correction maneuver for a chemical propulsion interplanetary mission, a many-revolution spiral transfer, or a LEO station keeping orbit (see at least [Page 2]; “The proposed hybrid guidance approach, using neural network and targeting techniques, is evaluated in the Earth-Moon neighborhood. In this area, the three-body problem serves as a suitable dynamical model that is representative of observed natural motion in cislunar space. In particular, the planar Circular Restricted Three-Body Problem (CR3BP) is a useful environment for preliminary evaluation because it both represents a challenging region of space that is relevant to upcoming missions while sufficiently low-fidelity for initial analysis of the hybrid guidance scheme,” and [Page 3]; “Reinforcement learning techniques are of particular recent interest in spaceflight problems. Reinforcement learning-enabled onboard guidance is broadly categorized based on the phase of flight. Notably productive areas of reinforcement learning research are landing problems and small body operations. Other investigations include rendezvous exo-atmospheric interception, station keeping and detection avoidance”).
Regarding claim 22, LaFarge discloses all of the limitations of claim 1. Additionally, LaFarge discloses wherein the neural network model is configured to provide electric propulsion control in 3-Body orbits (see at least [Page 2]; “The proposed hybrid guidance approach, using neural network and targeting techniques, is evaluated in the Earth-Moon neighborhood. In this area, the three-body problem serves as a suitable dynamical model that is representative of observed natural motion in cislunar space. In particular, the planar Circular Restricted Three-Body Problem (CR3BP) is a useful environment for preliminary evaluation because it both represents a challenging region of space that is relevant to upcoming missions while sufficiently low-fidelity for initial analysis of the hybrid guidance scheme”) and the neural network model is trained on a smoothed-minimum-fuel problem (see at least [Page 10]; “In both pre-flight and ground-based analyses, where computational resources are abundant, path-planning is often formulated as an optimization problem to minimize propellant expenditure”).
Regarding claim 23, LaFarge discloses all of the limitations of claim 1. Additionally, LaFarge discloses wherein a plurality of spacecraft maneuvers are scheduled to be executed as a series of steps within the next target epoch such that the neural network output comprises a plurality of navigation controls to be executed in series prior to evaluating whether the next navigation state is within the predetermined bounds (see at least [Page 10-11]; “The neural network is trained to suggest thrust segments with delta t durations. Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction (^u1). Integrating forward delta t in time yields a new state as an input to the network. This iterative process continues until one of several termination criteria are met. First, the algorithm terminates if the suggested thrust (fi) or the combined effect of averaging the suggested control segments (favg) results in a nearly coasting arc, i.e., fi < fmin or favg < fmin, where fmin is visualized in Figure 3(b)… Alternatively, NN guidance failure occurs when the relative deviation from the reference trajectory exceeds pre-defined thresholds in position or velocity: 8000 km and 35 m/s, respectively. When deviation occurs, NN guidance terminates to avoid further deviation. The final output of the neural network guidance step is a trajectory consisting of dynamical states, control variables, and integration times, resulting in n discrete arcs,” each arc corresponds to the thrust segments, which correspond to the plurality of spacecraft maneuvers; the thrust segments are iterated and then it is determined whether the new state is within bounds).
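For illustration only, and not as part of the claim mapping or the record, the iterative guidance loop quoted from LaFarge above can be sketched as follows. All function names (suggest_thrust, integrate_forward, deviation) and the F_MIN value are hypothetical placeholders and are not drawn from the reference; only the 8000 km and 35 m/s deviation thresholds come from the quoted passage.

```python
# Illustrative sketch of the iterative NN guidance loop quoted from LaFarge.
# Function names and F_MIN are hypothetical placeholders, not from the reference.

F_MIN = 0.01           # near-coasting thrust threshold (placeholder value)
POS_LIMIT_KM = 8000.0  # position deviation threshold quoted from LaFarge
VEL_LIMIT_MS = 35.0    # velocity deviation threshold quoted from LaFarge

def nn_guidance(state, suggest_thrust, integrate_forward, deviation, max_arcs=50):
    """Iterate NN-suggested thrust segments until a termination criterion fires."""
    arcs = []
    thrusts = []
    for _ in range(max_arcs):
        f_mag, direction, dt = suggest_thrust(state)  # NN suggests one segment
        thrusts.append(f_mag)
        f_avg = sum(thrusts) / len(thrusts)
        # Termination 1: suggested or averaged thrust yields a near-coasting arc
        if f_mag < F_MIN or f_avg < F_MIN:
            return arcs, "coast"
        state = integrate_forward(state, f_mag, direction, dt)
        arcs.append((state, f_mag, direction, dt))
        # Termination 2: deviation from the reference trajectory exceeds thresholds
        dpos_km, dvel_ms = deviation(state)
        if dpos_km > POS_LIMIT_KM or dvel_ms > VEL_LIMIT_MS:
            return arcs, "failure"
    # Output: a trajectory of n discrete arcs (states, controls, integration times)
    return arcs, "complete"
```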
Regarding claim 25, LaFarge discloses all of the limitations of claim 1. Additionally, LaFarge discloses wherein checkpoints are employed prior to parts of a trajectory such that the neural network model maintains the spacecraft within the training tube (see at least [Page 10]; “The NN-driven perturbation recovery process is summarized in Figure 7. The neural network is trained to suggest thrust segments with delta t durations. Beginning with an initial state estimate, assumed to be computed by a navigation system, the neural network determines a thrusting segment with magnitude (f1) and direction. Integrating forward delta t in time yields a new state as an input to the network,” the delta t segments correspond to checkpoints, as the state is estimated to determine whether it is within the bounds at every delta t interval).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 8, 9, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over LaFarge, as applied to claims 1 and 11 above, in view of US-20200017114 (hereinafter, “Santoni”).
Regarding claim 8, LaFarge discloses all of the limitations of claim 1. LaFarge does not disclose wherein taking a corrective action comprises performing a human-in-the-loop operation in which one or more commands are sent to the spacecraft from a ground control station.
Santoni, in the same field of endeavor, teaches wherein taking a corrective action comprises performing a human-in-the-loop operation in which one or more commands are sent to the vehicle from a ground control station (see at least [0061]; “In some implementations, an example failover control system (e.g., 750) may be provided at the safety companion system 710 implementing logic executed by safety control subsystem 710 processing hardware to reliably implement failover safety actions such as initiate an automated pullover, automated braking, handover to a human user (e.g., within the vehicle or at a remote vehicle control service center)”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified the safety check of LaFarge with the failure intervention of Santoni. One of ordinary skill in the art would have been motivated to make this modification for the benefit of avoiding unreasonable risk when errors in the system are detected (see at least Santoni; [0061]).
Regarding claim 9, LaFarge and Santoni render obvious all of the limitations of claim 8. Additionally, LaFarge discloses commanding the spacecraft to coast while waiting for corrective action (see at least [Page 10]; “First, the algorithm terminates if the suggested thrust (fi) or the combined effect of averaging the suggested control segments (favg) results in a nearly coasting arc, i.e., fi < fmin or favg < fmin, where fmin is visualized in Figure 3(b),” the termination corresponds to the corrective action, and [Page 13]; “if the thrust combination process yields a small average magnitude, i.e., favg < fmin, the system assumes that coasting is appropriate,” if the thrust is outside the predetermined range, this corresponds to the parameters not being within the bounds, as thrust is an obvious variant of velocity; the spacecraft is commanded to coast while a correction is performed).
LaFarge does not teach the corrective action being the one or more commands from the ground control station. Santoni teaches this feature (see at least [0061]; “In some implementations, an example failover control system (e.g., 750) may be provided at the safety companion system 710 implementing logic executed by safety control subsystem 710 processing hardware to reliably implement failover safety actions such as initiate an automated pullover, automated braking, handover to a human user (e.g., within the vehicle or at a remote vehicle control service center)”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified the safety check of LaFarge with the failure intervention of Santoni. One of ordinary skill in the art would have been motivated to make this modification for the benefit of avoiding unreasonable risk when errors in the system are detected (see at least Santoni; [0061]).
Regarding claim 15, LaFarge discloses all of the limitations of claim 11. LaFarge does not disclose comprising reverting to a simpler maneuver design being more robust and less accurate than an original maneuver design when the maneuver design output state is determined to be outside the predetermined bounds, then repeating the step of performing the safety check.
Santoni, in the same field of endeavor, teaches comprising reverting to a simpler maneuver design being more robust and less accurate than an original maneuver design when the maneuver design output state is determined to be outside the predetermined bounds, then repeating the step of performing the safety check (see at least [0061]; “In some implementations (e.g., as illustrated in FIG. 8), the safety companion subsystem 710 may implement a fail over control to perform a degraded level of driving automation functionality. In other cases, failover automated driving logic may be provided additionally or alternatively by a subsystem separate from the safety companion subsystem 710. Generally, when safety monitor application 810 detects certain critical or repeated failures, the safety monitor application 810 can invoke failover driving control functionality to temporarily avoid unreasonable risk until a separate, more feature-rich failover system engages. Indeed, in some implementations, the safety monitor application 810 may determine whether a robust failover automated driving system is present on the vehicle and utilize such a system as a primary failover safety mechanism,” reverting to a failover automated driving system would include reverting to a different neural network because automated driving systems include at least one machine learning model).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified the safety check of LaFarge with the failure intervention of Santoni. One of ordinary skill in the art would have been motivated to make this modification for the benefit of avoiding unreasonable risk when errors in the system are detected (see at least Santoni; [0061]).
Regarding claim 16, LaFarge discloses all of the limitations of claim 11. LaFarge does not disclose comprising sending an error message to a ground control station and waiting for a reply message from the ground control station when the maneuver design output state is determined to be outside the predetermined bounds.
Santoni, in the same field of endeavor, teaches comprising sending an error message to a ground control station (see at least [0058]; “In other implementations, events and errors may be additionally or alternatively detected by hardware monitoring tools at the compute subsystem 705 (e.g., by hardware agents (e.g., MCU agent 815) or other tools on the compute subsystem 705), and these results may be reported in data provided to the safety monitoring application 810 (e.g., through safety proxy 825 and application monitor framework 830) from the compute subsystem 705 for processing,” an error is reported to the safety monitoring application; the remote service center would be included as a safety monitoring application so that the service center is alerted to provide human intervention) and waiting for a reply message from the ground control station (see at least [0061]; “In some implementations, an example failover control system (e.g., 750) may be provided at the safety companion system 710 implementing logic executed by safety control subsystem 710 processing hardware to reliably implement failover safety actions such as initiate an automated pullover, automated braking, handover to a human user (e.g., within the vehicle or at a remote vehicle control service center)”) when the maneuver design output state is determined to be outside the predetermined bounds (see at least [0058]; “In other implementations, events and errors may be additionally or alternatively detected by hardware monitoring tools at the compute subsystem 705 (e.g., by hardware agents (e.g., MCU agent 815) or other tools on the compute subsystem 705), and these results may be reported in data provided to the safety monitoring application 810,” the error corresponds to the state being outside predetermined bounds, as Applicant does not define bounds in claim 11).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified the safety check of LaFarge with the failure intervention of Santoni. One of ordinary skill in the art would have been motivated to make this modification for the benefit of avoiding unreasonable risk when errors in the system are detected (see at least Santoni; [0061]).
Claim(s) 14 is rejected under 35 U.S.C. 103 as being unpatentable over LaFarge, as applied to claim 11, in further view of US-20200062426 (hereinafter, “Drexler”).
Regarding claim 14, LaFarge discloses all of the limitations of claim 11. LaFarge does not disclose wherein commanding the spacecraft to perform a safety maneuver comprises raising an altitude of the spacecraft.
Drexler, in the same field of endeavor, teaches wherein commanding the vehicle to perform a safety maneuver comprises raising an altitude of the vehicle (see at least [0004]; “Collision avoidance maneuvers involve altering the orbital trajectory of the satellite in some fashion, e.g. increasing (or decreasing) velocity and altitude.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified the safety check of LaFarge with the vertical safety maneuver of Drexler. One of ordinary skill in the art would have been motivated to make this modification for the benefit of being able to alter a trajectory to avoid a potential collision (see at least Drexler; [Abstract]).
Claim(s) 18 is rejected under 35 U.S.C. 103 as being unpatentable over LaFarge, as applied to claim 17 above, in view of US-10049505 (hereinafter, “Harvey”).
Regarding claim 18, LaFarge discloses all of the limitations of claim 17. LaFarge does not disclose the recited dedicated operation scheduling. Harvey, in the same field of endeavor, teaches wherein the dedicated operation scheduling provided by the autonomous control executive (see at least [Col. 9, lines 3-8]; “FIG. 2 depicts a schematic view of an exemplary communications system 200 for maintaining self-driving vehicle 100 shown in FIG. 1. System 200 includes self-driving vehicle controller 110 (shown in FIG. 1) configured to control self-driving vehicle 100, and schedule maintenance”) comprises determining a frequency of the navigation updates (see at least [Col. 8, lines 39-44]; “In some embodiments, self-driving vehicle controller 110 may update maps based upon sensory input, allowing self-driving vehicle controller 110 to keep track of self-driving vehicle's 100 position, even when conditions change or when self-driving vehicle 100 enters uncharted environments.”), determining a frequency of the maneuver designs (see at least [Col. 8, lines 45-49]; “Additionally, self-driving vehicle controller 110 may control the direction and speed of self-driving vehicle 100. Self-driving vehicle controller 110 may allow self-driving vehicle 100 to travel from point A to point B without input from a human operator”), and determining a frequency of the safety checks (see at least [Col. 18, lines 14-25]; “The indication that the autonomous vehicle technology or functionality is in need of maintenance may be generated from vehicle mounted electronics performing self-diagnostics.
Additionally or alternatively, the indication that the autonomous vehicle technology or functionality is in need of maintenance may be generated from a remote server (such as an insurance provider or vehicle manufacturer remote server) and/or may be based upon a re-call or upgrade to an autonomous vehicle technology or functionality (such as a sensor or software upgrade or re-call), or a warranty expiration notice associated,” the vehicle controller can schedule when diagnostic checks, which equate to safety checks, should be run; this is generally based on insurance policy, recalls, etc.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified the safety check of LaFarge with the maintenance schedule of Harvey. One of ordinary skill in the art would have been motivated to make this modification for the benefit of ensuring operational safety of the vehicle (see at least Harvey; [Col. 1, lines 23-50]).
Allowable Subject Matter
Claims 7 and 24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The claim limitations of the corrective action comprising reverting to a smaller neural network model which is more robust and less accurate than an original neural network model, then repeating the steps until the neural network output is determined to be within the predetermined bounds and ranges, of dependent claim 7 in combination with the limitations of independent claim 1 renders the claim novel and non-obvious over the prior art of record.
The closest prior art references of record are “A Hybrid Closed-Loop Guidance Strategy for Low-Thrust Spacecraft Enabled by Neural Networks” (hereinafter, “LaFarge”) and US-20200017114 (hereinafter, “Santoni”), which disclose many of the required limitations. LaFarge discloses all of the limitations of claim 1 as set forth in the 35 U.S.C. 102 rejection above. LaFarge discloses a system for onboard autonomy of a spacecraft, which demonstrates a neural network’s ability to provide iterative guidance techniques that create a robust hybrid approach to onboard closed-loop guidance. LaFarge does disclose detecting whether the neural network is out of bounds/ranges, but it merely terminates the algorithm in response to this detection. LaFarge does not mention reverting the neural network to a smaller neural network in order to adjust the neural network outputs to being within the predetermined ranges or bounds. Additionally, it would not have been obvious to modify the process of LaFarge to revert the neural network to a smaller neural network.
Santoni, which is of the same field of endeavor, teaches a system for navigation of a vehicle. This is considered to be of the same field of endeavor, as it relates to vehicle control. Santoni teaches wherein the autonomous driving system implements a failover control to perform a degraded level of driving automation functionality. Santoni does not mention the use of a neural network to perform this degraded level of functionality, nor the original level of functionality. While it would be obvious to provide a failover system for vehicle control, it would not be obvious to provide another neural network which is smaller and more robust to perform the neural network output until the outputs are determined to be within the bounds/ranges. Therefore, Examiner asserts it would not have been obvious to combine LaFarge and Santoni in such a way as to yield the invention as presented in claim 7. Additionally, since claim 24 is dependent on claim 7, it is also considered to contain allowable subject matter.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance”.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US-20230286673 teaches a model predictive control (MPC) method for a spacecraft formation; the method maneuvers the given spacecraft within a polytope boundary using MPC to minimize fuel consumption.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHLEIGH NICOLE TURNBAUGH whose telephone number is (703)756-1982. The examiner can normally be reached Monday - Friday 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hitesh Patel can be reached at (571) 270-5442. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASHLEIGH NICOLE TURNBAUGH/Examiner, Art Unit 3667
/Hitesh Patel/ Supervisory Patent Examiner, Art Unit 3667
3/2/26