DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the communication dated 11/11/2025.
Claims 1, 3, 5-6, 8-12, 14, 16, and 18-24 are presented for examination.
Finality
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Response to Arguments - 101
Applicant's arguments filed 11/11/2025 have been fully considered but they are not persuasive.
Applicant argues that no element of the claims can be practically performed in the human mind.
Examiner responds by explaining that several features can in fact be practically performed in the human mind. For example,
analyze High Accuracy Satellite Drag Model (HASDM) data associated with a HASDM model, the HASDM data including at least one or more solar drivers, one or more geomagnetic drivers, or one or more density maps and corresponding to two solar cycles;
Analyzing data at a high level of generality is a mental process that involves evaluating a set of data and making certain judgments about it. For example, business owners have looked at transaction logs to determine buying trends and their most popular products since the invention of written language. An analysis step recited at such a high level could be something as simple as observing a set of data and concluding that “there is a lot of data.” Limiting the data observed to only data that falls within a certain time frame does not change the capability of analyzing said data mentally, nor does specifying particular types of data that are included in the dataset.
identify an object to move to avoid a collision based at least in part on an analysis of the reduced order mass density map.
This kind of identification is a mental process equivalent to an observation and a judgement. For example, based on observation of data indicating that a first satellite or piece of space debris is accelerating towards a second satellite, a person could reasonably judge/conclude that the satellite should be moved to alter its trajectory to not come in contact with the debris/other satellite. Such identification “based on” an analysis of the reduced order density map could be as simple as observing the map, determining where the density is the highest based on this observation, and arbitrarily choosing a satellite in that high density zone that the person thinks should be moved somewhere with lower density.
Further, as to the applicant’s arguments about the training step, it should be noted that this step was never rejected as being a mental process; rather, it was rejected as being a mathematical process.
Nothing in the disclosure of Example 39 described the training in a way that equated it to a mathematical process. In contrast, the specification of the current application very clearly defines the training as being a mathematical process; see [Par 34] of the specification: “In some embodiments, Gaussian process regression (GPR) may be used because of its accuracy and robustness as a supervised machine learning technique. It is a nonparametric approach (e.g., does not take a functional form such as a polynomial) that calculates the probability distribution over all admissible functions that fit the data rather than calculating the probability distribution of parameters of a specific function. The output is assumed to have a multivariate Gaussian distribution, where the characteristics of the Gaussian model is dictated by the functional form of the covariance matrix or kernel. The training phase optimizes the free parameters of the covariance kernel such that the multivariate Gaussian best describes the distribution of the observed data points. GPR characterizes the response of a system or variable to changes in input conditions and can be used to predict the variable at a new set of input conditions using the posterior conditional probability.”
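For illustration only (a standard textbook formulation of Gaussian process regression, not language drawn from the claims or the specification), the training and prediction described in [Par 34] reduce to closed-form probability calculations over a covariance kernel:

$$\log p(\mathbf{y}\mid X,\theta) \;=\; -\tfrac{1}{2}\,\mathbf{y}^{\top}K_{\theta}^{-1}\mathbf{y} \;-\; \tfrac{1}{2}\log\lvert K_{\theta}\rvert \;-\; \tfrac{n}{2}\log 2\pi$$

$$\mu_{*} \;=\; k_{*}^{\top}K_{\theta}^{-1}\mathbf{y}, \qquad \sigma_{*}^{2} \;=\; k_{**} \;-\; k_{*}^{\top}K_{\theta}^{-1}k_{*}$$

where $K_{\theta}$ is the covariance (kernel) matrix evaluated over the training inputs, “training” maximizes the log marginal likelihood over the free kernel parameters $\theta$, and the posterior mean $\mu_{*}$ and variance $\sigma_{*}^{2}$ supply the prediction and its associated uncertainty at a new input.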
In view of MPEP 2111.01(I and III), see: Under a broadest reasonable interpretation (BRI), words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. The plain meaning of a term means the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time. The ordinary and customary meaning of a term may be evidenced by a variety of sources, including the words of the claims themselves, the specification, drawings, and prior art. However, the best source for determining the meaning of a claim term is the specification - the greatest clarity is obtained when the specification serves as a glossary for the claim terms. Phillips v. AWH Corp., 415 F.3d 1303, 1315, 75 USPQ2d 1321, 1327 (Fed. Cir. 2005) (en banc) ("[T]he specification ‘is always highly relevant to the claim construction analysis. Usually, it is dispositive; it is the single best guide to the meaning of a disputed term.’" (quoting Vitronics Corp. v. Conceptronic Inc., 90 F.3d 1576, 1582 (Fed. Cir. 1996)).
Applicant argues that the claims provide an improvement in satellite traffic management, particularly due to the training and identification of objects to move, and therefore integrate the claims into a practical application.
Examiner responds by explaining that, firstly, the training steps are identified as being a mathematical process. Being part of the abstract idea itself, they cannot be the basis for an improvement to technology or an integration into a practical application, nor can they provide significantly more. (MPEP 2106.05(a)(I): An inventive concept "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself." Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016))
Further, even if, for the sake of argument, the training steps were found not to be a mathematical process, they also amount to no more than mere instructions to apply. The limitations are claimed in a very general way that merely claims the idea of a solution, rather than explaining the actual process utilized. Particularly, the claims describe “training” a model for density prediction that includes “uncertainty quantification” without claiming how such training is actually accomplished and how output features such as uncertainty quantification are taken into account by the training. See (MPEP 2106.05(a) “An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107. In this respect, the improvement consideration overlaps with other considerations, specifically the particular machine consideration (see MPEP § 2106.05(b)), and the mere instructions to apply an exception consideration (see MPEP § 2106.05(f)). Thus, evaluation of those other considerations may assist examiners in making a determination of whether a claim satisfies the improvement consideration.” And (MPEP 2106.05(f)(1) “Whether the claim recites only the idea of a solution or outcome i.e., the claim fails to recite details of how a solution to a problem is accomplished. The recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). In contrast, claiming a particular solution to a problem or a particular way to achieve a desired outcome may integrate the judicial exception into a practical application or provide significantly more. See Electric Power, 830 F.3d at 1356, 119 USPQ2d at 1743. By way of example, in Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017), the steps in the claims described "the creation of a dynamic document based upon ‘management record types’ and ‘primary record types.’" 850 F.3d at 1339-40; 121 USPQ2d at 1945-46. The claims were found to be directed to the abstract idea of "collecting, displaying, and manipulating data." 850 F.3d at 1340; 121 USPQ2d at 1946. In addition to the abstract idea, the claims also recited the additional element of modifying the underlying XML document in response to modifications made in the dynamic document. 850 F.3d at 1342; 121 USPQ2d at 1947-48. Although the claims purported to modify the underlying XML document in response to modifications made in the dynamic document, nothing in the claims indicated what specific steps were undertaken other than merely using the abstract idea in the context of XML documents.
The court thus held the claims ineligible, because the additional limitations provided only a result-oriented solution and lacked details as to how the computer performed the modifications, which was equivalent to the words "apply it". 850 F.3d at 1341-42; 121 USPQ2d at 1947-48 (citing Electric Power Group, 830 F.3d at 1356, 119 USPQ2d at 1743-44 (cautioning against claims "so result focused, so functional, as to effectively cover any solution to an identified problem")).”)
Applying a computer to train a machine learning model at a high level of generality and then using that model is simply the act of instructing a computer to perform generic functions to perform that training and subsequent use of the model, which is merely an instruction to apply a computer to the judicial exception. The claim only recites the idea of a solution or outcome, i.e. that the model is “trained” without reciting how this training is actually accomplished. Further, the computer elements claimed are cited as merely generic tools to perform the operations; for additional clarity see ([Par 77] “These numbers come from the number of HASDM prediction epochs previously discussed and the number of MC runs (1,000). HASDM-ML can perform these predictions in 17.27 seconds for CHAMP and 17.54 seconds for GRACE on a laptop with a NVIDIA GeForce GTX 1070 Mobile graphics card. Using CPU, the model takes 143 seconds for CHAMP and 152 seconds for GRACE. FIG. 12 shows HASDM and HASDM-ML orbit-averaged densities during four geomagnetic storms with confidence bounds and the associated calibration curves.” [Par 28] “The system of the present disclosure is trained on multiple gigabytes of data (e.g., two solar cycles) captured by the Space Environment Technologies (SET) corporation from the US Air Force Space Command (AFSPC) JSpOC's High Accuracy Satellite Drag Model (HASDM) for scientific research covering the period from 2001-2020 (presently; continuously growing” [Par 34] “In some embodiments, Gaussian process regression (GPR) may be used because of its accuracy and robustness as a supervised machine learning technique. It is a nonparametric approach (e.g., does not take a functional form such as a polynomial) that calculates the probability distribution over all admissible functions that fit the data rather than calculating the probability distribution of parameters of a specific function. The output is assumed to have a multivariate Gaussian distribution, where the characteristics of the Gaussian model is dictated by the functional form of the covariance matrix or kernel”)
As to the identification of objects to move, this is a mental process and therefore cannot be the basis for an improvement to technology or an integration into a practical application, nor can it provide significantly more. (MPEP 2106.05(a)(I): An inventive concept "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself." Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016))
Further note that no actual movement or control of a satellite is required by the claims, merely that a satellite that should be moved is identified.
Applicant argues that the use of reduced order data integrates the claims into a practical application.
Examiner responds by explaining that the reduction of data using techniques such as proper orthogonal decomposition or principal component analysis amounts to no more than applying a mathematical algorithm to that data set and therefore cannot be the basis for an improvement to technology or an integration into a practical application, nor can it provide significantly more. (MPEP 2106.05(a)(I): An inventive concept "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself." Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016))
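As an illustrative sketch (standard proper orthogonal decomposition/principal component analysis via the singular value decomposition; the symbols below are not drawn from the claims), reducing a matrix of density snapshots $X$ to its leading $r$ modes is purely matrix algebra:

$$X = U\Sigma V^{\top}, \qquad X \approx U_{r}\Sigma_{r}V_{r}^{\top}, \qquad \mathbf{z}_{k} = U_{r}^{\top}\mathbf{x}_{k}$$

where $U_{r}$ holds the first $r$ left singular vectors (spatial modes) and $\mathbf{z}_{k}$ is the reduced-order representation of the full snapshot $\mathbf{x}_{k}$.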
Even in view of recent developments such as Ex parte Desjardins, the use of this dimensionally reduced data is not sufficient to integrate the claims into a practical application, because an improvement to AI is not at all what the claims are directed to; rather, the claims are directed to a method of satellite data analysis and movement prediction, and the use of machine learning is merely tangential to the abstract nature of the claims. The analysis of the data being allegedly “faster, easier, and less complex,” as argued, is not a result of some novel or significant improvement to the functioning of machine learning; it is the result of using math to reduce the scale of the data, meaning any further operation with that data would be “faster, easier, and less complex” because less data needs to be considered.
Further, it should be noted that the machine learning model producing reduced order output is merely a side effect of supplying the machine learning model with reduced order input; if there is an improvement here, it is not due to the machine learning operating in a new and interesting way, but solely due to the mathematical reduction of the training data beforehand.
Applicant argues that the previous characterization of the identification of an object to move oversimplifies the analysis.
Examiner responds by explaining that while the previous characterization of the identification (This kind of identification is a mental process equivalent to an observation and a judgement. For example, based on observation of data indicating that a first satellite or piece of space debris is accelerating towards a second satellite, a person could reasonably judge/conclude that the satellite should be moved to alter its trajectory to not come in contact with the debris/other satellite. Such identification “based on” an analysis of the reduced order density map could be as simple as observing the map, determining where the density is the highest based on this observation, and arbitrarily choosing a satellite in that high density zone that the person thinks should be moved somewhere with lower density.) may be simpler than what the applicants had in mind, it nonetheless reads on the broadest reasonable interpretation of the claims. Argued features, such as the requirement that “thousands of other space objects” must be considered in repositioning the satellite, are not claimed. Similarly, while doing this mentally may or may not conform to certain argued safety or efficiency requirements, these arguments are also moot because such requirements are not claimed. Declining to overgeneralize the claims does not require importing unclaimed features and specifics where there are none.
Applicant argues that the claims provide an improvement to the functioning of a computer because they “facilitate easy exploitation and rapid access to information of the HASDM database.”
Examiner responds by explaining that this alleged improvement is not an improvement to the functioning of a computer; rather, it is solely due to the mental and mathematical process of analyzing the data and performing mathematical operations on that data to make it simpler to parse. Any increase in speed or rapidity of access to this data is merely a result of A) there being less data to sort through and B) the natural speed advantage afforded by using a computer to sort through data compared to doing it mentally by hand; i.e., given a corpus of text it would be faster for a computer to find a certain phrase than for a person to painstakingly read each page until the desired phrase is found, but this does not make such searching any less of a mental process nor provide an improvement to the functioning of a computer.
(MPEP 2106.05(a)(I): An inventive concept "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself." Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016))
Applicant argues that the claims recite a “specific way of analyzing the HASDM data” and a specific way the machine learning model is trained, and therefore provide an improvement to a technical field.
Examiner responds by explaining that the analysis is not specific at all. In fact, the analysis is recited at such a high level that it could practically be performed in the human mind and, given broadest reasonable interpretation, could consist of just about anything, from comprehensive tracking of data representative of the movement of objects throughout the time series captured by the data and making predictions of future trajectories, to merely looking at the size of the data set and coming to the conclusion that “there is a lot of data.” Specifying that the data contains solar drivers, geomagnetic drivers, or density maps, or that the timespan covered by the data set corresponds to two solar cycles, does nothing more than clarify the form of the data that is generically analyzed. Being so highly generic, as claimed, the analysis in no way discloses a “specific way” of analyzing the HASDM data; the claims only reflect the idea of an outcome, i.e., that the data is analyzed, without reciting any meaningful specifics, and thus do not provide an improvement to a technical field and are not analogous to McRO.
Further, as explained above, the particularities of how the machine learning model is trained are absent from the claims. The limitations are claimed in a very general way that merely claims the idea of a solution, rather than explaining the actual process utilized. Particularly, the claims describe “training” a model for density prediction that includes “uncertainty quantification” without claiming how such training is actually accomplished and how output features such as uncertainty quantification are taken into account by the training. See (MPEP 2106.05(a) “An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107. In this respect, the improvement consideration overlaps with other considerations, specifically the particular machine consideration (see MPEP § 2106.05(b)), and the mere instructions to apply an exception consideration (see MPEP § 2106.05(f)). Thus, evaluation of those other considerations may assist examiners in making a determination of whether a claim satisfies the improvement consideration.” And (MPEP 2106.05(f)(1) “Whether the claim recites only the idea of a solution or outcome i.e., the claim fails to recite details of how a solution to a problem is accomplished. The recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). In contrast, claiming a particular solution to a problem or a particular way to achieve a desired outcome may integrate the judicial exception into a practical application or provide significantly more. See Electric Power, 830 F.3d at 1356, 119 USPQ2d at 1743. By way of example, in Intellectual Ventures I v. Capital One Fin. Corp., 850 F.3d 1332, 121 USPQ2d 1940 (Fed. Cir. 2017), the steps in the claims described "the creation of a dynamic document based upon ‘management record types’ and ‘primary record types.’" 850 F.3d at 1339-40; 121 USPQ2d at 1945-46. The claims were found to be directed to the abstract idea of "collecting, displaying, and manipulating data." 850 F.3d at 1340; 121 USPQ2d at 1946. In addition to the abstract idea, the claims also recited the additional element of modifying the underlying XML document in response to modifications made in the dynamic document. 850 F.3d at 1342; 121 USPQ2d at 1947-48. Although the claims purported to modify the underlying XML document in response to modifications made in the dynamic document, nothing in the claims indicated what specific steps were undertaken other than merely using the abstract idea in the context of XML documents.
The court thus held the claims ineligible, because the additional limitations provided only a result-oriented solution and lacked details as to how the computer performed the modifications, which was equivalent to the words "apply it". 850 F.3d at 1341-42; 121 USPQ2d at 1947-48 (citing Electric Power Group, 830 F.3d at 1356, 119 USPQ2d at 1743-44 (cautioning against claims "so result focused, so functional, as to effectively cover any solution to an identified problem")).”)
If details in the specification are relied upon to further define this training, the training instead becomes a mathematical process in view of ([Par 34] “In some embodiments, Gaussian process regression (GPR) may be used because of its accuracy and robustness as a supervised machine learning technique. It is a nonparametric approach (e.g., does not take a functional form such as a polynomial) that calculates the probability distribution over all admissible functions that fit the data rather than calculating the probability distribution of parameters of a specific function. The output is assumed to have a multivariate Gaussian distribution, where the characteristics of the Gaussian model is dictated by the functional form of the covariance matrix or kernel. The training phase optimizes the free parameters of the covariance kernel such that the multivariate Gaussian best describes the distribution of the observed data points. GPR characterizes the response of a system or variable to changes in input conditions and can be used to predict the variable at a new set of input conditions using the posterior conditional probability.”)
In this case, the training limitation is part of the abstract idea itself and therefore cannot integrate the claims into a practical application nor provide significantly more.
Applying a computer to train a machine learning model at a high level of generality and then using that model is simply the act of instructing a computer to perform generic functions to perform that training and subsequent use of the model, which is merely an instruction to apply a computer to the judicial exception. The claim only recites the idea of a solution or outcome, i.e. that the model is “trained” without reciting how this training is actually accomplished. Further, the computer elements claimed are cited as merely generic tools to perform the operations; for additional clarity see ([Par 77] “These numbers come from the number of HASDM prediction epochs previously discussed and the number of MC runs (1,000). HASDM-ML can perform these predictions in 17.27 seconds for CHAMP and 17.54 seconds for GRACE on a laptop with a NVIDIA GeForce GTX 1070 Mobile graphics card. Using CPU, the model takes 143 seconds for CHAMP and 152 seconds for GRACE. FIG. 12 shows HASDM and HASDM-ML orbit-averaged densities during four geomagnetic storms with confidence bounds and the associated calibration curves.” [Par 28] “The system of the present disclosure is trained on multiple gigabytes of data (e.g., two solar cycles) captured by the Space Environment Technologies (SET) corporation from the US Air Force Space Command (AFSPC) JSpOC's High Accuracy Satellite Drag Model (HASDM) for scientific research covering the period from 2001-2020 (presently; continuously growing” [Par 34] “In some embodiments, Gaussian process regression (GPR) may be used because of its accuracy and robustness as a supervised machine learning technique. It is a nonparametric approach (e.g., does not take a functional form such as a polynomial) that calculates the probability distribution over all admissible functions that fit the data rather than calculating the probability distribution of parameters of a specific function. The output is assumed to have a multivariate Gaussian distribution, where the characteristics of the Gaussian model is dictated by the functional form of the covariance matrix or kernel”)
To recount previous discussion on this topic, McRO has a very specific fact pattern that is not applicable to the current claims. McRO was found to be eligible because it automated and enabled the computer processing of tasks that previously could not be processed by anything other than a human, and importantly claimed how the problem was solved in a very particular way (MPEP §2106.05(a)(II) “The basis for the McRO court's decision was that the claims were directed to an improvement in computer animation and thus did not recite a concept similar to previously identified abstract ideas. Id. The court relied on the specification's explanation of how the claimed rules enabled the automation of specific animation tasks that previously could not be automated. 837 F.3d at 1313, 120 USPQ2d at 1101…The McRO court also noted that the claims at issue described a specific way (use of particular rules to set morph weights and transitions through phonemes) to solve the problem of producing accurate and realistic lip synchronization and facial expressions in animated characters, rather than merely claiming the idea of a solution or outcome, and thus were not directed to an abstract idea. 837 F.3d at 1313, 120 USPQ2d at 1101.”)
The current claims, in contrast, particularly the analysis step, are recited at a very high level and the details of the analysis are not claimed. As the claims stand in their current form, the analysis step is a mental process that could be as simple as looking at the data obtained from the HASDM dataset and making basic judgments about that data, judgments that could be as simple as “there are a lot of data points.” Further, this kind of generic, high-level analysis of data is not something that was previously impossible to automate on a computer, and in fact has been a key functionality of computers since their inception. As such, the claims are more akin to FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 USPQ2d 1293 (Fed. Cir. 2016) than to McRO. Therefore, the claims do in fact preempt all possible approaches to analyzing HASDM data.
MPEP §2106.05(a): “For example, in McRO, the court relied on the specification’s explanation of how the particular rules recited in the claim enabled the automation of specific animation tasks that previously could only be performed subjectively by humans, when determining that the claims were directed to improvements in computer animation instead of an abstract idea. McRO, 837 F.3d at 1313-14, 120 USPQ2d at 1100-01.” And MPEP §2106.05(a)(II): “In McRO, the Federal Circuit held claimed methods of automatic lip synchronization and facial expression animation using computer-implemented rules to be patent eligible under 35 U.S.C. 101, because they were not directed to an abstract idea. McRO, 837 F.3d at 1316, 120 USPQ2d at 1103. The basis for the McRO court's decision was that the claims were directed to an improvement in computer animation and thus did not recite a concept similar to previously identified abstract ideas. Id. The court relied on the specification's explanation of how the claimed rules enabled the automation of specific animation tasks that previously could not be automated. 837 F.3d at 1313, 120 USPQ2d at 1101. The McRO court indicated that it was the incorporation of the particular claimed rules in computer animation that "improved [the] existing technological process", unlike cases such as Alice where a computer was merely used as a tool to perform an existing process. 837 F.3d at 1314, 120 USPQ2d at 1102. The McRO court also noted that the claims at issue described a specific way (use of particular rules to set morph weights and transitions through phonemes) to solve the problem of producing accurate and realistic lip synchronization and facial expressions in animated characters, rather than merely claiming the idea of a solution or outcome, and thus were not directed to an abstract idea. 837 F.3d at 1313, 120 USPQ2d at 1101.”
In contrast, MPEP § 2106.04(a)(2)(III)(C): “Another example is FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 USPQ2d 1293 (Fed. Cir. 2016). The patentee in FairWarning claimed a system and method of detecting fraud and/or misuse in a computer environment, in which information regarding accesses of a patient’s personal health information was analyzed according to one of several rules (i.e., related to accesses in excess of a specific volume, accesses during a pre-determined time interval, or accesses by a specific user) to determine if the activity indicates improper access. 839 F.3d. at 1092, 120 USPQ2d at 1294. The court determined that these claims were directed to a mental process of detecting misuse, and that the claimed rules here were "the same questions (though perhaps phrased with different words) that humans in analogous situations detecting fraud have asked for decades, if not centuries." 839 F.3d. at 1094-95, 120 USPQ2d at 1296.”
Response to Arguments - 103
Applicant's arguments filed 11/11/2025 have been fully considered but they are not persuasive.
Applicant argues that no prior art teaches analyzing HASDM data or determining input data based at least in part on the analysis of the HASDM data.
Examiner responds by explaining that Gondelach teaches analyze High Accuracy([Page 2 Par 4] “In this work, the reduced-order modeling technique for density estimation is further developed and TLE data are used to estimate the thermospheric density” [Page 6 Par 1] “From these TLE data, the state of an object (position and velocity) at any epoch can be extracted using the SGP4/SDP4 models (Hoots & Roehrich, 1980; Vallado et al., 2006). Hence, the effect of drag can be observed in TLE orbital data if the drag perturbation is strong enough.”) determine input data ([Page 5 Par 4] “The space weather inputs uk used in the dynamical model are taken from the inputs required by the original density models, see second column in Table 2. In addition to these default inputs, we added the next-hour values for key space weather indices to improve the DMDc prediction” [Page 3 Par 4] “ In our case, the full state space consists of the neutral mass density values on a dense uniform grid in latitude, local solar time, and altitude.”) based at least in part on the analysis([Page 2 Par 4] “In this work, the reduced-order modeling technique for density estimation is further developed and TLE data are used to estimate the thermospheric density” [Page 6 Par 1] “From these TLE data, the state of an object (position and velocity) at any epoch can be extracted using the SGP4/SDP4 models (Hoots & Roehrich, 1980; Vallado et al., 2006). Hence, the effect of drag can be observed in TLE orbital data if the drag perturbation is strong enough.” [Page 5 Par 4] “The space weather inputs uk used in the dynamical model are taken from the inputs required by the original density models, see second column in Table 2. In addition to these default inputs, we added the next-hour values for key space weather indices to improve the DMDc prediction” [Page 5 Par 1] “Furthermore, with respect to Mehta et al. (2018), we have improved the prediction performance of the linear model by including nonlinear space weather inputs”)
While HASDM makes obvious process High Accuracy Satellite Drag Model (HASDM) data associated with a HASDM model, ([Abstract] “The Air Force Space Battlelab’s High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites.” [Page 2504 Col 1 Par 5] “The High Accuracy Satellite Drag Model (HASDM) initiative uses the Dynamic Calibration Atmosphere (DCA) algorithm to solve for thermospheric neutral density near real-time from the observed drag effects on a set of low-perigee inactive payloads and debris, referred to as calibration satellites. Many different calibration satellites with different orbits may be exploited to recover a dynamically varying global density field. The greater the number of calibration satellites, the better the accuracy. For this initiative, we used up to 75 such satellites.”) analysis of the HASDM data; ([Abstract] “The Air Force Space Battlelab’s High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites.” [Page 2504 Col 1 Par 5] “The High Accuracy Satellite Drag Model (HASDM) initiative uses the Dynamic Calibration Atmosphere (DCA) algorithm to solve for thermospheric neutral density near real-time from the observed drag effects on a set of low-perigee inactive payloads and debris, referred to as calibration satellites. Many different calibration satellites with different orbits may be exploited to recover a dynamically varying global density field. The greater the number of calibration satellites, the better the accuracy. For this initiative, we used up to 75 such satellites.”)
In particular, Gondelach teaches the specific analysis steps while HASDM teaches the use of HASDM data. HASDM does not merely describe “how the HASDM data is developed” (although arguably that would still be enough to map, with Gondelach teaching the analysis and HASDM defining the structure and development of the data to be used;) instead, HASDM also explicitly discloses the analysis of the data generated by the model, ([Abstract] “The Air Force Space Battlelab’s High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites.” [Page 2504 Col 1 Par 5] “The High Accuracy Satellite Drag Model (HASDM) initiative uses the Dynamic Calibration Atmosphere (DCA) algorithm to solve for thermospheric neutral density near real-time from the observed drag effects on a set of low-perigee inactive payloads and debris, referred to as calibration satellites. Many different calibration satellites with different orbits may be exploited to recover a dynamically varying global density field. The greater the number of calibration satellites, the better the accuracy. For this initiative, we used up to 75 such satellites.”) with figures like 6-9 describing an analysis of the accuracy of the data generated by the model.
[Embedded image: media_image1.png (greyscale)]
[Embedded image: media_image2.png (greyscale)]
Applicant argues that no prior art teaches outputting the mean estimate of density and the uncertainty associated with the mean estimates.
Examiner responds by explaining that, firstly, the specific requirement for the “mean estimate” of the density is not claimed and therefore is moot.
Further, Gondelach clearly describes a model that outputs a reduced order density map with an uncertainty quantification ([Page 5 Par 2] “ In this work, we have developed three different ROM density models using three different atmospheric models to obtain the snapshot matrices…” [Page 2 Par 2] “The technique combines the predictive abilities of physics-based models with the computational speed of empirical models by developing a Reduced-Order Model (ROM) that represents the original high-dimensional system using a smaller number of parameters. The order reduction is achieved using proper orthogonal decomposition (POD) (Golub & Reinsch, 1970; Rowley et al., 2004), also known as principal component analysis (PCA)” [Page 2 Par 4] “In this work, the reduced-order modeling technique for density estimation is further developed and TLE data are used to estimate the thermospheric density… The density estimation using TLE data is achieved by simultaneously estimating the orbits and BCs of several objects and the reduced-order density state using an unscented Kalman filter” [Page 3 Par 1] “Accurate thermospheric density estimates are computed by assimilating TLE data in reduced-order density models.” [Page 3 Par 6] “First, to make the problem tractable, the state space dimension is reduced using POD” [Figure 7] Shows maps of mass density from reduced order data output by the model [Page 14 Par 1] “ROM models mimic their base model can also be seen in Figure 7, which shows maps of the modeled and estimated density on 8 August 2002 at 450-km altitude. For example, the simple density distribution in the JB2008 model is also visible in the ROM-JB2008 density, whereas the NRLMSISE-00- and TIEGCM-based ROM models show more complex density distributions. On the other hand, independent of the base model, the ROM-estimated densities have a similar magnitude, which indicates successful calibration.” [Page 14 Par 2] “Figures 8 and 9 show the uncertainty in the estimated density for different altitudes, latitudes, and local solar time. The uncertainty in the estimated density is smaller for lower altitude and inside the diurnal bulge. This indicates that the density estimation is more accurate when the drag signal is stronger”)
[Embedded image: media_image3.png (greyscale)]
The “overview of the process” in Gondelach, cited in the arguments as proof that the model of Gondelach does not have uncertainty capabilities, is not an ordered list of the steps that make up the model; it is merely a listing of the sections of the academic paper itself. Further, the Kalman filter is not a separate system from the model itself. See the algorithm describing the ROM density estimation.
[Embedded image: media_image4.png (greyscale)]
Further, both simulated and real TLE data are used to present the results of the models in the results section of the paper; the simulated TLE data essentially serves as test cases. The emphasis in the previous action appears to have been a typo; the correct emphasis should have been: ([Page 3 Par 5] “the performance of the ROM density estimation is assessed using simulated and real TLE data, and the uncertainty quantification and prediction capability of the model are demonstrated.”)
[Embedded image: media_image5.png (greyscale)]
Applicant argues that no prior art teaches train a density prediction model using machine learning by modeling identification through a correlation of the input data to the output data with reduced dimensionality, the density prediction model being trained to output a reduced order mass density map with an uncertainty quantification associated with the reduced order mass density map for accurately predicting trajectories of satellites;
Examiner responds by explaining that this is taught by the combination of the previously cited references; particularly:
Gondelach teaches ([Page 5 Par 2] “ In this work, we have developed three different ROM density models using three different atmospheric models to obtain the snapshot matrices…” [Page 10 Par 1] “The performance of the dynamic ROM models is tested by comparing density forecasts with training data. Using the three different ROM density models the density was predicted for 5 days during quiet space weather conditions and during a geomagnetic storm in 2002. The resulting density forecast errors (the root-mean-square (RMS) percentage error on the three-dimensional spatial grid) and space weather conditions are shown in Figure 2. The predictions using the ROM model based on JB2008 are most accurate. This good performance can be explained by the superior space weather proxies used by the ROM-JB2008 model.”) ([Page 2 Par 2] “The technique combines the predictive abilities of physics-based models with the computational speed of empirical models by developing a Reduced-Order Model (ROM) that represents the original high-dimensional system using a smaller number of parameters. The order reduction is achieved using proper orthogonal decomposition (POD) (Golub & Reinsch, 1970; Rowley et al., 2004), also known as principal component analysis (PCA)” [Page 2 Par 4] “In this work, the reduced-order modeling technique for density estimation is further developed and TLE data are used to estimate the thermospheric density… The density estimation using TLE data is achieved by simultaneously estimating the orbits and BCs of several objects and the reduced-order density state using an unscented Kalman filter” [Page 3 Par 1] “Accurate thermospheric density estimates are computed by assimilating TLE data in reduced-order density models.” [Page 3 Par 6] “First, to make the problem tractable, the state space dimension is reduced using POD”) the density prediction model ([Page 5 Par 2] “ In this work, we have developed three different ROM density models using three different atmospheric models to obtain the snapshot matrices…” [Page 10 Par 1] “The performance of the dynamic ROM models is tested by comparing density forecasts with training data. Using the three different ROM density models the density was predicted for 5 days during quiet space weather conditions and during a geomagnetic storm in 2002. The resulting density forecast errors (the root-mean-square (RMS) percentage error on the three-dimensional spatial grid) and space weather conditions are shown in Figure 2. The predictions using the ROM model based on JB2008 are most accurate. This good performance can be explained by the superior space weather proxies used by the ROM-JB2008 model.”) ([Figure 7] Shows maps of mass density from reduced order data output by the model [Page 14 Par 1] “ROM models mimic their base model can also be seen in Figure 7, which shows maps of the modeled and estimated density on 8 August 2002 at 450-km altitude. For example, the simple density distribution in the JB2008 model is also visible in the ROM-JB2008 density, whereas the NRLMSISE-00- and TIEGCM-based ROM models show more complex density distributions. 
On the other hand, independent of the base model, the ROM-estimated densities have a similar magnitude, which indicates successful calibration.”) with an uncertainty quantification associated with the reduced order mass density map ([Figure 8, Figure 9] Show uncertainty output from the reduced order model [Page 14 Par 2] “Figures 8 and 9 show the uncertainty in the estimated density for different altitudes, latitudes, and local solar time. The uncertainty in the estimated density is smaller for lower altitude and inside the diurnal bulge. This indicates that the density estimation is more accurate when the drag signal is stronger” [Page 17 Par 3] “In addition to estimating the global density, we obtained estimates of the uncertainty in the density, see Figures 8 and 9. This information is very valuable for uncertainty quantification, which is needed for, for example, conjunction assessments.“ [Page 18 Par 4] “Furthermore, the ROM model was shown to be able to provide accurate density forecasts. The density estimates and predictions can therefore be used for both improved orbit determination and orbit prediction, which are fundamental for space situational awareness. Moreover, the technique provides estimates of the uncertainty in the density that can be used for uncertainty quantification for, for example, conjunction assessment”)
[Embedded image: media_image6.png (greyscale)]
[Embedded image: media_image7.png (greyscale)]
for accurately predicting trajectories of satellites; ([Summary] “The trajectory of satellites that orbit the Earth at low altitudes (below 1,000 km) is affected by drag caused by the Earth's upper atmosphere. To accurately compute the effect of the drag on the orbit, the mass density of the atmosphere needs to be known. This density can however change quickly due to changing solar activity. To model the changes in the upper atmosphere, we developed a computationally inexpensive numerical model. Using the model and observations of satellite orbits, we derive information about the atmospheric density. These estimated densities can be used for improving atmospheric and orbit predictions.” [Page 5 Par 2] “ In this work, we have developed three different ROM density models using three different atmospheric models to obtain the snapshot matrices…” [Page 10 Par 1] “The performance of the dynamic ROM models is tested by comparing density forecasts with training data. Using the three different ROM density models the density was predicted for 5 days during quiet space weather conditions and during a geomagnetic storm in 2002. The resulting density forecast errors (the root-mean-square (RMS) percentage error on the three-dimensional spatial grid) and space weather conditions are shown in Figure 2. The predictions using the ROM model based on JB2008 are most accurate. This good performance can be explained by the superior space weather proxies used by the ROM-JB2008 model.”)
Note how the uncertainty graphs shown in figures 8 and 9 are produced by the same model as the reduced order density predictions, ROM-JB2008.
While Sherman makes obvious training a model using machine learning by modeling identification through a correlation of the input data to the output data, the machine learning model being trained; ([Col 10 line 45-53] “The machine learning service can employ a learning algorithm to build an inference model based on the data, which inference model can be applied to future OD requests and scenarios to automatically apply corrections to predicted OD or to notify a user of applicable corrections to be applied manually. The learning algorithm can be any type of learning algorithm such as, but not limited to, supervised learning (e.g., classification algorithm)”)
Claim Objections
Claims 11 and 20 are objected to because of the following informalities:
Claim 11 recites “the likelihood of collision”; however, no likelihood of collision was previously recited. It is recommended to either change the language to read “a likelihood of collision” or rewrite the claim to depend on claim 9, which introduces a likelihood of collision.
Claim 20 recites “determining a trajectory for the object based at least in part on the mass density map.” Since the words “reduced order” were added before every other recitation of the “mass density map,” it is recommended to add them here as well to avoid any possible issues with antecedent basis. This should now read “determining a trajectory for the object based at least in part on the reduced order mass density map.”
Claim 20 recites “the likelihood of collision”; however, no likelihood of collision was previously recited. It is recommended to either change the language to read “a likelihood of collision” or rewrite the claim to depend on claim 18, which introduces a likelihood of collision.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3, 5-6, 8-12, 14, 16, and 18-24 are rejected under 35 U.S.C. 101 because they are directed to an abstract idea without significantly more.
Claim 1 (Statutory Category – Process)
Step 2A – Prong 1: Judicial Exception Recited?
Yes, the claim recites a mental process, specifically:
MPEP 2106.04(a)(2)(III): “Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.”
Further, the MPEP recites “The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation.”
analyze High Accuracy Satellite Drag Model (HASDM) data associated with a HASDM model, the HASDM data including at least one or more solar drivers, one or more geomagnetic drivers, or one or more density maps and corresponding to two solar cycles;
Analyzing data at a high level of generality is a mental process that involves evaluating a set of data and making certain judgments about it. For example, business owners have looked at transaction logs to determine buying trends and their most popular products since the invention of written language. An analysis step recited at such a high level could be something as simple as observing a set of data and concluding that “there is a lot of data.” Limiting the data observed to only data that falls within a certain time frame does not change the capability of analyzing said data mentally, nor does specifying particular types of data that are included in the dataset.
identify an object to move to avoid a collision based at least in part on an analysis of the reduced order mass density map to determine an indication of a collision.
This kind of identification is a mental process equivalent to an observation and a judgement. For example, based on observation of data indicating that a first satellite or piece of space debris is accelerating towards a second satellite, a person could reasonably judge/conclude that the satellite should be moved to alter its trajectory to not come in contact with the debris/other satellite. Such identification “based on” an analysis of the reduced order density map could be as simple as observing the map, determining where the density is the highest based on this observation, and arbitrarily choosing a satellite in that high density zone that the person thinks should be moved somewhere with lower density.
The claim also recites a mathematical concept, in particular:
reduce dimensionality of the at least one subset of output data from the HASDM data
Mathematically reducing the dimensionality of data amounts to no more than a mathematical concept. The specification makes it clear that the claimed dimensionality reduction is a textual placeholder for proper orthogonal decomposition ([Par 28] “the present disclosure incorporates proper orthogonal decomposition for dimensionality reduction”). Further, see [Par 52-56]. Note that proper orthogonal decomposition and principal component analysis are two terms for the same technique.
train a density prediction model using machine learning by modeling identification through a correlation of the input data to the output data with reduced dimensionality, the density prediction model being trained to output a reduced order mass density map with an uncertainty quantification associated with the reduced order density map for accurately predicting trajectories of satellites
Numerically correlating input data to output data is a mathematical process, and therefore this training step is a mathematical concept. Specifying the particular output of this mathematical operation does not change the fact that it is math.
Additionally, see paragraph 34 of the specification, which describes the process of training as tantamount to probability calculations ([Par 34] “In some embodiments, Gaussian process regression (GPR) may be used because of its accuracy and robustness as a supervised machine learning technique. It is a nonparametric approach (e.g., does not take a functional form such as a polynomial) that calculates the probability distribution over all admissible functions that fit the data rather than calculating the probability distribution of parameters of a specific function. The output is assumed to have a multivariate Gaussian distribution, where the characteristics of the Gaussian model is dictated by the functional form of the covariance matrix or kernel. The training phase optimizes the free parameters of the covariance kernel such that the multivariate Gaussian best describes the distribution of the observed data points. GPR characterizes the response of a system or variable to changes in input conditions and can be used to predict the variable at a new set of input conditions using the posterior conditional probability.”)
Should it be found that this step is not a mathematical concept, it is also an example of mere instructions to apply.
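For illustration only, and not as a characterization of the applicant's actual implementation, a GPR-based training and prediction routine of the kind described in [Par 34] can be carried out entirely with generic, off-the-shelf library calls; the data, kernel choice, and variable names below are hypothetical:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical stand-ins for the claimed inputs (e.g., solar/geomagnetic indices,
# day of year, latitude/longitude/time) and outputs (reduced-order density coefficients).
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))
y_train = rng.random(200)

# "Training" optimizes the free parameters of the covariance kernel (cf. [Par 34]);
# the library performs this optimization automatically when fit() is called.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X_train, y_train)

# Prediction returns a mean estimate together with an associated uncertainty
# (standard deviation) at new input conditions.
X_new = rng.random((5, 4))
mean, std = gpr.predict(X_new, return_std=True)
```

The point of the sketch is that each step is a generic library operation; nothing in it requires more than instructing a computer to perform standard probability calculations.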
Step 2A – Prong 2: Integrated into a Practical Application?
Insignificant Extra-Solution Activity (MPEP 2106.05(g)): the courts have found mere data gathering and post-solution activity to be insignificant extra-solution activity.
Data gathering:
determine input data based at least in part on the analysis of the HASDM data, the input data comprising at least one of one or more solar indices, one or more geomagnetic indices, a day of the year, a latitude, a longitude, or a time of the day
Getting input data is merely the act of gathering that data. As noted before, the analysis is a mental process, and obtaining this input from the results of that analysis merely consists of gathering data determined through said analysis.
extract at least one subset of output data from the HASDM data
Extracting a subset of data from a larger dataset merely consists of gathering that smaller dataset from the larger data pool.
Mere Instructions to Apply (MPEP 2106.05(f)): the courts have found that merely applying a judicial exception such as an abstract idea, as by performing it on a computer, does not integrate the claim into a practical application.
Mere Instructions to Apply:
train a density prediction model using machine learning by modeling identification through a correlation of the input data to the output data with reduced dimensionality, the density prediction model being trained to output a reduced order mass density map with an uncertainty quantification associated with the reduced order mass density map for accurately predicting trajectories of satellites
Applying a computer to train a machine learning model at a high level of generality and then using that model is simply the act of instructing a computer to perform generic functions to perform that training and subsequent use of the model, which is merely an instruction to apply a computer to the judicial exception. The claim only recites the idea of a solution or outcome, i.e. that the model is “trained” without reciting how this training is actually accomplished. Further, the computer elements claimed are cited as merely generic tools to perform the operations; for additional clarity see ([Par 77] “These numbers come from the number of HASDM prediction epochs previously discussed and the number of MC runs (1,000). HASDM-ML can perform these predictions in 17.27 seconds for CHAMP and 17.54 seconds for GRACE on a laptop with a NVIDIA GeForce GTX 1070 Mobile graphics card. Using CPU, the model takes 143 seconds for CHAMP and 152 seconds for GRACE. FIG. 12 shows HASDM and HASDM-ML orbit-averaged densities during four geomagnetic storms with confidence bounds and the associated calibration curves.” [Par 28] “The system of the present disclosure is trained on multiple gigabytes of data (e.g., two solar cycles) captured by the Space Environment Technologies (SET) corporation from the US Air Force Space Command (AFSPC) JSpOC's High Accuracy Satellite Drag Model (HASDM) for scientific research covering the period from 2001-2020 (presently; continuously growing” [Par 34] “In some embodiments, Gaussian process regression (GPR) may be used because of its accuracy and robustness as a supervised machine learning technique. It is a nonparametric approach (e.g., does not take a functional form such as a polynomial) that calculates the probability distribution over all admissible functions that fit the data rather than calculating the probability distribution of parameters of a specific function. The output is assumed to have a multivariate Gaussian distribution, where the characteristics of the Gaussian model is dictated by the functional form of the covariance matrix or kernel”)
Step 2B: Claim provides an Inventive Concept?
No. As discussed with respect to Step 2A, the additional limitations are mere data gathering or post-solution activity (Insignificant Extra-Solution Activity), Well-Understood, Routine, Conventional Activity, or a general purpose computer, and they do not impose any meaningful limits on practicing the abstract idea; therefore, the claim does not provide an inventive concept in Step 2B.
Data gathering:
determine input data based at least in part on the analysis of the HASDM data, the input data comprising at least one of one or more solar indices, one or more geomagnetic indices, a day of the year, a latitude, a longitude, or a time of the day
Getting input data is merely the act of gathering that data. As noted before, the analysis is a mental process, and obtaining this input from the results of that analysis merely consists of gathering data determined through said analysis.
extract at least one subset of output data from the HASDM data
Extracting a subset of data from a larger dataset merely consists of gathering that smaller dataset from the larger data pool.
The courts have found that claim elements equivalent to merely gathering data are not indicative of integration into a practical application nor evidence of an inventive concept (MPEP 2106.05(g) (Mere Data Gathering) (i): performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40, 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989)).
Mere Instructions to Apply (MPEP 2106.05(f)): the courts have found that merely applying a judicial exception, such as an abstract idea, by performing it on a computer does not integrate the claim into a practical application.
Mere Instructions to Apply:
train a density prediction model using machine learning by modeling identification through a correlation of the input data to the output data with reduced dimensionality, the density prediction model being trained to output a reduced order mass density map with an uncertainty quantification associated with the reduced order mass density map for accurately predicting trajectories of satellites
Applying a computer to train a machine learning model at a high level of generality and then use that model is simply the act of instructing a computer to perform generic functions to perform that training and subsequent use of the model, which is merely an instruction to apply a computer to the judicial exception. The claim only recites the idea of a solution or outcome, i.e. that the model is “trained” without reciting how this simulation is actually accomplished. Further, the computer elements claimed are cited as merely generic tools to perform the operations; for additional clarity see ([Par 77] “These numbers come from the number of HASDM prediction epochs previously discussed and the number of MC runs (1,000). HASDM-ML can perform these predictions in 17.27 seconds for CHAMP and 17.54 seconds for GRACE on a laptop with a NVIDIA GeForce GTX 1070 Mobile graphics card. Using CPU, the model takes 143 seconds for CHAMP and 152 seconds for GRACE. FIG. 12 shows HASDM and HASDM-ML orbit-averaged densities during four geomagnetic storms with confidence bounds and the associated calibration curves.” [Par 28] “The system of the present disclosure is trained on multiple gigabytes of data (e.g., two solar cycles) captured by the Space Environment Technologies (SET) corporation from the US Air Force Space Command (AFSPC) JSpOC's High Accuracy Satellite Drag Model (HASDM) for scientific research covering the period from 2001-2020 (presently; continuously growing”)
The courts have found that such mere instructions to apply are not indicative of integration into a practical application nor recitation of significantly more than the judicial exception (MPEP 2106.05(f) “Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words "apply it" (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do "‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’". Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. V. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983”)
Additionally, the following elements are examples of Well-Understood, Routine, Conventional Activity (WURC).
The courts have found that claim elements constituting Well-Understood, Routine, Conventional Activity (WURC) are not indicative of integration into a practical application nor evidence of an inventive concept (MPEP 2106.05(d)).
WURC:
reduce dimensionality of the {data}
[Examiner’s note: proper orthogonal decomposition and principal component analysis refer to the same technique]
Gondelach ([Page 3 Par 2])
Proper orthogonal decomposition truncation method for data denoising and order reduction ([Abstract] [Page 1 Col 1 Par 1- Col 2 Par 2])
Principal component analysis: a review and recent developments ([Page 1 Par 1-2])
Principal Component Analysis in 3 Simple Steps ([Page 1 Par 1] [Page 2 Par 1][Page 2 Par 3])
train a density prediction model using machine learning … through a correlation of the input data to the output data
How Machine Learning Algorithms Work (they learn a mapping of input to output) ([Page 1 Par 1-10])
What Is Model Training ([Page 1 Par 1-3])
A Quick Overview on Machine Learning ([Page 4 Par 2])
Explained: Neural networks ([Page 2 Par 5])
Moreover, under Mere Instructions To Apply An Exception (MPEP 2106.05(f)), the courts have found that simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. In light of this, the additional generic computer component elements of “A system, comprising: at least one computing device; at least one application executable in the at least one computing device, wherein, when executed, the at least one application causes the at least one computing device to at least…” are not sufficient to integrate a judicial exception into a practical application nor to provide evidence of an inventive concept.
The additional elements have been considered both individually and as an ordered combination in the consideration of whether they constitute significantly more, and have been determined not to constitute such.
The claim is ineligible.
Claim 12. The elements of claim 12 are substantially the same as those of claim 1. Therefore, the elements of claim 12 are rejected due to the same reasons as outlined above for claim 1.
Claim 3 recites “wherein the output data comprises mass density on a three-dimensional grid.”
This merely clarifies the form of the output data, and is therefore merely an extension of the data gathering and mental process steps.
Claim 5 recites “wherein, when executed, the at least one application causes the at least one computing device to reduce dimensionality of the HASDM data based at least in part on an application of principal component analysis (PCA).”
PCA reduces dimensionality using a mathematic algorithm, extracting simplified equations from complex mathematic relationships, as evidenced by ([Par 32] “PCA decomposes the spatial variations into mutually orthogonal, time-independent coherent structures or basis functions (BFs; also known as principal components - PCs or empirical orthogonal functions - EOFs) that capture the covariance in space. This has the effect of reducing the spatial dimensions or degrees of freedom. The BFs are likely to represent physical dynamical processes, however, it may not always be the case. The time-dependent coefficients represent the weighting for the BFs and capture the temporal variations as a function of the input drivers. The BFs, and hence the ROM, is valid for the conditions represented in the training ensemble.”) Therefore, using PCA to reduce the dimensionality of the data is merely a mathematic process
Further, doing such is an example of well-understood, routine, conventional activity. See below:
Gondelach ([Page 3 Par 2])
Proper orthogonal decomposition truncation method for data denoising and order reduction ([Abstract] [Page 1 Col 1 Par 1- Col 2 Par 2])
Principal component analysis: a review and recent developments ([Page 1 Par 1-2])
Principal Component Analysis in 3 Simple Steps ([Page 1 Par 1] [Page 2 Par 1][Page 2 Par 3])
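For purposes of illustration of the conventional, mathematic character of the PCA/POD dimensionality reduction discussed above for claim 5, the following is a minimal sketch in Python using only standard linear algebra. The matrix shapes, names, and retained number of basis functions are hypothetical and are not taken from the applicant's disclosure or the cited references.

```python
# Illustrative sketch only: reduce a (snapshots x grid points) matrix to a small
# number of basis functions (principal components / EOFs) via the SVD, which is
# the mathematical operation underlying PCA/POD.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.random((200, 5000))    # hypothetical density snapshots on a flattened grid

mean = snapshots.mean(axis=0)
centered = snapshots - mean            # remove the mean state

# Economy-size SVD: rows of vt are the basis functions
u, s, vt = np.linalg.svd(centered, full_matrices=False)

k = 10                                 # retain 10 basis functions
coeffs = centered @ vt[:k].T           # reduced-order (time-dependent) coefficients
reconstructed = coeffs @ vt[:k] + mean # map back from the reduced state to the full grid
```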
Claim 6 recites “wherein performance of the density prediction model is improved based at least in part on an application of a loss function.”
A loss function is a mathematic function, and using it is merely the process of applying a mathematic concept.
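For purposes of illustration only, the following is a minimal sketch of a mean-squared-error loss function; the arrays and values are hypothetical and are not taken from the applicant's disclosure.

```python
# Illustrative sketch only: a mean-squared-error loss, i.e., a purely mathematic
# function of predictions and targets whose minimization drives model training.
import numpy as np

def mse_loss(predicted, target):
    return np.mean((predicted - target) ** 2)

predicted = np.array([1.0, 2.0, 3.0])   # hypothetical model outputs
target = np.array([1.1, 1.9, 3.2])      # hypothetical reference densities
print(mse_loss(predicted, target))      # smaller values indicate better model performance
```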
Claim 8 recites “wherein training the model comprises applying different combinations of input data.”
This merely clarifies what data the training is based upon, and is therefore merely an extension of the mathematic process and mere instructions to apply an exception.
Claim 9 recites “wherein, when executed, the at least one application further causes the at least one computing device to at least determine a likelihood of collision associated with the object based at least in part on the reduced order mass density map and object data.”
Determining the likelihood of an event is a mathematic process equivalent to calculating the statistical probability of that event, and is therefore a mathematic concept.
Claim 10 recites “wherein the object comprises a satellite, and the object data comprises a location of the satellite.”
This merely clarifies the form of the object and its associated data, and is therefore merely an extension of the mathematic process.
Claim 11 recites “wherein, when executed, the at least one application further causes the at least one computing device to at least determine a trajectory for the given object based at least in part on the reduced order mass density map and the likelihood of collision”
Determining an ideal trajectory for something in order to avoid a collision with another object is a mental process that has been in wide use since the invention of seafaring ships and is performed daily by millions of drivers on the road. For example, when changing lanes, a driver on the way to work determines the correct trajectory for their vehicle so that it does not collide with any other vehicle during the change.
Claim 14. The elements of claim 14 are substantially the same as those of claim 3. Therefore, the elements of claim 14 are rejected due to the same reasons as outlined above for claim 3.
Claim 16. The elements of claim 16 are substantially the same as those of claim 5. Therefore, the elements of claim 16 are rejected due to the same reasons as outlined above for claim 5.
Claim 18. The elements of claim 18 are substantially the same as those of claim 9. Therefore, the elements of claim 18 are rejected due to the same reasons as outlined above for claim 9.
Claim 19 recites “wherein the object comprises a satellite or debris object, and the object data comprises a location of the satellite or the debris object.”
This merely clarifies the form of the object and its associated data, and is therefore merely an extension of the mental and mathematic processes.
Claim 20. The elements of claim 20 are substantially the same as those of claim 11. Therefore, the elements of claim 20 are rejected due to the same reasons as outlined above for claim 11.
Claim 21 recites “wherein at least a portion of the input data is collected independently from the HASDM data.”
This element merely consists of collecting additional data, and is therefore merely the act of gathering data.
Claim 22. The elements of claim 22 are substantially the same as those of claim 21. Therefore, the elements of claim 22 are rejected due to the same reasons as outlined above for claim 21.
Claim 23 recites “wherein, when executed, the at least one application causes the at least one computing device to reduce dimensionality of the HASDM data based at least in part on an application of a convolution autoencoder (CAE).”
Using a CAE to reduce the dimensionality of data is merely the act of using a mathematic algorithm to reduce the data, and is therefore a mathematic concept.
Further, should it be found that this element is not a mathematic concept, it is also an example of well-understood, routine, conventional activity. See below:
Tutorial: Dimension Reduction – Autoencoders ([Page 2 Par 1-3])
PCA vs Autoencoders for Dimensionality Reduction ([Page 1 Par 1])
Generalized Autoencoder: A Neural Network Framework for Dimensionality Reduction ([Page 2 Col 1 Par 3])
Introduction to Dimensionality Reduction for Machine Learning ([Page 5 Par 13 – Page 6 Par 4])
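For purposes of illustration of the convolutional autoencoder discussed above for claim 23, the following is a minimal sketch in Python using the PyTorch library, in which an encoder compresses a gridded field to a low-dimensional code and a decoder reconstructs it. The layer sizes, grid dimensions, and names are hypothetical and are not taken from the applicant's disclosure or the cited references.

```python
# Illustrative sketch only: a convolutional autoencoder whose encoder reduces a
# gridded input to a small latent vector (dimensionality reduction) and whose
# decoder reconstructs the grid from that latent vector.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1),   # 1x24x24 -> 8x12x12
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),  # 8x12x12 -> 16x6x6
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 6 * 6, latent_dim),                     # latent (reduced) code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 6 * 6),
            nn.ReLU(),
            nn.Unflatten(1, (16, 6, 6)),
            nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2),    # 16x6x6 -> 8x12x12
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, kernel_size=2, stride=2),     # 8x12x12 -> 1x24x24
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = ConvAutoencoder()
x = torch.rand(4, 1, 24, 24)             # hypothetical batch of gridded density maps
recon, code = model(x)                   # code has only 10 dimensions per sample
loss = nn.functional.mse_loss(recon, x)  # reconstruction loss minimized during training
```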
Claim 24. The elements of claim 24 are substantially the same as those of claim 23. Therefore, the elements of claim 24 are rejected due to the same reasons as outlined above for claim 23.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5, 9-12, 14, 16, and 18-22 are rejected under 35 U.S.C. 103 as being unpatentable over Real-Time Thermospheric Density Estimation via Two-Line Element Data Assimilation (Hereinafter Gondelach) in view of High accuracy satellite drag model (HASDM) (Hereinafter HASDM) in further view of Sherman (US 11447273 B1) as well as Collision Probability Forecasting using a Monte Carlo Simulation (Hereinafter Duncan)
Claim 1. Gondelach makes obvious A system, comprising analyze ([Page 2 Par 4] “In this work, the reduced-order modeling technique for density estimation is further developed and TLE data are used to estimate the thermospheric density” [Page 6 Par 1] “From these TLE data, the state of an object (position and velocity) at any epoch can be extracted using the SGP4/SDP4 models (Hoots & Roehrich, 1980; Vallado et al., 2006). Hence, the effect of drag can be observed in TLE orbital data if the drag perturbation is strong enough.”) the([Section 2.2.1 Page 7 Par 1] “This is smaller than the 5-10% error often assumed for the drag coefficient (Emmert et al., 2006), but close to the 2-3% accuracy found for 30-year-averaged BC estimates (Bowman et al., 2004).” [Examiner’s note: a solar cycle is between 11 and 12 years. One of ordinary skill in the art would have recognized that a 30 year dataset would necessarily cover the timeframe of 2 solar cycles])determine input data ([Page 5 Par 4] “The space weather inputs uk used in the dynamical model are taken from the inputs required by the original density models, see second column in Table 2. In addition to these default inputs, we added the next-hour values for key space weather indices to improve the DMDc prediction” [Page 3 Par 4] “ In our case, the full state space consists of the neutral mass density values on a dense uniform grid in latitude, local solar time, and altitude.”) based at least in part on the analysis ([Page 2 Par 4] “In this work, the reduced-order modeling technique for density estimation is further developed and TLE data are used to estimate the thermospheric density” [Page 6 Par 1] “From these TLE data, the state of an object (position and velocity) at any epoch can be extracted using the SGP4/SDP4 models (Hoots & Roehrich, 1980; Vallado et al., 2006). Hence, the effect of drag can be observed in TLE orbital data if the drag perturbation is strong enough.” [Page 5 Par 4] “The space weather inputs uk used in the dynamical model are taken from the inputs required by the original density models, see second column in Table 2. In addition to these default inputs, we added the next-hour values for key space weather indices to improve the DMDc prediction” [Page 5 Par 1] “Furthermore, with respect to Mehta et al. (2018), we have improved the prediction performance of the linear model by including nonlinear space weather inputs”) ([Page 5 Par 4] “ In addition to these default inputs, we added the next-hour values for key space weather indices to improve the DMDc prediction” [Page 3 Par 4] “ In our case, the full state space consists of the neutral mass density values on a dense uniform grid in latitude, local solar time, and altitude.” ({a latitude, a time of the day} [Examiner’s note: this limitation is written in the alternative form. 
The unmapped elements are therefore not given patentable weight]) extract at least one subset of output data ([Page 12 Par 1] “Figure 5 shows the orbit-averaged estimated density along CHAMP's orbit as well as the density according to CHAMP data and the NRLMSISE-00 and JB2008 density models during August 2002 [Examiner’s note: the specification describes this output data as a density map [Specification Par 20] “…database comprises the input data (e.g., solar and geomagnetic drivers) and the output data (e.g., 3D density maps)…”) from the ([Page 6 Par 1] “From these TLE data, the state of an object (position and velocity) at any epoch can be extracted using the SGP4/SDP4 models (Hoots & Roehrich, 1980; Vallado et al., 2006). Hence, the effect of drag can be observed in TLE orbital data if the drag perturbation is strong enough.”) reduce dimensionality of the at least one subset of output data ([Page 2 Par 2] “The technique combines the predictive abilities of physics-based models with the computational speed of empirical models by developing a Reduced-Order Model (ROM) that represents the original high-dimensional system using a smaller number of parameters. The order reduction is achieved using proper orthogonal decomposition (POD) (Golub & Reinsch, 1970; Rowley et al., 2004), also known as principal component analysis (PCA)” [Page 2 Par 4] “In this work, the reduced-order modeling technique for density estimation is further developed and TLE data are used to estimate the thermospheric density… The density estimation using TLE data is achieved by simultaneously estimating the orbits and BCs of several objects and the reduced-order density state using an unscented Kalman filter” [Page 3 Par 1] “Accurate thermospheric density estimates are computed by assimilating TLE data in reduced-order density models.” [Page 3 Par 6] “First, to make the problem tractable, the state space dimension is reduced using POD.”)([Page 5 Par 2] “ In this work, we have developed three different ROM density models using three different atmospheric models to obtain the snapshot matrices…” [Page 10 Par 1] “The performance of the dynamic ROM models is tested by comparing density forecasts with training data. Using the three different ROM density models the density was predicted for 5 days during quiet space weather conditions and during a geomagnetic storm in 2002. The resulting density forecast errors (the root-mean-square (RMS) percentage error on the three-dimensional spatial grid) and space weather conditions are shown in Figure 2. The predictions using the ROM model based on JB2008 are most accurate. This good performance can be explained by the superior space weather proxies used by the ROM-JB2008 model.”) ([Page 2 Par 2] “The technique combines the predictive abilities of physics-based models with the computational speed of empirical models by developing a Reduced-Order Model (ROM) that represents the original high-dimensional system using a smaller number of parameters. 
The order reduction is achieved using proper orthogonal decomposition (POD) (Golub & Reinsch, 1970; Rowley et al., 2004), also known as principal component analysis (PCA)” [Page 2 Par 4] “In this work, the reduced-order modeling technique for density estimation is further developed and TLE data are used to estimate the thermospheric density… The density estimation using TLE data is achieved by simultaneously estimating the orbits and BCs of several objects and the reduced-order density state using an unscented Kalman filter” [Page 3 Par 1] “Accurate thermospheric density estimates are computed by assimilating TLE data in reduced-order density models.” [Page 3 Par 6] “First, to make the problem tractable, the state space dimension is reduced using POD”) the density prediction model ([Page 5 Par 2] “ In this work, we have developed three different ROM density models using three different atmospheric models to obtain the snapshot matrices…” [Page 10 Par 1] “The performance of the dynamic ROM models is tested by comparing density forecasts with training data. Using the three different ROM density models the density was predicted for 5 days during quiet space weather conditions and during a geomagnetic storm in 2002. The resulting density forecast errors (the root-mean-square (RMS) percentage error on the three-dimensional spatial grid) and space weather conditions are shown in Figure 2. The predictions using the ROM model based on JB2008 are most accurate. This good performance can be explained by the superior space weather proxies used by the ROM-JB2008 model.”) ([Figure 7] Shows maps of mass density from reduced order data output by the model [Page 14 Par 1] “ROM models mimic their base model can also be seen in Figure 7, which shows maps of the modeled and estimated density on 8 August 2002 at 450-km altitude. For example, the simple density distribution in the JB2008 model is also visible in the ROM-JB2008 density, whereas the NRLMSISE-00- and TIEGCM-based ROM models show more complex density distributions. On the other hand, independent of the base model, the ROM-estimated densities have a similar magnitude, which indicates successful calibration.”) with an uncertainty quantification associated with the reduced order mass density map ([Figure 8, Figure 9] Show uncertainty output from the reduced order model [Page 14 Par 2] “Figures 8 and 9 show the uncertainty in the estimated density for different altitudes, latitudes, and local solar time. The uncertainty in the estimated density is smaller for lower altitude and inside the diurnal bulge. This indicates that the density estimation is more accurate when the drag signal is stronger” [Page 17 Par 3] “In addition to estimating the global density, we obtained estimates of the uncertainty in the density, see Figures 8 and 9. This information is very valuable for uncertainty quantification, which is needed for, for example, conjunction assessments.“ [Page 18 Par 4] “Furthermore, the ROM model was shown to be able to provide accurate density forecasts. The density estimates and predictions can therefore be used for both improved orbit determination and orbit prediction, which are fundamental for space situational awareness. Moreover, the technique provides estimates of the uncertainty in the density that can be used for uncertainty quantification for, for example, conjunction assessment”)
[Greyscale figures from Gondelach reproduced here as media_image6.png and media_image7.png]
for accurately predicting trajectories of satellites ([Summary] “The trajectory of satellites that orbit the Earth at low altitudes (below 1,000 km) is affected by drag caused by the Earth's upper atmosphere. To accurately compute the effect of the drag on the orbit, the mass density of the atmosphere needs to be known. This density can however change quickly due to changing solar activity. To model the changes in the upper atmosphere, we developed a computationally inexpensive numerical model. Using the model and observations of satellite orbits, we derive information about the atmospheric density. These estimated densities can be used for improving atmospheric and orbit predictions.”) ([Figure 7] Shows maps of mass density from reduced order data output by the model [Page 14 Par 1] “ROM models mimic their base model can also be seen in Figure 7, which shows maps of the modeled and estimated density on 8 August 2002 at 450-km altitude. For example, the simple density distribution in the JB2008 model is also visible in the ROM-JB2008 density, whereas the NRLMSISE-00- and TIEGCM-based ROM models show more complex density distributions. On the other hand, independent of the base model, the ROM-estimated densities have a similar magnitude, which indicates successful calibration.”)
Gondelach does not explicitly teach at least one computing device; at least one application executable in the at least one computing device, wherein, when executed, the at least one application causes the at least one computing device to at least: process High Accuracy Satellite Drag Model (HASDM) data associated with a HASDM model, the HASDM data including at least one or more solar drivers, one or more geomagnetic drivers, or one or more density maps; analysis of the HASDM data; data from the HASDM data; training a model using machine learning by modeling identification through a correlation of the input data to the output data, the machine learning model being trained; identify an object to move to avoid a collision based on an analysis to determine an indication of a collision.
HASDM makes obvious ([Abstract] “The Air Force Space Battlelab’s High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites.” [Page 2504 Col 1 Par 5] “The High Accuracy Satellite Drag Model (HASDM) initiative uses the Dynamic Calibration Atmosphere (DCA) algorithm to solve for thermospheric neutral density near real-time from the observed drag effects on a set of low-perigee inactive payloads and debris, referred to as calibration satellites. Many different calibration satellites with different orbits may be exploited to recover a dynamically varying global density field. The greater the number of calibration satellites, the better the accuracy. For this initiative, we used up to 75 such satellites.”) the HASDM data including at least one or more solar drivers, one or more geomagnetic drivers, or one or more density maps; ([Page 2498 Col 1 Par 3] “This project also included the development of a prediction model that maps the time series of solar and geomagnetic indices (including E10.7) to the density correction parameters estimated by DCA.” [Figs. 4 and 5] show density maps)
[Greyscale figure from HASDM reproduced here as media_image8.png]
analysis of the HASDM data; data from the HASDM data; ([Abstract] “The Air Force Space Battlelab’s High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites.” [Page 2504 Col 1 Par 5] “The High Accuracy Satellite Drag Model (HASDM) initiative uses the Dynamic Calibration Atmosphere (DCA) algorithm to solve for thermospheric neutral density near real-time from the observed drag effects on a set of low-perigee inactive payloads and debris, referred to as calibration satellites. Many different calibration satellites with different orbits may be exploited to recover a dynamically varying global density field. The greater the number of calibration satellites, the better the accuracy. For this initiative, we used up to 75 such satellites.”)
HASDM is analogous art because it is within the field of satellite motion analysis. It would have been obvious to combine it with Gondelach before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to take advantage of more accurate data and therefore make a more accurate analysis. As noted by Gondelach, HASDM is the most accurate, military standard model for satellite tracking and motion prediction, however its use in the public sector has been limited due to a lack of access to HASDM by many researchers who must resort to inferior two-line element (TLE) data ([Page 2 Par 1] “The current Air Force standard is the High Accuracy Satellite Drag Model (HASDM) (Storz et al., 2005), which is an empirical model that is calibrated using observations of calibration satellites. These satellite observations are used to determine atmospheric model parameters based on their orbit determination solutions. Due to the lack of access to space surveillance observations, publicly available two-line element (TLE) data have been used…”) Therefore, it would have been obvious to one of ordinary skill in the art that having access to HASDM data would allow a satellite tracking and motion prediction system to take advantage of significantly more accurate source data, and therefore enabling the system to perform much more accurate analyses and predictions. By taking atmospheric conditions into account, HASDM enables automatic calibration and adjustment of satellite data to ensure maximum accuracy ([Page 2504 Col Par 4-5] “Atmospheric density models for computing drag forces on satellites are a major source of inaccuracy in trajectory predictions for low-perigee satellites. This deficiency can result in serious errors in the predicted position of satellites, especially those with perigees below 600 km altitude, the layer known as the thermosphere. Many of these objects are of high interest to Space Control missions. Current thermospheric density models do not adequately account for dynamic changes in atmospheric drag for orbit predictions, and no significant operational improvements have been made since 1970. Lack of progress is largely due to poor model inputs in the form of crude heating indices, as well as poor model resolution, both spatial and temporal. The High Accuracy Satellite Drag Model (HASDM) initiative uses the Dynamic Calibration Atmosphere (DCA) algorithm to solve for thermospheric neutral density near real-time from the observed drag effects on a set of low-perigee inactive payloads and debris, referred to as calibration satellites. Many different calibration satellites with different orbits may be exploited to recover a dynamically varying global density field. The greater the number of calibration satellites, the better the accuracy. For this initiative, we used up to 75 such satellites.”) Overall, one of ordinary skill in the art would have recognized that by combining Gondelach with HASDM, they could produce a significantly more accurate system
The combination of Gondelach and HASDM does not explicitly teach at least one computing device; at least one application executable in the at least one computing device, wherein, when executed, the at least one application causes the at least one computing device to at least: perform operations; training a model using machine learning by modeling identification through a correlation of the input data to the output data, the machine learning model being trained; identify an object to move to avoid a collision based on an analysis to determine an indication of a collision.
Sherman makes obvious at least one computing device; at least one application executable in the at least one computing device, wherein, when executed, the at least one application causes the at least one computing device to at least: perform operations; ([Col 14 line 11-15] “ The computing environment 400 includes one or more processing units 410, 415 and memory 420, 425. In FIG. 4, this basic configuration 430 is included within a dashed line. The processing units 410, 415 execute computer-executable instructions”) training a model using machine learning by modeling identification through a correlation of the input data to the output data, the machine learning model being trained; ([Col 10 line 45-53] “The machine learning service can employ a learning algorithm to build an inference model based on the data, which inference model can be applied to future OD requests and scenarios to automatically apply corrections to predicted OD or to notify a user of applicable corrections to be applied manually. The learning algorithm can be any type of learning algorithm such as, but not limited to, supervised learning (e.g., classification algorithm)”)
Sherman is analogous art because it is within the field of satellite management. It would have been obvious to one of ordinary skill in the art to combine it with Gondelach and HASDM before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to better predict the trajectories of satellites, particularly satellites operated by small, independent agencies. For such small, independent operators, having to calculate orbital characteristics can be a burdensome process ([Col 1 line 5-30] “Satellites are increasingly employed by various independently operating entities (e.g., businesses, universities, or governments) for applications such as weather, surface imaging, communications, data transmission, space measurements, geosynchronous positioning, etc. In many examples, the owner or operator of the satellite is primarily concerned with the payload operation (e.g., the function performed by the satellite), which generally requires establishing a communication link between a satellite ground station and the orbiting satellite for transmission of data therebetween. In order to establish a communication link, the satellite ground station performs acquisition of signal (AoS) based on the location of the satellite in its orbit at a specific time. The owner or operator of the satellite thus has to provide to the ground station service an orbit determination (OD) for the satellite. OD is the empirical estimation of a satellite's trajectory determined using statistical methods, physical force and acceleration models, and sensor measurements. OD can then be used to produce an accurate ephemerides (e.g., table or data file of calculated satellite positions) and to produce, from the ephemerides, acquisition products (e.g., two-line element (TLE) set that encodes orbital elements, orbit ephemeris message (OEM) that specifies a position and velocity of an object at multiple epochs within a given time range, an improved inter-range vector (IIRV), etc.)”) To this end, Sherman presents a system that allows a better-equipped third-party service to calculate orbital characteristics automatically, allowing small scale missions and operators to obtain higher quality tracking ([Col 1 line 55- Col 2 line 26] “Instead of users (e.g., independent owners/operators of different satellites) having to calculate orbit determination (OD) for each satellite (or spacecraft) themselves, an orbit determination service automatically calculates OD based on a user request. The user can provide information of any kind that particularly identifies the satellite, for example, an assigned satellite identifier (e.g., an ID number assigned by a ground station service provider, a tracking or ID number assigned by a governmental agency such as North American Aerospace Defense Command (NORAD) or National Aeronautics and Space Administration (NASA), etc.), point angles (e.g., time and orientation for detection by antenna of a ground station of radio frequency (RF) radiation from the satellite), ranging of the satellite (e.g., Doppler shift of frequency of RF radiation from satellite, 1-way or 2-way radiometric ranging, etc.), global positioning system (GPS) telemetry of a beacon of the satellite, imaging of the satellite (e.g., high-resolution timed exposure of the night sky through which the satellite travels), etc. Based on the last-known best state of the identified satellite, the OD service predicts, with a net uncertainty, a state of the satellite using physics models. 
The OD service can then use measurement data to correct the predicted state to yield the requested OD. The measurement data can be provided by the user as part of the initial request, retrieved from third-party operators or databases (e.g., NORAD Space-Track, measurements available in industry, etc.), or obtained by direct measurement via a corresponding satellite ground station service or network of satellite ground stations. For example, the measurement data types may be similar to those used to identify the satellite to the OD service. However, where the user may have otherwise been limited to just their own measurement data for correcting predicted OD, the OD service is able to rely on a larger pool of measurement data for correction, including passive (i.e., not actively obtained for a particular satellite) measurements from third parties as well as measurement data provided by other users to the OD service.”) Overall, one of ordinary skill in the art would have recognized that combining Sherman with Gondelach and HASDM would result in better satellite trajectory determination, particularly for small-scale missions and operators to which high quality tracking and prediction would not have been previously available.
The combination of Gondelach, HASDM, and Sherman does not explicitly teach identify an object to move to avoid a collision based on an analysis to determine an indication of a collision.
Duncan makes obvious identify an object to move to avoid a collision based on an analysis to determine an indication of a collision. ([Section 2 Page 2 Par 1-2] “Satellite close approach predictions are produced daily by Air Force personnel at the Joint Space Operations Center (JSpOC) at Vandenberg AFB. If two objects are predicted to come within some separation threshold, JSpOC personnel will issue a warning report and notify the appropriate satellite operator. The JSpOC provides various data products to the operator so that the collision risk can be established. Quantifying the collision threat typically involves include computing the collision probability, estimating how the probability will evolve, and trending various event parameters to establish consistency among solutions. Once a high-risk conjunction event is identified, various avoidance scenarios are generated.”)
Duncan is analogous art because it is within the field of satellite simulation. It would have been obvious to one of ordinary skill in the art to combine it with Gondelach, HASDM, and Sherman before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination to provide a more accurate simulation and therefore prediction of the path of space debris and satellites, allowing for better collision avoidance. As stated by Duncan ([Abstract] “SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the continued growth of the orbital debris population, high-risk conjunction events are occurring more often. Consequently, satellite operators are performing collision avoidance maneuvers more frequently. Since any type of maneuver expends fuel and reduces the operational lifetime of the spacecraft, using fuel to perform collision avoidance maneuvers often times leads to a difficult trade between sufficiently reducing the risk while satisfying the operational needs of the mission. Thus the need for new, more sophisticated collision risk management and collision avoidance methods must be implemented.”) As succinctly put by Duncan, the increasing quantity of objects in orbit has the dual effect of making accidental collision more likely and increasing the complexity of the simulations required to make predictions of future positions of these objects. To overcome these issues, Duncan presents a system that integrates probability-based simulations to accurately and efficiently model the intricacies of the interactions between objects in orbit, notably using a Monte Carlo simulation ([Introduction] “The collision probability is typically calculated days into the future, so that high risk and potential high risk conjunction events are identified early enough to develop an appropriate course of action.. As the time horizon to the conjunction event is reduced, the collision probability changes… constructing a method for estimating how the collision probability will evolve improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as time to the event is reduced. Collision probability forecasting is a predictive process where the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set possible trajectories for a given space object pair. Each new trajectory produces unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trail. This yields a collision probability distribution given known, predicted uncertainty.”) It would have been obvious to one of ordinary skill in the art that combining the features of Duncan with those of Gondelach, HASDM, and Sherman would produce a system with much greater satellite trajectory prediction accuracy without a significant hit to processing requirements.
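For purposes of illustration of the Monte Carlo collision probability approach described by Duncan, the following is a minimal sketch in Python; it is not taken from Duncan or from the applicant's disclosure, and all positions, covariances, and thresholds shown are hypothetical.

```python
# Illustrative sketch only: estimate a collision probability by sampling the
# positions of two objects from their position covariances and counting the
# fraction of trials whose miss distance falls below a hard-body radius.
import numpy as np

rng = np.random.default_rng(0)

mean_a = np.array([0.0, 0.0, 0.0])        # hypothetical positions at close approach (km)
mean_b = np.array([0.2, 0.1, 0.0])
cov_a = np.diag([0.05, 0.05, 0.02]) ** 2  # hypothetical position covariances (km^2)
cov_b = np.diag([0.04, 0.06, 0.03]) ** 2
hard_body_radius = 0.05                   # hypothetical combined object radius (km)

trials = 100_000
samples_a = rng.multivariate_normal(mean_a, cov_a, size=trials)
samples_b = rng.multivariate_normal(mean_b, cov_b, size=trials)
miss = np.linalg.norm(samples_a - samples_b, axis=1)

pc = np.mean(miss < hard_body_radius)     # Monte Carlo collision probability estimate
print(pc)
```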
Claim 12. The elements of claim 12 are substantially the same as those of claim 1. Therefore, the elements of claim 12 are rejected due to the same reasons as outlined above for claim 1.
Claim 3. Gondelach makes obvious wherein the output data comprises mass density on a three-dimensional grid. ([Page 5 Par 2] “ In this work, we have developed three different ROM density models using three different atmospheric models to obtain the snapshot matrices, namely the empirical NRLMSISE-00 (Picone et al., 2002) and Jacchia-Bowman 2008 (JB2008) models (Bowman et al., 2008) and the physics-based TIE-GCM model (Qian et al., 2014 ). We first defined a spatial grids in local solar time, geographic latitude, and altitude (we use local solar time instead of longitude as azimuthal coordinate, because the diurnal bulge is stationary in solar local time) and computed the density on this grid for every hour over 12 years ( one solar cycle),”)
Claim 14. The elements of claim 14 are substantially the same as those of claim 3. Therefore, the elements of claim 14 are rejected due to the same reasons as outlined above for claim 3.
Claim 5. Gondelach makes obvious ([Page 2 Par 2] “ The order reduction is achieved using proper orthogonal decomposition (POD) (Golub & Reinsch, 1970; Rowley et al., 2004), also known as principal component analysis (PCA)”)
HASDM makes obvious ([Abstract] “The Air Force Space Battlelab’s High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites.” [Page 2504 Col 1 Par 5] “The High Accuracy Satellite Drag Model (HASDM) initiative uses the Dynamic Calibration Atmosphere (DCA) algorithm to solve for thermospheric neutral density near real-time from the observed drag effects on a set of low-perigee inactive payloads and debris, referred to as calibration satellites. Many different calibration satellites with different orbits may be exploited to recover a dynamically varying global density field. The greater the number of calibration satellites, the better the accuracy. For this initiative, we used up to 75 such satellites.”)
Sherman makes obvious when executed, the at least one application causes the at least one computing device to perform operations; ([Col 14 line 11-15] “ The computing environment 400 includes one or more processing units 410, 415 and memory 420, 425. In FIG. 4, this basic configuration 430 is included within a dashed line. The processing units 410, 415 execute computer-executable instructions”)
Claim 16. The elements of claim 16 are substantially the same as those of claim 5. Therefore, the elements of claim 16 are rejected due to the same reasons as outlined above for claim 5.
Claim 9. Gondelach makes obvious ([Figure 7] Shows maps of mass density from reduced order data output by the model [Page 14 Par 1] “ROM models mimic their base model can also be seen in Figure 7, which shows maps of the modeled and estimated density on 8 August 2002 at 450-km altitude. For example, the simple density distribution in the JB2008 model is also visible in the ROM-JB2008 density, whereas the NRLMSISE-00- and TIEGCM-based ROM models show more complex density distributions. On the other hand, independent of the base model, the ROM-estimated densities have a similar magnitude, which indicates successful calibration.” [Summary] “To model the changes in the upper atmosphere, we developed a computationally inexpensive numerical model. Using the model and observations of satellite orbits, we derive information about the atmospheric density.”)
Sherman makes obvious wherein, when executed, the at least one application further causes the at least one computing device to at least ([Col 14 line 11-15] “ The computing environment 400 includes one or more processing units 410, 415 and memory 420, 425. In FIG. 4, this basic configuration 430 is included within a dashed line. The processing units 410, 415 execute computer-executable instructions”)
Duncan makes obvious to at least determine a likelihood of collision associated with a given object ([Section 1 Page 1 Par 2] “ Collision probability forecasting is a predictive process where the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. “) based at least in part on object data. ([Section 4.2 Page 7 Par 3] “The required inputs are state and covariance data for two objects with a predicted close approach n days into the future and a predictive covariance matrix for each object. State data is obtained from the latest orbit determination and then propagated to TCA.”)
Claim 18. The elements of claim 18 are substantially the same as those of claim 9. Therefore, the elements of claim 18 are rejected due to the same reasons as outlined above for claim 9.
Claim 10. Duncan makes obvious wherein the object comprises a satellite, ([Section 2 Page 2 Par 1] “Satellite close approach predictions are produced daily by Air Force personnel at the Joint Space Operations Center (JSpOC) at Vandenberg AFB. If two objects are predicted to come within some
separation threshold, JSpOC personnel will issue a warning report and notify the appropriate satellite operator. The JSpOC provides various data products to the operator so that the collision risk can be established.”) and the object data ([Section 4.2 Page 7 Par 3] “The required inputs are state and covariance data for two objects with a predicted close approach n days into the future and a predictive covariance matrix for each object. State data is obtained from the latest orbit determination and then propagated to TCA.”) comprises a location of the satellite ([Section 4.2 Page 6 Par 1 – Page 7 Par 1] “The possible trajectories of the objects is determined by sampling their positions over the covariance using the latest state uncertainty information”)
Claim 11. Gondelach makes obvious ([Figure 7] “Maps of modeled and estimated density at 450-km latitude on 8 August 2002 at 0:00:00 UTC” [Page 13 Par 3 – Page 14 Par 1] “The way that the ROM models mimic their base model can also be seen in Figure 7 that shows maps of the modelled and estimated density on August 8, 2002 at 450 km altitude. For example, the simple density distribution in the JB2008 model is also visible in the ROM-JB2008 density, whereas the NRLMSISE-00 and TIEGCM based ROM models show more complex density distributions.” [Summary] “To model the changes in the upper atmosphere, we developed a computationally inexpensive numerical model. Using the model and observations of satellite orbits, we derive information about the atmospheric density.”)
Sherman makes obvious wherein, when executed, the at least one application further causes the at least one computing device to at least ([Col 14 line 11-15] “ The computing environment 400 includes one or more processing units 410, 415 and memory 420, 425. In FIG. 4, this basic configuration 430 is included within a dashed line. The processing units 410, 415 execute computer-executable instructions”)
Duncan makes obvious to at least determine a trajectory for the given object based at least in part on the likelihood of collision ([Section 4.2 Page 6 Par 1 – Page 7 Par 1] “The possible trajectories of the objects is determined by sampling their positions over the covariance using the latest state uncertainty information. This technique is similar to the technique used to compute the Pc via a Monte Carlo simulation. The forecasting algorithm then takes this a step further by calculating a Pc value for each Monte Carlo trial.” [Examiner’s note: Pc is the collision probability]) in order to avoid a collision. ([Section 2 Page 2 Par 2] “Once a high-risk conjunction event is identified, various avoidance scenarios are generated.”)
Claim 20. The elements of claim 20 are substantially the same as those of claim 11. Therefore, the elements of claim 20 are rejected due to the same reasons as outlined above for claim 11.
Claim 19. Duncan makes obvious wherein the object comprises a satellite or a debris object([Section 2 Page 2 Par 2] “Satellite close approach predictions are produced daily by Air Force personnel at the Joint Space Operations Center (JSpOC) at Vandenberg AFB. If two objects are predicted to come within some separation threshold, JSpOC personnel will issue a warning report and notify the appropriate satellite operator. The JSpOC provides various data products to the operator so that the collision risk can be established.” ({satellite} [Examiner’s note: this limitation is written in the alternative form. The unmapped elements are therefore not given patentable weight]) and the object data ([Section 4.2 Page 7 Par 3] “The required inputs are state and covariance data for two objects with a predicted close approach n days into the future and a predictive covariance matrix for each object. State data is obtained from the latest orbit determination and then propagated to TCA.”) comprises a location of the satellite or the debris object. ([Section 4.2 Page 6 Par 1 – Page 7 Par 1] “The possible trajectories of the objects is determined by sampling their positions over the covariance using the latest state uncertainty information” ({satellite} [Examiner’s note: this limitation is written in the alternative form. The unmapped elements are therefore not given patentable weight])
Claim 21. Gondelach makes obvious wherein at least a portion of the input data is collected independently from the ([Page 6 Par 2] “This generally improves the TLE accuracy at epoch, but may deteriorate the quality if inaccurate future space weather is used for the orbit prediction. To gain understanding about errors in TLE data, we compared the position according to TLE data against GPS data, see Figure 1. For this, we used the GPS data of a Planet Labs satellite at 494 km altitude.”)
HASDM makes obvious HASDM data ([Abstract] “The Air Force Space Battlelab’s High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites.” [Page 2504 Col 1 Par 5] “The High Accuracy Satellite Drag Model (HASDM) initiative uses the Dynamic Calibration Atmosphere (DCA) algorithm to solve for thermospheric neutral density near real-time from the observed drag effects on a set of low-perigee inactive payloads and debris, referred to as calibration satellites. Many different calibration satellites with different orbits may be exploited to recover a dynamically varying global density field. The greater the number of calibration satellites, the better the accuracy. For this initiative, we used up to 75 such satellites.”)
Claim 22. The elements of claim 22 are substantially the same as those of claim 21. Therefore, the elements of claim 22 are rejected due to the same reasons as outlined above for claim 21.
(2) Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Real-Time Thermospheric Density Estimation via Two-Line Element Data Assimilation (Hereinafter Gondelach) in view of High accuracy satellite drag model (HASDM) (Hereinafter HASDM) in further view of Sherman (US 11447273 B1) as well as Collision Probability Forecasting using a Monte Carlo Simulation (Hereinafter Duncan) and Chen (CN 110519233 A)
Claim 6. Gondelach makes obvious wherein performance of the density prediction model ([Page 5 Par 2] “In this work, we have developed three different ROM density models using three different atmospheric models to obtain the snapshot matrices…”)
The combination of Gondelach, HASDM, Sherman, and Duncan fails to make obvious wherein a process is improved based at least in part on an application of a loss function.
Chen makes obvious wherein a process is improved based at least in part on an application of a loss function ([Page 6 Par 1] “the target of the coder is distributed to make the output close to the true label, so the target of D is to reduce cross-entropy, namely:” [Equation 1 (highlighted on page 14)] [[Page 6 Par 3] “…the target decoder is the encoder considered as possible data reconstructed by the decoder is real data, even if the cross entropy as large as possible for data reconstruction by the decoder as close as possible to the original data, adding MSE constraint in the loss function of the decoder, as follows: [Equation 3 (highlighted on page 14)] [Page 6 Par 4-5] “A-CCR network training, the pre-processed sensor data as a training set of the network, using the BP algorithm to train the network, the algorithm is as follows: algorithm, x is the training set of input, z is a result of compression encoding network output, Loss-D is the target function, Loss-G of the network coding portion is the target function of network decoding portion. A-CCR network after finishing the algorithm combat training, has good compression and reconstruction performance on the sensor network data, at the same time, it has good migration capability”)
Chen is analogous art because it is within the field of artificial intelligence as applied to satellite operations. It would have been obvious to one of ordinary skill in the art to combine it with Gondelach, HASDM, Sherman, and Duncan before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to make coordination between satellites easier by improving data compression and therefore speeding up communication, as well as reducing power consumption. As stated by Chen, satellite communication, especially in the case of sensor data, can be very costly in terms of both time and power consumption due to the volume of data being transferred ([Page 2 Par 2] “Research on the spacecraft, has good reliability, the WSN performance effectiveness and real time becomes one of the key technology of our country is very important in military and aerospace industry development. the main reason in most cases, sensor node plate carrying the radio transceiver is energy consumption. the energy problem is always the bottleneck of limited wireless sensor network widely used. Therefore, the WSN, how to design, save energy consumption and eliminate redundant data compression scheme is critical.”) To overcome this problem, Chen introduces a system that uses an artificial intelligence-based data compression method to reduce the size of the data that is transmitted and received between satellites and ground-based stations, allowing for rapid, real time communication. ([Page 2 Par 3] “data compression may by reducing the data amount of the WSN, effectively saving energy consumption communication and storage… When the original data based on the sparse, CS method can use fewer measurements to recover original data. Because of using sparse binary matrix, CS can greatly reduce the system cost. [Page 2 Par 4] “large deep convolutional network has also been applied to data compression, but most research currently only limited to the field of image compression. layer for wireless sensor network, most people use the RBM or fully connected, to study the compression network and reconfigurable network, there is little human network combines the compressed reconstructed together against learning…. Therefore, to find a high-efficient deep learning model is applied to algorithm for satellite wireless sensor data compression is necessary.”) One of ordinary skill in the art would have recognized that combining the elements of Chen with those of Gondelach, HASDM, Sherman, and Duncan would allow for more rapid data communication, allowing accurate data to be rapidly streamed to base stations, enabling the models produced by the combination of Gondelach, HASDM, Sherman, and Duncan to take advantage of up to date, real time data, allowing for the produced density maps to be even more precise and accurate to the real-life conditions they model.
Claim 8. Sherman makes obvious wherein training the model ([Col 10 line 45-53] “The machine learning service can employ a learning algorithm to build an inference model based on the data, which inference model can be applied to future OD requests and scenarios to automatically apply corrections to predicted OD or to notify a user of applicable corrections to be applied manually. The learning algorithm can be any type of learning algorithm such as, but not limited to, supervised learning (e.g., classification algorithm)”)
Chen makes obvious training comprises applying different combinations of input data ([Page 3 Par 6] “step four: the pre-extracting data time sequence of one class of sensor data after processing in the data time sequence of this type is divided into m segments and random scramble sequence. using the m sections data time sequence of this type according to the sequence after the A-CCR network performing circular iterative training, after reaching the preset iteration times, obtain the initial model of the A-CCR network,”)
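As a purely illustrative sketch (not Chen's code and not the claimed training step), the segmentation-and-shuffling schedule quoted above can be expressed as follows; the names iterative_training_schedule and train_step are hypothetical placeholders introduced for this example only.

```python
import random
import numpy as np

def iterative_training_schedule(sensor_sequence, m, n_iterations, train_step):
    """Divide one class of pre-processed sensor data into m segments, randomly
    scramble the segment order, then cycle through the segments for a preset
    number of iterations, applying a caller-supplied training step to each."""
    segments = np.array_split(np.asarray(sensor_sequence), m)
    random.shuffle(segments)            # random scramble of the segment order
    for i in range(n_iterations):       # circular, iterative training
        train_step(segments[i % m])
```

Each pass therefore presents the network with a different combination and ordering of input data segments.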
Chen is analogous art because it is within the field of artificial intelligence as applied to satellite operations. It would have been obvious to one of ordinary skill in the art to combine Chen with Gondelach, HASDM, Sherman, and Duncan before the effective filing date, for the same reasons and with the same motivation to combine as set forth above.
(3) Claims 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Real-Time Thermospheric Density Estimation via Two-Line Element Data Assimilation (Hereinafter Gondelach) in view of High accuracy satellite drag model (HASDM) (Hereinafter HASDM), in further view of Sherman (US 11447273 B1), and further in view of Collision Probability Forecasting using a Monte Carlo Simulation (Hereinafter Duncan) and Deep convolutional recurrent autoencoders for learning low-dimensional feature dynamics of fluid systems (Hereinafter Gonzalez).
Claim 23. Gondelach teaches to reduce dimensionality of the ([Page 2 Par 2] “The technique combines the predictive abilities of physics-based models with the computational speed of empirical models by developing a Reduced-Order Model (ROM) that represents the original high-dimensional system using a smaller number of parameters. The order reduction is achieved using proper orthogonal decomposition (POD) (Golub & Reinsch, 1970; Rowley et al., 2004), also known as principal component analysis (PCA)” [Page 2 Par 4] “In this work, the reduced-order modeling technique for density estimation is further developed and TLE data are used to estimate the thermospheric density… The density estimation using TLE data is achieved by simultaneously estimating the orbits and BCs of several objects and the reduced-order density state using an unscented Kalman filter” [Page 3 Par 1] “Accurate thermospheric density estimates are computed by assimilating TLE data in reduced-order density models.” [Page 3 Par 6] “First, to make the problem tractable, the state space dimension is reduced using POD.”)
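For context, a minimal sketch of the proper orthogonal decomposition (POD/PCA) order-reduction technique Gondelach refers to is given below. It assumes a snapshot matrix whose columns are flattened density grids; it is illustrative of the general technique only and is not a reproduction of Gondelach's reduced-order model, and the names (pod_reduce, density_snapshots, r) are assumptions for this example.

```python
import numpy as np

def pod_reduce(density_snapshots, r):
    """Proper orthogonal decomposition (equivalently, PCA) of a snapshot
    matrix whose columns are flattened density fields.  Returns the mean
    field, the first r spatial POD modes, and the r-dimensional reduced
    coefficients describing each snapshot."""
    mean = density_snapshots.mean(axis=1, keepdims=True)
    fluctuations = density_snapshots - mean
    U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)  # thin SVD
    modes = U[:, :r]                       # dominant spatial structures
    coefficients = modes.T @ fluctuations  # reduced-order state per snapshot
    return mean, modes, coefficients
```

The high-dimensional density field is then approximated as mean + modes @ coefficients, so only the r reduced-order coefficients need to be estimated or propagated.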
HASDM makes obvious the HASDM data ([Abstract] “The Air Force Space Battlelab’s High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites.” [Page 2504 Col 1 Par 5] “The High Accuracy Satellite Drag Model (HASDM) initiative uses the Dynamic Calibration Atmosphere (DCA) algorithm to solve for thermospheric neutral density near real-time from the observed drag effects on a set of low-perigee inactive payloads and debris, referred to as calibration satellites. Many different calibration satellites with different orbits may be exploited to recover a dynamically varying global density field. The greater the number of calibration satellites, the better the accuracy. For this initiative, we used up to 75 such satellites.”)
Sherman makes obvious wherein, when executed, the at least one application causes the at least one computing device to ([Col 14 line 11-15] “ The computing environment 400 includes one or more processing units 410, 415 and memory 420, 425. In FIG. 4, this basic configuration 430 is included within a dashed line. The processing units 410, 415 execute computer-executable instructions”)
The combination of Gondelach, HASDM, Sherman, and Duncan fails to explicitly teach reduce dimensionality of data based at least in part on an application of a convolution autoencoder (CAE).
Gonzalez makes obvious reduce dimensionality of data based at least in part on an application of a convolution autoencoder (CAE). ([Abstract] “In this work we propose a deep learning-based strategy for nonlinear model reduction that is inspired by projection-based model reduction where the idea is to identify some optimal low-dimensional representation and evolve it in time. Our approach constructs a modular model consisting of a deep convolutional autoencoder and a modified LSTM network. The deep convolutional autoencoder returns a low-dimensional representation in terms of coordinates on some expressive nonlinear data-supporting manifold”)
Gonzalez is analogous art because it is within the field of machine learning-based data analysis and data simplification. It would have been obvious to one of ordinary skill in the art to combine Gonzalez with Gondelach, HASDM, Sherman, and Duncan before the effective filing date. One of ordinary skill in the art would have been motivated to make this combination in order to make the simulation more efficient by reducing the data that needs to be processed, as well as to overcome previously known issues with conventional dimensionality-reduction systems ([Abstract] “Model reduction of high-dimensional dynamical systems alleviates computational burdens faced in various tasks from design optimization to model predictive control. One popular model reduction approach is based on projecting the governing equations onto a subspace spanned by basis functions obtained from the compression of a dataset of solution snapshots. However, this method is intrusive since the projection requires access to the system operators. Further, some systems may require special treatment of nonlinearities to ensure computational efficiency or additional modeling to preserve stability. In this work we propose a deep learning-based strategy for nonlinear model reduction that is inspired by projection-based model reduction where the idea is to identify some optimal low-dimensional representation and evolve it in time. Our approach constructs a modular model consisting of a deep convolutional autoencoder and a modified LSTM network. The deep convolutional autoencoder returns a low-dimensional representation in terms of coordinates on some expressive nonlinear data-supporting manifold”). Overall, one of ordinary skill in the art would have recognized that combining Gonzalez with Gondelach, HASDM, Sherman, and Duncan would result in a system that is significantly more efficient.
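For illustration only, a toy convolutional autoencoder of the general kind Gonzalez describes is sketched below in PyTorch. The 32x32 input size, layer widths, and latent dimension are assumptions chosen for the example; this is neither Gonzalez's network (which pairs a deep CAE with a modified LSTM) nor the claimed CAE.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Toy convolutional autoencoder: the encoder compresses a 1x32x32
    gridded field (e.g., a density map) to a small latent vector, and the
    decoder reconstructs the field from that low-dimensional code."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, latent_dim),                     # low-dimensional code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 8 * 8),
            nn.Unflatten(1, (16, 8, 8)),
            nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, kernel_size=4, stride=2, padding=1),   # 16x16 -> 32x32
        )

    def forward(self, x):
        code = self.encoder(x)              # reduced-dimensionality representation
        return self.decoder(code), code

# Example usage: reconstruction, code = ConvAutoencoder()(torch.randn(1, 1, 32, 32))
```

Dimensionality reduction here consists of passing the gridded data through the encoder and retaining only the latent code.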
Claim 24. The elements of claim 24 are substantially the same as those of claim 23. Therefore, the elements of claim 24 are rejected for the same reasons as outlined above for claim 23.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael P Mirabito whose telephone number is (703)756-1494. The examiner can normally be reached M-F 10:30 am - 6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emerson Puente can be reached at (571) 272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.P.M./ Examiner, Art Unit 2187
/EMERSON C PUENTE/ Supervisory Patent Examiner, Art Unit 2187