Prosecution Insights
Last updated: April 19, 2026
Application No. 18/313,907

NEURAL GRAPH REVEALERS

Non-Final OA: §101, §103
Filed: May 08, 2023
Examiner: CHOI, DAVID E
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 11m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 75% (448 granted / 595 resolved), +20.3% vs TC avg — above average
Interview Lift: +12.4% (moderate) among resolved cases with interview
Avg Prosecution: 2y 11m (typical timeline); 18 applications currently pending
Career History: 613 total applications across all art units

Statute-Specific Performance

§101: 6.6% (-33.4% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 1.9% (-38.1% vs TC avg)
Tech Center average values are estimates • Based on career data from 595 resolved cases
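The headline figures above are simple ratios over the examiner's resolved cases. A quick sketch (illustrative only, using the counts shown in this report) reproduces them and shows what the "+20.3% vs TC avg" comparison implies about the Tech Center baseline:

```python
# Counts come from the report above; everything else is plain arithmetic.
granted, resolved = 448, 595

career_allow_rate = granted / resolved            # examiner's career allowance rate
print(f"career allow rate: {career_allow_rate:.1%}")  # displayed rounded as 75%

# A +20.3 percentage-point lift over the Tech Center average implies
# a TC baseline of roughly 55% (an inference, not a reported figure).
implied_tc_avg = career_allow_rate - 0.203
print(f"implied TC average: {implied_tc_avg:.1%}")
```

Note the "+12.4% Interview Lift" is reported the same way: a percentage-point difference in allowance rate between resolved cases with and without an interview, not a multiplicative factor.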

Office Action

Rejection bases: §101, §103
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This action is responsive to the following communication: Original claims filed 5/8/23. This action is made non-final.

3. Claims 1-20 are pending in the case. Claims 1, 14 and 19 are independent claims.

Claim Rejections - 35 USC § 101

4. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim 1 is a method type claim. Claim 14 is a system type claim. Claim 19 is a method claim. Therefore, claims 1-20 are directed to either a process, machine, manufacture or composition of matter.

With respect to claim 1: 2A Prong 1: obtaining data including a collection of samples and associated sample features (mental process – a user can obtain a collection of samples); 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: identifying a neural network configured to receive input features of a given set of samples and learn a regression for the input features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). applying the neural network to the collection of samples and associated sample features to learn a regression model that learns dependencies between a set of input features and a set of output features while satisfying one or more sparsity constraints (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). generating an undirected graph representative of the collection of samples indicating a set of dependencies between the sample features associated with the collection of samples (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: identifying a neural network configured to receive input features of a given set of samples and learn a regression for the input features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). applying the neural network to the collection of samples and associated sample features to learn a regression model that learns dependencies between a set of input features and a set of output features while satisfying one or more sparsity constraints (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). generating an undirected graph representative of the collection of samples indicating a set of dependencies between the sample features associated with the collection of samples (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

With respect to claim 2: 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the sample features include two or more types of data including two or more of numerical, categorical, discrete, and continuous data. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the sample features include two or more types of data including two or more of numerical, categorical, discrete, and continuous data (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

With respect to claim 3: 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the neural network is a fully connected multi-layer perceptron (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the neural network is a fully connected multi-layer perceptron. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). With respect to claim 4 : 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the neural network is a fully connected neural network including one or more hidden layers having a plurality of paths between a given set of input features and a given set of output features corresponding to the given set of input features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the neural network is a fully connected neural network including one or more hidden layers having a plurality of paths between a given set of input features and a given set of output features corresponding to the given set of input features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). With respect to claim 5 : 2A Prong 2: This judicial exception is not integrated into a practical application. 
Additional elements: wherein learning dependencies between the set of input features and the set of output features includes determining direct connections between pairs of features from the set of input features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein learning dependencies between the set of input features and the set of output features includes determining direct connections between pairs of features from the set of input features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). With respect to claim 6 : 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the one or more sparsity constraints include a sparsity constraint restricting any input feature from being directly connected via a path through the neural network with a same input feature (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
Additional elements: wherein the one or more sparsity constraints include a sparsity constraint restricting any input feature from being directly connected via a path through the neural network with a same input feature (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

With respect to claim 7: 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the one or more sparsity constraints include a sparsity constraint restricting a number of paths that give direct dependencies between the respective features. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the one or more sparsity constraints include a sparsity constraint restricting a number of paths that give direct dependencies between the respective features. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

With respect to claim 8: 2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: modifying the sparsity constraint restricting a number of paths through the neural network representing direct dependencies between respective features; and reapplying the neural network to learn a refined regression that correlates the set of input features to the set of output features while satisfying the modified sparsity constraint (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: modifying the sparsity constraint restricting a number of paths through the neural network representing direct dependencies between respective features; and reapplying the neural network to learn a refined regression that correlates the set of input features to the set of output features while satisfying the modified sparsity constraint (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). With respect to claim 9 : 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein modifying the sparsity constraint includes reducing a number of direct connections in the regression model (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
Additional elements: wherein modifying the sparsity constraint includes reducing a number of direct connections in the regression model (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

With respect to claim 10: 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the nodes in the regression model include rectified linear unit functions or other nonlinear functions configured to jointly discover feature dependency graph constraints while fitting the regression model on the set of input features with both the input and output of the neural network being the obtained data (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the nodes in the regression model include rectified linear unit functions or other nonlinear functions configured to jointly discover feature dependency graph constraints while fitting the regression model on the set of input features with both the input and output of the neural network being the obtained data (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

With respect to claim 11: 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein learning the regression model includes recovering an adjacency matrix indicating direct connections between input features and respective output features while satisfying the one or more sparsity constraints (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein learning the regression model includes recovering an adjacency matrix indicating direct connections between input features and respective output features while satisfying the one or more sparsity constraints (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

With respect to claim 12: 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein learning the regression model includes learning a function for each feature from the set of output features by fitting a regression to the obtained data. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein learning the regression model includes learning a function for each feature from the set of output features by fitting a regression to the obtained data.
(Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). With respect to claim 13 : 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the trained regression model represents the underlying probabilistic distribution and supports probabilistic inference and sampling (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the trained regression model represents the underlying probabilistic distribution and supports probabilistic inference and sampling (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). Step 1: Claim 1 is a method type claim. Claim 14 is a system type claim. Claim 19 is a method claim. Therefore, claims 1-20 are directed to either a process, machine, manufacture or composition of matter. With respect to claim 14: 2A Prong 1: obtaining data including a collection of samples and associated sample features (mental process – a user can obtain a collection of samples); 2A Prong 2: This judicial exception is not integrated into a practical application. 
Additional elements: at least one processor; memory in electronic communication with the at least one processor; and instructions stored in the memory, the instructions being executable by the at least one processor to (mere instructions to apply the exception using a generic computer component); identifying a neural network configured to receive input features of a given set of samples and learn a regression for the input features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). applying the neural network to the collection of samples and associated sample features to learn a regression model that learns dependencies between a set of input features and a set of output features while satisfying one or more sparsity constraints (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). generating an undirected graph representative of the collection of samples indicating a set of dependencies between the sample features associated with the collection of samples (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: at least one processor; memory in electronic communication with the at least one processor; and instructions stored in the memory, the instructions being executable by the at least one processor to (mere instructions to apply the exception using a generic computer component); identifying a neural network configured to receive input features of a given set of samples and learn a regression for the input features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). applying the neural network to the collection of samples and associated sample features to learn a regression model that learns dependencies between a set of input features and a set of output features while satisfying one or more sparsity constraints (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). generating an undirected graph representative of the collection of samples indicating a set of dependencies between the sample features associated with the collection of samples (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

With respect to claim 15: 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the neural network is a fully connected multi-layer perceptron. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the neural network is a fully connected multi-layer perceptron. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). With respect to claim 16 : 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein correlating the set of input features to the set of output features includes determining direct connections between pairs of features from the set of input features, and the one or more sparsity constraints include a sparsity constraint restricting any input feature from being directly connected via a path through the neural network with a same input feature (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein correlating the set of input features to the set of output features includes determining direct connections between pairs of features from the set of input features, and the one or more sparsity constraints include a sparsity constraint restricting any input feature from being directly connected via a path through the neural network with a same input feature (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 
With respect to claim 17: 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: modify a sparsity constraint restricting a number of paths through the neural network representing direct correlations between respective features; and reapply the neural network to learn a refined regression that correlates the set of input features to the set of output features while satisfying the modified sparsity constraint (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: modify a sparsity constraint restricting a number of paths through the neural network representing direct correlations between respective features; and reapply the neural network to learn a refined regression that correlates the set of input features to the set of output features while satisfying the modified sparsity constraint (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).

With respect to claim 18: 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein learning the regression model includes learning a function for each feature from the set of output features by fitting a regression to the obtained data (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein learning the regression model includes learning a function for each feature from the set of output features by fitting a regression to the obtained data (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). Step 1: Claim 1 is a method type claim. Claim 14 is a system type claim. Claim 19 is a method claim. Therefore, claims 1-20 are directed to either a process, machine, manufacture or composition of matter. With respect to claim 19: 2A Prong 1: obtaining data including a collection of samples and associated sample features (mental process – a user can obtain a collection of samples); 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: identifying a neural network configured to receive input features of a given set of samples and learn a regression for the input features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). applying the neural network to the collection of samples and associated sample features to learn a regression model that learns dependencies between a set of input features and a set of output features while satisfying one or more sparsity constraints (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 
generating an undirected graph representative of the collection of samples indicating a set of dependencies between the sample features associated with the collection of samples ( Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). causing a presentation of the undirected graph to be displayed via a graphical user interface of a computing device ( Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: identifying a neural network configured to receive input features of a given set of samples and learn a regression for the input features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). applying the neural network to the collection of samples and associated sample features to learn a regression model that learns dependencies between a set of input features and a set of output features while satisfying one or more sparsity constraints (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 
generating an undirected graph representative of the collection of samples indicating a set of dependencies between the sample features associated with the collection of samples Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). causing a presentation of the undirected graph to be displayed via a graphical user interface of a computing device ( Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). With respect to claim 20 : 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein the neural network is a fully connected multi-layer perceptron, and wherein the one or more sparsity constraints include a sparsity constraint restricting a number of direct correlations that gives direct dependencies between the respective features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
Additional elements: wherein the neural network is a fully connected multi-layer perceptron, and wherein the one or more sparsity constraints include a sparsity constraint restricting a number of direct correlations that gives direct dependencies between the respective features (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)). Claim Objections 5. Claims 3, 4, 8, 9, 10, 11 and 13, 15, 17, 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim Rejections - 35 USC § 103 6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 7. Claim s 1, 2, 5-7, 12, 14, 16, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Egedal ( US 20210363969 ) and further in view of Ghosh ( US 20200387148 ) in further view of Chauhan ( US 20220156759 ). 
Regarding claim 1, Egedal discloses a method for recovering undirected graphs, comprising:

obtaining data including a collection of samples and associated sample features (providing a data driven model trained with a machine learning method and stored in a database, the data driven model providing a correlation between time series data obtained of the pair of turbines in parallel, the time series data being aligned in time to the same wind front, and ratio of the current power production of the upstream and the downstream turbine related to the aligned time series data, see paragraph 0015);

identifying a neural network configured to receive input features of a given set of samples and learn a regression for the input features (a regression model, in particular a neural network or a Gaussian process, will be applied as machine learning method to obtain the data driven model. The regression model is used to predict the ratio of the current power production of the downstream and upstream turbines. The regression model implicitly models the aerodynamic behavior of the turbines and site-specific irregularities such as unusual terrain, see paragraph 0026);

applying the neural network to the collection of samples and associated sample features to learn a regression model that learns dependencies between a set of input features and a set of output features (dependencies between the upstream and the downstream turbines are modeled without any physical assumptions or numeric simulations, therefore eliminating most of computational costs. Instead, these dependencies are learnt using a machine learning method, paragraph 0017).

Egedal does not disclose while satisfying one or more sparsity constraints. However, Ghosh discloses wherein in some embodiments, the server 24 may use one or more of the methods described above to identify candidate testing parameters to remove.
In addition or alternatively, the server 24 may use a principal component analysis (PCA), a regression model with a sparsity constraint (for example, least absolute shrinkage and selection operator (LASSO)), or the like to identify candidate testing parameters to remove (paragraph 0070).

The combination of Egedal and Ghosh would have resulted in the data driven modeling of Egedal being combined with Ghosh’s teachings of utilizing sparsity constraints. One would have been motivated to combine the teachings because a user of Egedal would have found the use of sparsity constraints to assist with regression modeling. As such, the combination of references would have been obvious to one of ordinary skill in the art.

Egedal does not disclose generating an undirected graph representative of the collection of samples indicating a set of dependencies between the sample features associated with the collection of samples. However, Chauhan discloses wherein for each pair of variables in the mixed dataset, a conditional independence test may produce an output that is indicative of dependency. Using these metrics, the intelligence platform can (i) produce an undirected graph indicating dependency among the variables and (ii) identify the neighbors of each variable (paragraph 0028).

The combination of Egedal and Chauhan would have resulted in the data driven modeling of Egedal being combined with Chauhan’s teachings of presenting the data on a graph. One would have been motivated to combine the teachings because a user of Egedal would have found the use of visual graphing to be an efficient way to see the result of the regression modeling. As such, the combination of references would have been obvious to one of ordinary skill in the art.
Regarding claim 14, the subject matter of the claim is substantially similar to claim 1, and as such the same rationale of rejection applies.

Regarding claim 2, Egedal discloses wherein the sample features include two or more types of data including two or more of numerical, categorical, discrete, and continuous data (time series data comprises storing at least one of the following: information about an ambient condition, in particular wind direction, anemometer wind speed, air density, ambient temperature, and so on; the turbines' internal state, in particular the produced power, the current pitch angle, nacelle orientation, nacelle acceleration, rotor orientation, generator speed, and so on; and information about the wind field, in particular current wind speed, measures of turbulence and so on. Usually the wind field information cannot be obtained directly from sensor measurements. Instead, it has to be estimated based on the other data sources mentioned above, for example by a FFT frequency analysis of the nacelle acceleration, see at least paragraph 0022).

Regarding claim 5, Egedal discloses wherein learning dependencies between the set of input features and the set of output features includes determining direct connections between pairs of features from the set of input features (dependencies between the upstream and the downstream turbines are modeled without any physical assumptions or numeric simulations, therefore eliminating most of computational costs.
Instead, these dependencies are learnt using a machine learning method, paragraph 0017).

Regarding claim 6, Egedal does not disclose wherein the one or more sparsity constraints include a sparsity constraint restricting any input feature from being directly connected via a path through the neural network with a same input feature. However, Ghosh discloses wherein in some embodiments, the server 24 may use one or more of the methods described above to identify candidate testing parameters to remove. In addition or alternatively, the server 24 may use a principal component analysis (PCA), a regression model with a sparsity constraint (for example, least absolute shrinkage and selection operator (LASSO)), or the like to identify candidate testing parameters to remove (paragraph 0070).

The combination of Egedal and Ghosh would have resulted in the data driven modeling of Egedal being combined with Ghosh’s teachings of utilizing sparsity constraints. One would have been motivated to combine the teachings because a user of Egedal would have found the use of sparsity constraints to assist with regression modeling. As such, the combination of references would have been obvious to one of ordinary skill in the art.

Regarding claim 7, Egedal does not disclose wherein the one or more sparsity constraints include a sparsity constraint restricting a number of paths that give direct dependencies between the respective features. However, Ghosh discloses wherein in some embodiments, the server 24 may use one or more of the methods described above to identify candidate testing parameters to remove. In addition or alternatively, the server 24 may use a principal component analysis (PCA), a regression model with a sparsity constraint (for example, least absolute shrinkage and selection operator (LASSO)), or the like to identify candidate testing parameters to remove (paragraph 0070).
The combination of Egedal and Ghosh would have resulted in the data driven modeling of Egedal being combined with Ghosh’s teachings of utilizing sparsity constraints. One would have been motivated to combine the teachings because a user of Egedal would have found the use of sparsity constraints to assist with regression modeling. As such, the combination of references would have been obvious to one of ordinary skill in the art.

Regarding claim 12, Egedal discloses wherein learning the regression model includes learning a function for each feature from the set of output features by fitting a regression to the obtained data (a regression model, in particular a neural network or a Gaussian process, will be applied as machine learning method to obtain the data driven model. The regression model is used to predict the ratio of the current power production of the downstream and upstream turbines. The regression model implicitly models the aerodynamic behavior of the turbines and site-specific irregularities such as unusual terrain, see paragraph 0026).

Regarding claim 18, the subject matter of the claim is substantially similar to claim 12, and as such the same rationale of rejection applies.

Regarding claim 16, Egedal discloses wherein correlating the set of input features to the set of output features includes determining direct connections between pairs of features from the set of input features, and the one or more sparsity constraints include a sparsity constraint restricting any input feature from being directly connected via a path through the neural network with a same input feature (dependencies between the upstream and the downstream turbines are modeled without any physical assumptions or numeric simulations, therefore eliminating most of computational costs. Instead, these dependencies are learnt using a machine learning method, paragraph 0017).
Regarding claim 19, Egedal discloses a method for recovering undirected graphs, comprising:

obtaining data including a collection of samples and associated sample features (providing a data driven model trained with a machine learning method and stored in a database, the data driven model providing a correlation between time series data obtained of the pair of turbines in parallel, the time series data being aligned in time to the same wind front, and ratio of the current power production of the upstream and the downstream turbine related to the aligned time series data, see paragraph 0015);

identifying a neural network configured to receive input features of a given set of samples and learn a regression for the input features (a regression model, in particular a neural network or a Gaussian process, will be applied as machine learning method to obtain the data driven model. The regression model is used to predict the ratio of the current power production of the downstream and upstream turbines. The regression model implicitly models the aerodynamic behavior of the turbines and site-specific irregularities such as unusual terrain, see paragraph 0026);

applying the neural network to the collection of samples and associated sample features to learn a regression model that correlates a set of input features to a set of output features (dependencies between the upstream and the downstream turbines are modeled without any physical assumptions or numeric simulations, therefore eliminating most of computational costs. Instead, these dependencies are learnt using a machine learning method, paragraph 0017).

Egedal does not disclose while satisfying one or more sparsity constraints. However, Ghosh discloses wherein in some embodiments, the server 24 may use one or more of the methods described above to identify candidate testing parameters to remove.
In addition or alternatively, the server 24 may use a principal component analysis (PCA), a regression model with a sparsity constraint (for example, least absolute shrinkage and selection operator (LASSO)), or the like to identify candidate testing parameters to remove (paragraph 0070).

The combination of Egedal and Ghosh would have resulted in the data driven modeling of Egedal being combined with Ghosh’s teachings of utilizing sparsity constraints. One would have been motivated to combine the teachings because a user of Egedal would have found the use of sparsity constraints to assist with regression modeling. As such, the combination of references would have been obvious to one of ordinary skill in the art.

Egedal does not disclose generating an undirected graph representative of the collection of samples indicating a set of correlations between the sample features associated with the collection of samples; and causing a presentation of the undirected graph to be displayed via a graphical user interface of a computing device. However, Chauhan discloses wherein for each pair of variables in the mixed dataset, a conditional independence test may produce an output that is indicative of dependency. Using these metrics, the intelligence platform can (i) produce an undirected graph indicating dependency among the variables and (ii) identify the neighbors of each variable (paragraph 0028).

The combination of Egedal and Chauhan would have resulted in the data driven modeling of Egedal being combined with Chauhan’s teachings of presenting the data on a graph. One would have been motivated to combine the teachings because a user of Egedal would have found the use of visual graphing to be an efficient way to see the result of the regression modeling. As such, the combination of references would have been obvious to one of ordinary skill in the art.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID E CHOI, whose telephone number is (571) 270-3780. The examiner can normally be reached M-F: 7-2, 7-10 (PST).

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bechtold, Michelle T., can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID E CHOI/
Primary Examiner, Art Unit 2148
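For context on the techniques the §103 combination turns on — a regression model with a sparsity constraint such as LASSO (Ghosh, ¶0070) producing an undirected graph of dependencies among features (Chauhan, ¶0028) — the two ideas can be illustrated together by neighborhood selection: lasso-regress each feature on all the others and connect any pair with a nonzero coefficient. The sketch below is an editor's illustration on synthetic data only; it is not the applicant's claimed neural method or any cited reference's implementation, and the ISTA solver, function names, and dataset are all assumptions made for the example.

```python
import numpy as np

def lasso_ista(X, y, lam=0.2, lr=0.1, iters=1000):
    """Minimize (1/2n)||Xw - y||^2 + lam*||w||_1 with plain ISTA
    (illustrative solver, not tuned for production use)."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        # soft-thresholding: the proximal step that yields exact zeros
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

def recover_graph(data, lam=0.2, tol=1e-6):
    """Neighborhood selection: lasso-regress each standardized feature
    on the rest; connect i and j if either regression gives the other
    a nonzero weight. Returns a boolean adjacency matrix."""
    p = data.shape[1]
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        rest = [k for k in range(p) if k != j]
        w = lasso_ista(data[:, rest], data[:, j], lam=lam)
        for k, coef in zip(rest, w):
            if abs(coef) > tol:
                adj[j, k] = adj[k, j] = True
    return adj

# Synthetic demo: feature 2 depends on feature 0; feature 1 is independent.
rng = np.random.default_rng(0)
n = 300
x0 = rng.normal(size=n)
x1 = rng.normal(size=n)               # independent of x0
x2 = x0 + 0.1 * rng.normal(size=n)    # strongly dependent on x0
data = np.column_stack([x0, x1, x2])
data = (data - data.mean(0)) / data.std(0)   # standardize columns
adj = recover_graph(data)
# expect an edge between features 0 and 2; 0 and 1 should stay unconnected
```

The sparsity penalty is what prunes the graph: without it, every sample correlation, however small, would produce an edge, which is the role the Office Action attributes to the claimed sparsity constraints.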

Prosecution Timeline

May 08, 2023
Application Filed
Mar 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602396
TRANSFORMING MODEL DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12585995
Capturing Data Properties to Recommend Machine Learning Models for Datasets
2y 5m to grant Granted Mar 24, 2026
Patent 12585957
SYSTEM AND METHOD FOR EFFICIENT ESTIMATION OF CUMULATIVE DISTRIBUTION FUNCTION
2y 5m to grant Granted Mar 24, 2026
Patent 12580878
METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR PRESENTING SESSION MESSAGE
2y 5m to grant Granted Mar 17, 2026
Patent 12572836
INTELLIGENT PROVISIONING OF QUANTUM PROGRAMS TO QUANTUM HARDWARE
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
88%
With Interview (+12.4%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
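The headline projection figures follow directly from the examiner statistics quoted above (448 grants out of 595 resolved cases, plus the reported +12.4-point interview lift). A quick sanity check, assuming the dashboard simply adds the lift to the career allow rate:

```python
# Reproducing the dashboard's projection figures from the stats above.
granted, resolved = 448, 595     # examiner's career totals
interview_lift = 0.124           # reported lift with interview

allow_rate = granted / resolved  # career allow rate ~= 0.753
print(f"Grant probability: {allow_rate:.0%}")                   # 75%
print(f"With interview:    {allow_rate + interview_lift:.0%}")  # 88%
```

This matches the 75% and 88% shown, consistent with the footnote that grant probability is derived from the career allow rate.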
