Prosecution Insights
Last updated: April 19, 2026
Application No. 17/507,649

Identifying Test Dependencies Using Binary Neural Networks

Non-Final OA (§101, §103)
Filed
Oct 21, 2021
Examiner
KAPOOR, DEVAN
Art Unit
2126
Tech Center
2100 — Computer Architecture & Software
Assignee
EMC IP Holding Company LLC
OA Round
3 (Non-Final)
11%
Grant Probability
At Risk
3-4
OA Rounds
3y 3m
To Grant
28%
With Interview

Examiner Intelligence

Grants only 11% of cases
11%
Career Allow Rate
1 granted / 9 resolved
-43.9% vs TC avg
Strong +17% interview lift
+16.7%
Interview Lift
resolved cases with interview
Typical timeline
3y 3m
Avg Prosecution
33 currently pending
Career history
42
Total Applications
across all art units

Statute-Specific Performance

§101
38.1%
-1.9% vs TC avg
§103
43.9%
+3.9% vs TC avg
§102
10.8%
-29.2% vs TC avg
§112
5.8%
-34.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 9 resolved cases

Office Action

§101 §103
DETAILED ACTION

This action is responsive to the submission filed on 10/13/2025. Claims 1-18, 20, and 22 are pending and have been examined. This action is Non-final.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/13/2025 has been entered.

Response to Arguments

Argument 1: The applicant argues that claims 1-20 are patent-eligible under 35 U.S.C. 101 and are not directed to an abstract idea. First, the applicant contends that the claims recite limitations that cannot practically be performed in the human mind, emphasizing that determining software test dependencies using a neural network across multiple combinations of test results would be unduly burdensome for a human, even if theoretically possible with pen and paper. Second, the applicant asserts that the claims are directed to a technical improvement in computer functionality and software testing, rather than to a mere result, because they recite a specific implementation involving a neural network configured to determine test dependencies based on correlation information.
In particular, the applicant highlights the amended limitations requiring a trinary neural network with weights drawn from a discrete set of three values indicating positive correlation, negative correlation, or no correlation, which the applicant argues improves computational efficiency and aligns with the binary nature of software test outcomes. The applicant further maintains that the claims integrate any alleged abstract idea into a practical application by using the neural network to automatically identify dependent tests and reduce unnecessary test execution. Finally, the applicant relies on recent USPTO guidance and memorandum language stating that, in “close call” situations, a 101 rejection should only be maintained when ineligibility is established by a preponderance of the evidence, and argues that such a showing has not been made here. Accordingly, the applicant requests withdrawal of the 101 rejection for the independent claims and their dependents.

Examiner Response to Argument 1: The examiner has considered the arguments set forth above but is not persuaded that the claims are patent eligible under 35 U.S.C. 101. As set forth in the current eligibility analysis, the claims recite indicating correlations between outputs of a neural network and determining whether tests pass or fail based on those correlations, identifying dependencies between tests based on observed results, applying sets of inputs, comparing values, and storing indications, which collectively amount to evaluation, observation, judgment, and mathematical relationships and therefore constitute mental processes and mathematical concepts under Step 2A Prong 1. The applicant’s argument that the claimed subject matter cannot practically be performed in the human mind is not persuasive, as the test is whether the claims are directed to such concepts, not whether performance would be efficient or scalable.
The applicant’s assertion that the claims recite a technical improvement in computer functionality is also not persuasive because the claims do not improve the operation of a computer or neural network itself, but instead use a neural network as a tool to perform abstract data analysis on software test results. Further, the amended limitations of claims 1, 8, and 15 directed to discrete correlation values merely specify a particular mathematical model for representing correlations and do not integrate the judicial exception into a practical application or provide significantly more, as such limitations amount to mathematical constraints or field-of-use limitations under Step 2A Prong 2 and Step 2B. The claims also recite data gathering, transmitting, and storing of results, which are insignificant extra-solution activities and well-understood, routine, and conventional computer functions that do not amount to significantly more than the abstract idea. Finally, the applicant’s reliance on guidance regarding close-call situations is not persuasive because, for the reasons discussed in the eligibility analysis, the record establishes by a preponderance of the evidence that the claims are directed to an abstract idea and do not recite additional elements sufficient to transform the nature of the claim into patent-eligible subject matter. Accordingly, the rejection under 35 U.S.C. 101 is maintained for claims 1-18, 20, and 22.

Argument 2: The applicant argues that the 103 rejections should be withdrawn because the cited combination of Poornaki and Vanmali allegedly fails to teach or suggest the amended limitations, particularly those directed to a trinary neural network and discrete three-valued weights indicating positive correlation, negative correlation, or no correlation. The applicant contends that Poornaki does not disclose or relate to software test dependency analysis using a neural network in the manner claimed, and therefore cannot supply the missing features.
With respect to Vanmali, the applicant asserts that although Vanmali discloses neural networks used in software testing, it relies on continuous-valued synaptic weights, not weights restricted to a discrete set of three outputs, and thus does not teach the claimed trinary weight scheme. The applicant further argues that the examiner’s interpretation equating Vanmali’s weight ranges (including values around zero) with a discrete “zero” output improperly conflates continuous weight values with explicitly discretized outputs. According to the applicant, this distinction is material because the amended claims require a specific architectural constraint that purportedly improves efficiency and aligns with binary test outcomes. The applicant also argues that the Office has not provided an adequate motivation to modify the cited references to arrive at the claimed trinary neural network, and that the secondary references applied to the dependent claims do not cure the alleged deficiencies of the primary combination. Accordingly, the applicant maintains that the prior art does not render amended claim 1 obvious, and that claims 8 and 15, being similarly amended and analogous, should likewise be withdrawn from rejection.

Examiner Response to Argument 2: The examiner has considered the applicant’s arguments but is not persuaded that the rejection under 35 U.S.C. 103 should be withdrawn. As set forth in the rejection for claim 1, Poornaki teaches a system applying inputs to a predictive model to determine failure conditions ([Poornaki, 0008-0009]), and Vanmali teaches generating and using a neural network for software testing to determine whether tests pass or fail based on input test results ([Vanmali, pages 46, 48, 53, secs. 1-2]).
Vanmali further discloses synaptic weights that take on positive values, negative values, and values within a range that includes zero ([Vanmali, pages 48 and 53]), which the examiner interpreted in the mapping as representing positive correlation, negative correlation, and no correlation, as recited by the amended claims. In addition, Vanmali teaches comparing neural network outputs with tested application outputs and categorizing results using a comparison tool and Table I ([Vanmali, page 52, sec. 3 and Table I]), supporting the examiner’s interpretation that Vanmali teaches discrete outcome states relevant to dependency analysis. The rejection also provides a motivation to combine, namely that a person of ordinary skill in the art would have been motivated to modify the system of Poornaki to include Vanmali’s neural network based software testing approach in order to efficiently model software behavior and determine test failures using a trained neural network, as suggested by Vanmali ([Vanmali, page 46]). Because claims 8 and 15 recite the same amended limitations as claim 1 in different statutory forms, they are unpatentable for the same reasons. Accordingly, the rejection under 35 U.S.C. 103 is maintained. See the mapping below for further details.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18, 20, and 22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, Step 1: The claim is directed to a system, which is considered to be in the category of machine. The claim satisfies step 1.
Step 2A Prong 1: (a) “one output of the discrete set of three outputs is a positive number, which indicates a positive correlation between a respective output of a respective node of the neural network and the output of the neural network, another output of the discrete set of three outputs is a negative number, which indicates a negative correlation between the respective output and the output, and a third output of the discrete set of three outputs is zero, which indicates no correlation between the respective output and the output; … applying sets of inputs to the neural network, respective inputs of the sets of inputs identifying whether the respective tests pass or fail; and in response to determining that a first set of inputs of the sets of inputs to the neural network results in a failure output” -- This limitation is directed to indicating correlations between outputs of the neural network and determining dependency based on the observed results of test outcomes. The limitation is directed to a process that can be performed in the human mind using evaluation, observation, and judgment, including with pen and paper, to determine correlations and classifications. Therefore, the limitation is directed to a mental process.

Step 2A Prong 2 and Step 2B: (a) “A system, comprising: at least one processor; and at least one memory that stores executable instructions that, when executed by the at least one processor, facilitate performance of operations, comprising: applying sets of inputs to the neural network” -- The limitation recites instructions executed by a generic processor to apply inputs, which amounts to no more than instructions to apply the abstract idea on a computer. The limitation does not integrate the judicial exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(f)).
(b) “generating a neural network, wherein an output of the neural network indicates whether a first test of a computer code will pass given an input of respective results of whether respective tests, of a group of tests of the computer code, pass” -- This limitation recites the gathering and manipulation of input data to produce output data. Such activity is considered insignificant extra-solution activity and does not integrate the abstract idea into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, generating and transmitting data for further processing is a well-understood, routine, and conventional activity and does not provide significantly more (see MPEP 2106.05(d)(II)).

(c) “wherein the neural network comprises a trinary neural network, and wherein respective weights of the neural network indicate one of a discrete set of three outputs” -- This limitation merely limits the abstract idea to a particular type of neural network and representation of correlation values. Such limitation amounts to a field-of-use or mathematical model restriction and does not integrate the abstract idea into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(h)).

(d) “storing an indication that the first test is dependent on a subset of the respective tests indicated as failing by the first set of inputs” -- This limitation recites storing and organizing test data. The act of gathering and storing data is considered an insignificant extra-solution activity that does not integrate the abstract idea into a practical application (see MPEP 2106.05(g)). Furthermore, storing data constitutes electronic recordkeeping, which is a well-understood, routine, and conventional activity and does not provide significantly more (see MPEP 2106.05(d)(II)). Thus, claim 1 is non-patent eligible.
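For orientation, the claim 1 concept under discussion (a trinary network whose weights are drawn from a discrete set of three values indicating positive, negative, or no correlation, used to flag test dependencies) can be sketched as follows. This is an editorial illustration only, not the application's actual implementation; the single-layer simplification, the pass/fail encoding, and the function names are assumptions.

```python
# Minimal sketch (editorial assumption, not the claimed system's code):
# a single-layer "trinary" network whose weights are restricted to
# {-1, 0, +1}, read as negative correlation, no correlation, and
# positive correlation between each other test's result and the
# predicted outcome of the target test.

def trinary_predict(weights, inputs, threshold=0):
    """Predict whether the target test passes (True) or fails (False).

    weights: list of -1/0/+1 values, one per other test in the group.
    inputs:  list of booleans, True = that test passed, False = failed.
    Pass is encoded as +1 and fail as -1, then a weighted sum is taken.
    """
    score = sum(w * (1 if passed else -1) for w, passed in zip(weights, inputs))
    return score > threshold

def find_dependencies(weights, num_tests):
    """Return indices of tests the target test appears to depend on:
    those whose trinary weight is nonzero."""
    return [i for i in range(num_tests) if weights[i] != 0]

# Target test depends positively on test 0, is independent of test 1,
# and depends negatively on test 2.
weights = [1, 0, -1]

print(trinary_predict(weights, [True, True, False]))   # True: conditions favorable
print(trinary_predict(weights, [False, True, False]))  # False: test 0 failed
print(find_dependencies(weights, 3))                   # [0, 2]
```

Under this toy model, the dependency information claimed to be "stored" in claim 1 is simply the set of nonzero-weight positions.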
Claims 8 and 15 are analogous to claim 1 (aside from system claim vs. method claim), and thus they face the same rejection set forth above.

Regarding claim 2, Step 1: The claim is directed to a system, which is considered to be in the category of machine. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: (a) “The system of claim 1, wherein the output is a first output, and wherein the neural network comprises a number of outputs that corresponds to a number of tests of the group of tests.” - The limitation recites mere further limitations regarding the output, and further recites what the neural network first introduced in claim 1 will include. The limitation cannot be integrated into a practical application, nor can it provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 2 is non-patent eligible.

Regarding claim 3, Step 1: The claim is directed to a system, which is considered to be in the category of machine. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: (a) “The system of claim 2, wherein the sets of inputs mask a value of a first input to the neural network, the first input identifying whether the first test passes or fails.” - The limitation recites that the sets of inputs first introduced in earlier claims will further include masking a value of the first input of the neural network that can identify if the first test passes or fails. The limitation merely limits the sets of inputs to not include an indication that the test passes or fails by masking a value, and it cannot be integrated into a practical application, nor can it provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 3 is non-patent eligible.

Regarding claim 4, Step 1: The claim is directed to a system, which is considered to be in the category of machine.
The claim satisfies step 1. Step 2A Prong 1: (a) “The system of claim 3, wherein the first input is configured to indicate one of the first test passing, the first test failing, and no indication of whether the first test passes or fails, and wherein the value of the first test is masked to provide no indication of whether the first test passes or fails.” - The limitation is directed to indicating if a test has passed or failed and if a value will be set to indicate if a test has passed or failed. The act of indicating if a testing of data/code has passed or failed, and setting a value to rescind the indication, is a process that can be performed in the human mind, with the aid of pen and paper, and thus the limitation is directed to a mental process. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Thus, claim 4 is non-patent eligible.

Regarding claim 5, Step 1: The claim is directed to a system, which is considered to be in the category of machine. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: (a) “The system of claim 1, wherein the neural network is a first neural network, wherein the sets of inputs are first sets of inputs, and wherein the operations further comprise: applying second sets of inputs to a second neural network to determine whether a second test is dependent on a respective second set of the second sets of inputs.” - The limitation recites mere instructions to apply sets of inputs to the neural network for the purpose of determining if a second test is dependent on a second set of inputs. The limitation cannot amount to a practical application, nor can it provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 5 is non-patent eligible.

Regarding claim 6, Step 1: The claim is directed to a system, which is considered to be in the category of machine. The claim satisfies step 1.
There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: (a) “The system of claim 5, wherein the indication is a first indication, wherein the first sets of inputs comprises a second indication of whether the second test passes, and wherein the second sets of inputs comprises a third indication of whether the first test passes.” - The limitation is directed to merely further limiting the indications and tests that were introduced in earlier claims to more than one indication or test run, and the same for the second set of inputs. This cannot amount to a practical application, nor can it provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 6 is non-patent eligible.

Regarding claim 7, Step 1: The claim is directed to a system, which is considered to be in the category of machine. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: (a) “The system of claim 1, wherein the indication is a first indication, wherein the subset is a first subset, and wherein the operations further comprise: in response to determining that a second set of inputs of the sets of inputs to the neural network results in the failure output, storing a second indication that the first test is dependent on a second subset of the respective tests indicated as failing by the second set of inputs.” - The limitation recites the same limitation that was first recited in claim 1, further including a “first” indication, a “first” subset, determining a “second” set of inputs, and storing another indication that the first test is dependent on another subset of other tests that were indicated as failing. The limitation does not amount to any more than further limiting elements of claim 1 to a particular field of use/environment, and thus it cannot be integrated into a practical application, nor can it provide significantly more than the judicial exception (see MPEP 2106.05(h)).
Thus, claim 7 is non-patent eligible.

Regarding claim 9, Step 1: The claim is directed to a method, which is considered to be in the category of process. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: (a) “The method of claim 8, wherein the respective weights of the neural network indicate one of the positive correlation, the negative correlation, or no correlation between the respective outputs of the respective nodes of the neural network and the output of the neural network.” - The limitation recites similar limitations that were already introduced and recited in claim 1. The limitation is recited at a high level of generality, merely limiting the respective weights of the neural network to further include a type of correlation between outputs; thus it cannot be integrated into a practical application, nor can it provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 9 is non-patent eligible.

Regarding claim 10, Step 1: The claim is directed to a method, which is considered to be in the category of process. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: (a) “The method of claim 8, further comprising: after determining that the first set of inputs to the neural network results in the failure, and in response to determining that a second set of inputs of the sets of inputs comprises a superset of the first set of inputs, determining, by the system, to omit applying the second set of inputs to the neural network.” - The limitation recites that the second set of inputs will further comprise/include a “superset” of the first set of inputs, and instead lets the system determine not to apply the second set to the neural network.
The limitation does not amount to more than further limiting the input sets to a field of use/environment, and thus it cannot be integrated into a practical application, nor can it provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 10 is non-patent eligible.

Regarding claim 11, Step 1: The claim is directed to a method, which is considered to be in the category of process. The claim satisfies step 1. Step 2A Prong 1: (a) “The method of claim 8, wherein the respective inputs of the sets of inputs have an upper limit of inputs of the respective inputs that indicate failed tests, and wherein the upper limit is less than a number of tests in the group of tests.” - The limitation is directed to the respective inputs having a predetermined upper limit that is compared to a number of tests from the group of tests. Setting an upper limit and comparing it to gathered data is a process that can be performed in the human mind using evaluation, observation, and judgment, and thus it is directed to a mental process. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Thus, claim 11 is non-patent eligible.

Regarding claim 12, Step 1: The claim is directed to a method, which is considered to be in the category of process. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: “The method of claim 11, wherein a number of inputs of the respective inputs that indicate failed tests vary between 1 and the upper limit.” - The limitation recites that the number of inputs of the respective inputs first introduced in earlier claims will further be limited to values between 1 and the upper limit, which merely limits the limitation to a particular value (an environment of finite values), and thus it cannot be integrated into a practical application, nor can it provide significantly more than the judicial exception (see MPEP 2106.05(h)).
Thus, claim 12 is non-patent eligible.

Regarding claim 13, Step 1: The claim is directed to a method, which is considered to be in the category of process. The claim satisfies step 1. Step 2A Prong 1: (a) “The method of claim 11, wherein the upper limit is proportionate to a logarithm of a number of the respective tests.” - The limitation is directed to the upper limit being relative to a logarithm of the number of tests. The limitation is directed to calculating a logarithm of a number of tests, which would involve a mathematical calculation/operation. Thus the limitation is directed to math. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Thus, claim 13 is non-patent eligible.

Regarding claim 14, Step 1: The claim is directed to a method, which is considered to be in the category of process. The claim satisfies step 1. Step 2A Prong 1: (a) “The method of claim 8, wherein a first size of data that represents a first weight of the respective weights is smaller than a second size of word size of the processor,” - The limitation is directed to a size value corresponding to a weight value that has been determined to be smaller than a second word size related to the processor. Determining one weight to be smaller than another can be performed in the human mind using evaluation, observation, and judgment, with the aid of pen and paper. Thus the limitation is directed to a mental process. Step 2A Prong 2 and Step 2B: (a) “and wherein multiple weights of the respective weights are combined into a first word of the processor and processed in parallel.” - The limitation merely recites multiple weight values being combined into a word representation related to the processor and applied to the computer by processing in parallel.
The act of combining data to be processed on a computer is recited at a high level of generality, and it cannot be integrated into a practical application, nor can it provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 14 is non-patent eligible.

Regarding claim 16, Step 1: The claim is directed to a non-transitory computer-readable medium, which is directed to the category of an article of manufacture. The claim satisfies step 1. Step 2A Prong 1: “testing the computer code with the respective tests; and in response to determining that a second test of the subset of the respective tests fails, and to determining that the first test depends on the subset of the respective tests, determining to omit testing the computer code with the first test.” - The limitation is directed to determining a subset of failed tests, and whether the first test depends on a subset of the respective tests, in order to determine if the computer code should be tested with the first test. The limitation is directed to a process that can be performed in the human mind using evaluation, observation, and judgment, and thus it is directed to a mental process. Step 2A Prong 2 and Step 2B: “after storing the indication” - In this limitation, as in the claims above, the act of storing indications of gathered data is an insignificant extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of storing data is directed to electronic recordkeeping, and it is considered a well-understood, routine, and conventional activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 16 is non-patent eligible.

Regarding claim 17, Step 1: The claim is directed to a non-transitory computer-readable medium, which is directed to the category of an article of manufacture. The claim satisfies step 1.
Step 2A Prong 1: (a) “The non-transitory computer-readable medium of claim 15, wherein a dimensionality of the output of the neural network is equal to one.” - The limitation is directed to setting the dimensionality value of the output of the neural network to the value of one. Setting a mathematical value to a finite number, 1, can be done in the human mind and/or by using pen and paper to determine the dimensionality and set it to 1; thus the limitation is directed to a mental process. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Thus, claim 17 is non-patent eligible.

Regarding claim 18, Step 1: The claim is directed to a non-transitory computer-readable medium, which is directed to the category of an article of manufacture. The claim satisfies step 1. Step 2A Prong 1: “The non-transitory computer-readable medium of claim 15, wherein a dimensionality of the respective inputs of the neural network is equal to a number of the respective tests that are separate from the first test.” - The limitation is directed to the dimensionality of the neural network being equal to the number of tests separate from the first test. A human mind is capable of determining/setting the dimensionality of the neural network to have the same value as the number of tests that have occurred, using evaluation, observation, and judgment, with the aid of pen and paper to calculate the dimensionality. Thus, the limitation is directed to a mental process. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Thus, claim 18 is non-patent eligible.

Regarding claim 20, Step 1: The claim is directed to a non-transitory computer-readable medium, which is directed to the category of an article of manufacture. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1.
Step 2A Prong 2 and Step 2B: “The non-transitory computer-readable medium of claim 15, wherein a number inputs of the first input is equal to a number of tests of the respective tests that are separate from the first test.” - The limitation is directed to the number of inputs of a first input being equal to a number of tests, out of the respective tests, that are separate from the first test. The limitation does not amount to more than merely limiting the number of inputs to a particular field of use/environment of being equal to a number of tests, and it cannot be integrated into a practical application, nor can it provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 20 is non-patent eligible.

Regarding claim 22, Step 1: The claim is directed to a system, which is considered to be in the category of machine. The claim satisfies step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: “The system of claim 1, wherein the three outputs of the discrete set of three outputs are -1, 0, and 1.” -- The limitation recites that the three outputs of the discrete set of three outputs are -1, 0, and 1. The limitation amounts to no more than merely further limiting the claim to a field of use/environment, and it does not integrate into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 22 is non-patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 7, 8-10, 15, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Poornaki et al., US20200210824A1, “Scalable system and method for forecasting wind turbine failure with varying lead time windows” (referred to herein as Poornaki), in view of NPL reference “Using a Neural Network in the Software Testing Process” by Vanmali et al. (referred to herein as Vanmali).
Regarding claim 1, Poornaki teaches a system, comprising: at least one processor; and at least one memory that stores executable instructions that, when executed by the at least one processor, facilitate performance of operations, comprising … applying sets of inputs to the neural network, respective inputs of the sets of inputs identifying whether the respective tests pass or fail ([Poornaki, 0008-0009] “An example system may comprise at least one processor and memory containing instructions, the instructions being executable by the at least one processor to… applying the first selected failure prediction model to the current sensor data to generate a first failure prediction… comparing the first failure prediction to a trigger criteria…”, wherein the examiner interprets “applying the first selected failure prediction model to the current sensor data to generate a first failure prediction… comparing the first failure prediction to a trigger criteria” to be the same as “applying sets of inputs to the neural network, respective inputs… identifying whether the respective tests pass or fail”, because they are both directed to providing a set of input values to a predictive model for classification of a condition based on the model’s output.)
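For illustration only, the claimed operation of applying sets of pass/fail inputs to a network can be sketched in Python; every name below is hypothetical and appears in neither the claims nor the cited references:

```python
def encode_results(results):
    """Map pass/fail booleans to +1/-1 inputs: one 'set of inputs'."""
    return [1 if passed else -1 for passed in results]

def apply_inputs(weights, inputs):
    """Single-node stand-in for the network: a weighted sum of the
    encoded test results, thresholded to a pass/fail output."""
    score = sum(w * x for w, x in zip(weights, inputs))
    return "pass" if score >= 0 else "fail"

# One set of inputs: tests A and B passed, test C failed.
inputs = encode_results([True, True, False])
prediction = apply_inputs([1, 0, -1], inputs)  # prediction for the first test
```

The sketch only shows the shape of the claimed operation (encoded results in, pass/fail out); the claimed trinary network would carry many such nodes.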
Poornaki does not teach “generating a neural network, wherein an output of the neural network indicates whether a first test of a computer code will pass given an input of respective results of whether respective tests, of a group of tests of the computer code, pass”, nor does Poornaki explicitly teach “wherein the neural network comprises a trinary neural network...wherein respective weights of the neural network indicate one of a discrete set of three outputs.” Vanmali teaches: generating a neural network, wherein an output of the neural network indicates whether a first test of a computer code will pass given an input of respective results of whether respective tests, of a group of tests of the computer code, pass ([Vanmali, page 46, 48, 53, sec 1, 2] “A multi-layer neural network is trained on the original software application by using randomly generated test data that conform to the specification…the trained neural network is used to produce a particular output when presented with an input signal…the synaptic weights of the network…,” wherein the examiner interprets “a positive weight represents a positive correlation and a negative weight reflects a negative correlation” to be the same as “the respective outputs of the three outputs indicate a positive correlation…and a negative correlation” because they are both directed to representing the direction of influence between a node’s output and the network output through the sign of the weight. The disclosure of initializing synaptic weights in a range from -0.5 to 0.5 is interpreted to be the same as including a zero weight, which represents no correlation, because they are both directed to representing the absence of influence between a node’s output and the network output.)
“wherein respective weights of the neural network indicate one of a discrete set of three outputs, and wherein one output of the discrete set of three outputs is a positive number, which indicates a positive correlation between a respective output of a respective node of the neural network and the output of the neural network, another output of the discrete set of three outputs is a negative number, which indicates a negative correlation between the respective output and the output, and a third output of the discrete set of three outputs is zero, which indicates no correlation between the respective output and the output” ([Vanmali, page 46, 48, 53, sec 1, 2] “A multi-layer neural network is trained on the original software application by using randomly generated test data that conform to the specification…the trained neural network is used to produce a particular output when presented with an input signal…the synaptic weights of the network…” AND [Vanmali, page 51-52] “The comparison tool is employed as an independent method of comparing the results from the neural network and the results of the tested versions of the credit approval application.”, and Table I [image omitted]: “The tool uses the output of a neural network and the output of the tested application.
The distance between the outputs is taken as the absolute difference between the value of the winning node for each output and the corresponding value in the application”, wherein the examiner interprets “a positive weight represents a positive correlation while a negative weight reflects a negative correlation” to be the same as one output of the discrete set of three outputs is a positive number, which indicates a positive correlation between a respective output of a respective node of the neural network and the output of the neural network, and another output of the discrete set of three outputs is a negative number, which indicates a negative correlation between the respective output and the output, as both are directed to representing the direction of correlation between a respective node output of the neural network and the output of the neural network using signed weight values, and Table I, which defines output categories including true positive, true negative, false positive, and false negative, to be the same as a third output of the discrete set of three outputs is zero, which indicates no correlation between the respective output and the output, as both are directed to an output state in which there is no correspondence between the neural network output and the tested application output.) 
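For illustration only, the mapping the claim draws between a weight’s sign and a correlation direction can be sketched as follows (hypothetical code, drawn from neither Vanmali nor the claims):

```python
def correlation(weight):
    """Map a trinary weight in {-1, 0, +1} to the claimed correlation state."""
    if weight > 0:
        return "positive"  # node output moves with the network output
    if weight < 0:
        return "negative"  # node output moves against the network output
    return "none"          # zero weight: no correlation

states = [correlation(w) for w in (1, -1, 0)]
```

The point of the sketch is only that the three discrete weight values encode three mutually exclusive correlation states.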
“in response to determining that a first set of inputs of the sets of inputs to the neural network results in a failure output, storing an indication that the first test is dependent on a subset of the respective tests indicated as failing by the first set of inputs” ([Vanmali, page 52, sec 3] “the output of the tested application is wrong, the evaluation of the comparison tool is classified as being a true negative or a category of 2, i.e., the determination that the output of the application is an actual error.”, wherein the examiner interprets “the determination that the output of the application is an actual error” to be the same as “storing an indication… indicated as failing” because they are both directed to recording a classification or flag that a specific test has failed based on the neural network’s evaluation of a given set of inputs.) Poornaki, Vanmali, and the instant application are analogous art because they are all directed to neural network–based analysis of test or sensor data to detect failures or errors. It would have been obvious to a person having ordinary skill in the art, prior to the effective filing date of the claimed invention, to modify the neural network system of Poornaki to include the artificial neural network–based software testing model of Vanmali in order to efficiently model software behavior and determine test failures using a trained neural network, as suggested by Vanmali ([Vanmali, page 46, 48, 53, sec 1, 2] “A multi-layer neural network is trained on the original software application by using randomly generated test data that conform to the specification…the trained neural network is used to produce a particular output when presented with an input signal…the synaptic weights of the network…”). Claims 8 and 15 are analogous to claim 1 and thus are rejected for the same reasons set forth above. Regarding claim 7, Poornaki and Vanmali teach The system of claim 1, (see rejection of claim 1).
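For illustration only, the claim-1 limitation of storing a dependency indication when a set of inputs produces a failure output can be sketched as a loop over failing-test combinations (all names hypothetical):

```python
from itertools import combinations

def find_dependencies(predict, tests):
    """Record each failing subset of tests for which the model predicts
    that the first test fails (the stored 'indication' of the claim)."""
    dependencies = []
    for r in range(1, len(tests) + 1):
        for failing in combinations(tests, r):
            inputs = {t: t not in failing for t in tests}  # True = pass
            if predict(inputs) == "fail":
                dependencies.append(set(failing))
    return dependencies

# Toy stand-in for the neural network: the first test fails whenever "B" fails.
model = lambda inputs: "pass" if inputs["B"] else "fail"
deps = find_dependencies(model, ["A", "B"])
```

With the toy model above, the recorded dependencies are exactly the subsets that include "B".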
Vanmali further teaches wherein the indication is a first indication, wherein the subset is a first subset, and wherein the operations further comprise: in response to determining that a second set of inputs of the sets of inputs to the neural network results in the failure output, storing a second indication that the first test is dependent on a second subset of the respective tests indicated as failing by the second set of inputs. ([Vanmali, page 52, sec. 3] “the tested application itself may produce errors, which is the main reason for the testing process. If the ANN output is correct while the output of the tested application is wrong, the evaluation of the comparison tool is classified as being a true negative or a category of 2, i.e., the determination that the output of the application is an actual error.”, wherein the examiner interprets “the tested application itself may produce errors” to be the same as a “first indication... first subset” and “category of 2, i.e., the determination that the output of the application is an actual error” to be the same as storing an indication that a failure output from a second set of inputs is linked to a second subset of tests.) Poornaki, Vanmali, and the instant application are analogous art because they are all directed to a system for evaluating neural network outputs in relation to application performance errors. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method for assessing neural network performance disclosed by Poornaki to include the “evaluation of the comparison tool is classified as being a true negative or a category of 2, i.e., the determination that the output of the application is an actual error” disclosed by Vanmali. One would be motivated to do so to efficiently improve the robustness of error detection in neural network-based systems, as suggested by Vanmali (Vanmali, [page 52, sec.
3] “the evaluation of the comparison tool is classified as being a true negative or a category of 2, i.e., the determination that the output of the application is an actual error.”) Regarding claim 9, Poornaki and Vanmali teach The method of claim 8, (claim 8 is analogous to claim 1; see claim 1). Vanmali further teaches wherein the respective weights of the neural network indicate one of the positive correlation, the negative correlation, or no correlation between the respective outputs of the respective nodes of the neural network and the output of the neural network. ([Vanmali, page 47, sec 2] “Each neuron in the network is used to perform calculations that contribute to the overall learning process, or training, of the network. The neuron interconnections are associated with synaptic weights that store the information computed during the training of the network. The neural network is thus a massive parallel information processing system that utilizes distributed control to learn and store knowledge about its environment”, wherein the examiner interprets “the synaptic weights” to be the same as “respective weights of the neural network,” as they both refer to the weights used in the neural network model, and the sign of the weight can be interpreted as reflecting the direction of the relationship between an input and the output, where a positive weight represents a positive correlation and a negative weight reflects a negative correlation.) Poornaki, Vanmali, and the instant application are analogous art because they are all directed to neural networks for evaluating test data to determine pass/fail outcomes and storing indications of test dependencies.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 8 disclosed by Poornaki and Vanmali to include the “trained neural network is used to produce a particular output when presented with an input signal” disclosed by Vanmali. One would be motivated to do so to effectively determine test outcomes based on simulated model outputs without executing the full underlying application, as suggested by Vanmali ([Vanmali, page 47, sec 2] “The neuron interconnections are associated with synaptic weights that store the information computed during the training of the network. The neural network is thus a massive parallel information processing system that utilizes distributed control to learn and store knowledge about its environment”). Regarding claim 10, Poornaki and Vanmali teach The method of claim 8, (claim 8 is analogous to claim 1; see claim 1). Vanmali further teaches further comprising: after determining that the first set of inputs to the neural network results in the failure, and in response to determining that a second set of inputs of the sets of inputs ([Vanmali, page 52, sec. 3] “the tested application itself may produce errors, which is the main reason for the testing process. If the ANN output is correct while the output of the tested application is wrong, the evaluation of the comparison tool is classified as being a true negative or a category of 2, i.e., the determination that the output of the application is an actual error.”, wherein the examiner interprets “the output of the tested application is wrong” to be the same as the step of determining a failure from the first set of neural network inputs and then providing a second set of inputs, because the network was tested on an initial set and then another set is determined) comprises a superset of the first set of inputs, determining, by the system, to omit applying the second set of inputs to the neural network.
([Vanmali, page 46, sec 1] “Third, saving an exhaustive set of test cases with the outputs of the original version may be infeasible for real-world applications.”, wherein the examiner interprets “exhaustive set of test cases” to be the same as a “superset of the first set of inputs” and “saving … test cases…may be infeasible” to be the same as omitting application of another set of input data due to storage or other resource constraints.) Poornaki, Vanmali, and the instant application are analogous art because they are all directed to neural network-based testing methods that determine when additional testing inputs can be omitted to reduce unnecessary execution. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 8 disclosed by Poornaki and Vanmali to include the “saving an exhaustive set of test cases with the outputs of the original version may be infeasible for real-world applications” disclosed by Vanmali. One would be motivated to do so to efficiently conserve computational and storage resources by avoiding redundant test execution, as suggested by Vanmali ([Vanmali, page 46, sec 1] “saving an exhaustive set of test cases with the outputs of the original version may be infeasible for real-world applications.”). Regarding claim 22, Poornaki and Vanmali teach The system of claim 1, (see rejection of claim 1).
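For illustration only, the claim-10 pruning step, omitting a second set of inputs that is a superset of an already-failing first set, can be sketched as (hypothetical code):

```python
def should_skip(candidate, known_failing_sets):
    """Omit applying a candidate input set when it is a superset of a set
    of failing tests already known to produce the failure output."""
    return any(candidate >= known for known in known_failing_sets)

known = [{"B"}]                    # {"B"} already yielded the failure output
skip_ab = should_skip({"A", "B"}, known)   # superset of {"B"}: omit
skip_a = should_skip({"A"}, known)         # not a superset: still applied
```

The `>=` operator on Python sets tests the superset relation, which is the whole of the claimed determination.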
Vanmali further teaches wherein the three outputs of the discrete set of three outputs are -1, 0, and 1 ([Vanmali, page 46] “A multi-layer neural network is trained on the original software application by using randomly generated test data that conform to the specification,” [Vanmali, page 58, sec 4.2] “The initial synaptic weights of the neural network were obtained randomly and covered a range between -0.5 and 0.5,” [Vanmali, page 51, sec 3] “The comparison tool is employed as an independent method of comparing the results from the neural network and the results of the tested versions of the credit approval application,” and Table I on page 52, wherein the examiner interprets the disclosure of synaptic weights having positive values, negative values, and values within a range that includes zero (for example, the range of -0.5 to 0.5 disclosed in Vanmali) to be the same as the three outputs of the discrete set of three outputs being -1, 0, and 1, as both are directed to representing three distinct correlation states between a respective output of a respective node of the neural network and the output of the neural network using signed numerical values, and wherein the examiner interprets the categorical output comparison shown in Table I to be the same as the discrete set of three outputs, as both are directed to classifying outcomes into mutually exclusive states corresponding to agreement, disagreement, or absence of correspondence between outputs.) Poornaki, Vanmali, and the instant application are analogous art because they are all directed to using neural network-based models to analyze and evaluate outcomes in order to identify failure conditions and relationships within a system.
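For illustration only, a continuous weight range such as Vanmali’s (-0.5, 0.5) can be reduced to the claimed discrete set {-1, 0, 1} by standard ternary quantization, sketched below (the threshold value is hypothetical):

```python
def ternarize(weight, threshold=0.25):
    """Quantize a real-valued weight to the discrete set {-1, 0, 1};
    magnitudes below the threshold map to 0 (no correlation)."""
    if weight > threshold:
        return 1
    if weight < -threshold:
        return -1
    return 0

quantized = [ternarize(w) for w in (0.4, -0.3, 0.1)]
```

The sketch makes the distinction concrete: a continuous signed range and a discrete three-valued set are related by a quantization step, not identical representations.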
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 1 disclosed by Poornaki and Vanmali to include the representation of correlation states using signed numerical values as disclosed by Vanmali, which employs neural network models with synaptic weights that take on positive values, negative values, and values within a range that includes zero. One would be motivated to do so to efficiently represent and interpret correlation states within the neural network without changing the underlying operation of the system, as suggested by Vanmali ([Vanmali, page 46] “A multi-layer neural network is trained on the original software application by using randomly generated test data that conform to the specification.”). Claims 2, 3, and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Poornaki in view of Vanmali in further view of Fong et al., US10838848B2, “System and method for test generation” (referred to herein as Fong). Regarding claim 2, Poornaki and Vanmali teach The system of claim 1, (see rejection of claim 1). Poornaki and Vanmali do not teach wherein the output is a first output, and wherein the neural network comprises a number of outputs that corresponds to a number of tests of the group of tests. Fong teaches wherein the output is a first output, and wherein the neural network comprises a number of outputs that corresponds to a number of tests of the group of tests ([Fong, pages 19-20, cols. 2-3, lines 60-61, 62-67, 1-3] “a set of outputs based on the training received by the neural network ... the test outputs may include, for example, a test script that can be run to conduct the test described by the tester, the test script including specific actions that need to be taken and the order in which they need to be taken.
In an alternate embodiment, the outputs could be compiled binaries containing object code, which when executed by a processor, cause the processor to run the actual tests and to record the outputs (e.g., storing screenshots for verification or checking for the presence of error/success codes)”, wherein the examiner interprets the “set of outputs based on the training received by the neural network... the outputs could be compiled binaries containing object code, which when executed by a processor, cause the processor to run the actual tests and to record the outputs” to be the same as a neural network that will have a number of outputs that will correspond to a group of tests.) Poornaki, Vanmali, Fong, and the instant application are analogous art because they are all directed to neural networks that analyze and process outputs to find errors in software based on test cases as inputs to the neural networks. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 1 disclosed by Poornaki and Vanmali to include the process to determine which “outputs could be compiled binaries containing object code” disclosed by Fong. One would be motivated to do so to efficiently manage and execute test cases automatically, as suggested by Fong ([Fong, pages 19-20, cols. 2-3, lines 60-61, 62-67, 1-3] “the outputs could be compiled binaries containing object code, which when executed by a processor, cause the processor to run the actual tests and to record the outputs”). Regarding claim 3, Poornaki, Vanmali, and Fong teach The system of claim 2, (see rejection of claim 2). Fong further teaches wherein the sets of inputs mask a value of a first input to the neural network, the first input identifying whether the first test passes or fails.
([Fong, pages 19-20, cols. 2 and 4, lines 58-62 and 5-10] “The neural network is utilized to identify and/or classify tokenized sections of the natural language strings to produce a set of outputs based on the training received by the neural network from the legacy test automation data... After a classification is made in respect of one or more test actions to be taken, the neural network is utilized to focus in on a subset of test parameters to utilize for the test action that has been identified by the convolutional neural network.”, wherein the examiner interprets the neural network’s use of sets of inputs to determine specific parameters related to test actions as analogous to masking a value of an input identifying pass/fail status.) Poornaki, Vanmali, Fong, and the instant application are analogous art because they are all directed to neural networks that analyze and process outputs to find errors in software based on test cases as inputs to the neural networks. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 2 disclosed by Poornaki and Vanmali to include the tokenization process to test using neural networks as disclosed by Fong. One would be motivated to do so to efficiently focus on the test data to be used, as suggested by Fong ([Fong, pages 19-20, col. 3, lines 60-61] “the neural network is utilized to focus in on a subset of test parameters to utilize for the test action”). Regarding claim 5, Poornaki and Vanmali teach The system of claim 1, (see rejection of claim 1). Poornaki and Vanmali do not teach wherein the neural network is a first neural network, wherein the sets of inputs are first sets of inputs, and wherein the operations further comprise: applying second sets of inputs to a second neural network to determine whether a second test is dependent on a respective second set of the second sets of inputs.
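For illustration only, the claim-3 masking limitation, withholding the first test’s own pass/fail value from the input set, can be sketched as (encoding choices hypothetical):

```python
MASKED = 0  # hypothetical code: +1 = pass, -1 = fail, 0 = masked/unknown

def build_inputs(results, mask_index):
    """Encode pass/fail results as +/-1 and mask one position so it
    carries no indication of whether that test passes or fails."""
    encoded = [1 if passed else -1 for passed in results]
    encoded[mask_index] = MASKED
    return encoded

masked_inputs = build_inputs([True, False, True], mask_index=0)
```

The masked position thus indicates neither pass nor fail, which is the third input state the dependent claims recite.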
Fong teaches wherein the neural network is a first neural network, wherein the sets of inputs are first sets of inputs, and wherein the operations further comprise: applying second sets of inputs to a second neural network to determine whether a second test is dependent on a respective second set of the second sets of inputs ([Fong, page 23, col 10, lines 36-40] “In some embodiments, the neural network can be provided the state and the action to be taken as an input to identify one or more dependencies in the test case, and to select parameters based at least on the identified one or more dependencies...”, wherein the examiner interprets that both the instant application and Fong use multiple neural networks, each handling different sets of inputs and determining dependencies within test cases.) Poornaki, Vanmali, Fong, and the instant application are analogous art because they are all directed to neural networks that analyze and process outputs to find errors in software based on test cases as inputs to the neural networks. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 1 as disclosed by Poornaki and Vanmali to include the process of providing the state and next action to the neural network disclosed by Fong. One would be motivated to do so to efficiently and automatically select parameters, as suggested by Fong ([Fong, page 23, col. 10, lines 36-40] “...and to select parameters based at least on the identified one or more dependencies”). Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Poornaki in view of Vanmali in view of Fong further in view of Tsoukalas et al., US10678678B1, “Ordered test execution based on test coverage” (referred to herein as Tsoukalas). Regarding claim 4, Poornaki, Vanmali, and Fong teach The system of claim 3, (see rejection of claim 3).
Poornaki, Vanmali, and Fong do not teach wherein the first input is configured to indicate one of the first test passing, the first test failing, and no indication of whether the first test passes or fails, and wherein the value of the first test is masked to provide no indication of whether the first test passes or fails. Tsoukalas teaches wherein the first input is configured to indicate one of the first test passing, the first test failing, and no indication of whether the first test passes or fails, and wherein the value of the first test is masked to provide no indication of whether the first test passes or fails ([Tsoukalas, page 16, col 10, lines 5-9] “The test execution may implement a functionality for success/failure assessment. Using the functionality for success/failure assessment, the test execution module may determine whether the service or program passes or fails a particular test.”, wherein the examiner interprets that both Tsoukalas and the instant application are directed to an assessment that determines and indicates whether or not a test has passed or failed.) Poornaki, Vanmali, Fong, Tsoukalas, and the instant application are analogous art because they are all directed to methods of determining software errors using test data and neural networks. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 3 disclosed by Poornaki, Vanmali, and Fong to include the process of assessing pass or failure of the test disclosed by Tsoukalas. One would be motivated to do so to efficiently use a test execution module to make the pass/fail determination, as suggested by Tsoukalas ([Tsoukalas, col 10, lines 6-9] “the test execution module may determine whether the service or program passes or fails a particular test.”). Regarding claim 6, Poornaki, Vanmali, and Fong teach The system of claim 5, (see rejection of claim 5).
Poornaki in view of Vanmali in view of Fong does not teach wherein the indication is a first indication, wherein the first sets of inputs comprises a second indication of whether the second test passes, and wherein the second sets of inputs comprises a third indication of whether the first test passes. Tsoukalas teaches wherein the indication is a first indication, wherein the first sets of inputs comprises a second indication of whether the second test passes, and wherein the second sets of inputs comprises a third indication of whether the first test passes ([Tsoukalas, page 13, cols. 3-4, lines 49-53 and 66-1] “In one embodiment, a suite of tests 180 may be determined based (at least in part) on user input. For example, a developer associated with program code 170 for a software product may supply or indicate tests that she or he deems to be relevant to the software product…various heuristics may be applied to determine whether the software product passes or fails a particular test.”, wherein the examiner interprets tests being determined based on user input, and determining whether a software test passes, to be the same as having indications that determine or classify whether tests pass or fail.) Poornaki, Vanmali, Fong, Tsoukalas, and the instant application are analogous art because they are all directed to methods of determining software errors using test data and neural networks. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 5 disclosed by Poornaki, Vanmali, and Fong to include the heuristic approach to determining pass or fail of a software test disclosed by Tsoukalas.
One would be motivated to do so to efficiently identify which tests pass or fail using user-defined software tests, as suggested by Tsoukalas ([Tsoukalas, page 13, cols. 3-4, lines 49-53 and 66-1] “various heuristics may be applied to determine whether the software product passes or fails a particular test.”). Claims 14, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Poornaki in view of Vanmali further in view of Yalla. Regarding claim 14, Poornaki and Vanmali teach The method of claim 8, (see rejection of claim 8). Poornaki and Vanmali do not teach wherein a first size of data that represents a first weight of the respective weights is smaller than a second size of word size of the processor, and wherein multiple weights of the respective weights are combined into a first word of the processor and processed in parallel. Yalla teaches wherein a first size of data that represents a first weight of the respective weights is smaller than a second size of word size of the processor, and wherein multiple weights of the respective weights are combined into a first word of the processor and processed in parallel ([Yalla, col 18, lines 22-30] “retraining the neural network model or the execution model based on the set of test cases and the one or more of the configurations, the scripts, or the test targets. Although FIG. 4 shows example blocks of process 400, in some implementations, process 400 may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 4. Additionally, or alternatively, two or more of the blocks of process 400 may be performed in parallel.”, wherein the examiner interprets “retraining the neural network (NN) model … based on the set of the test cases” to be the same as “first size of data that represents a first weight” since the amount of data throughput processed by the NN is limited by the network size (i.e., the weights in the input layer).
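For illustration only, the claim-14 idea of packing several sub-word-sized weights into one processor word can be sketched with 2-bit ternary codes, sixteen of which fit in a 32-bit word (the encoding is hypothetical):

```python
ENCODE = {-1: 0b10, 0: 0b00, 1: 0b01}   # 2 bits per trinary weight
DECODE = {code: w for w, code in ENCODE.items()}

def pack(weights):
    """Combine up to 16 two-bit weights into a single 32-bit word."""
    word = 0
    for i, w in enumerate(weights):
        word |= ENCODE[w] << (2 * i)
    return word

def unpack(word, n):
    """Recover n weights from one packed word."""
    return [DECODE[(word >> (2 * i)) & 0b11] for i in range(n)]

weights = [1, -1, 0, 1]
round_trip = unpack(pack(weights), len(weights))
```

Once packed, a single word-wide operation touches all sixteen weights at once, which is the parallelism the limitation recites.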
The examiner further interprets “blocks of process may be performed in parallel” to be the same as “processed in parallel”. Poornaki, Vanmali, Yalla, and the instant application are analogous art because they are all directed to methods of determining software errors using test data and neural networks. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 8 disclosed by Poornaki and Vanmali to include the process flow for running the test cases through the NN disclosed by Yalla. One would be motivated to do so to effectively determine the best approach to train the NN model on test cases, as suggested by Yalla ([Yalla, col 18, lines 22-30] “in some implementations, process 400 may include additional blocks, fewer blocks, different blocks”). Regarding claim 17, Poornaki and Vanmali teach The non-transitory computer-readable medium of claim 15, (see rejection of claim 15). Poornaki and Vanmali do not teach wherein a dimensionality of the output of the binary neural network is equal to one. Yalla teaches wherein a dimensionality of the output of the binary neural network is equal to one ([Yalla, col 6, lines 32-36] “In some implementations, the neural network model may include a logistic regression model with a rectified linear unit activation for intermediate layers and a sigmoid activation for a final layer. In a neural network, an activation function is responsible for transforming a summed weighted input from a node into activation of the node or output for that input”, wherein the examiner interprets “a sigmoid activation for a final layer” to be the same as “a dimensionality of the output of the binary neural network is equal to one” because the sigmoid activation of a binary neural network maps its input to a single scalar value between 0 and 1.) Poornaki, Vanmali, Yalla, and the instant application are analogous art because they are all directed to an output of a neural network, e.g.
a binary NN, having a dimensionality value of one. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of the computer-readable medium of claim 15 as disclosed by Poornaki and Vanmali to include the sigmoid activation, which produces a single scalar output, as further disclosed by Yalla. One would be motivated to do so to effectively produce an output of a [binary] neural network with a dimensionality equal to one, as suggested by Yalla ([Yalla, col 6, lines 32-36] “sigmoid activation for a final layer…transforming a summed weighted input…or output for that input”). Regarding claim 18, Poornaki and Vanmali teach The non-transitory computer-readable medium of claim 15, (see rejection of claim 15). Poornaki and Vanmali do not teach wherein a dimensionality of the respective inputs of the binary neural network is equal to a number of the respective tests that are separate from the first test. Yalla teaches wherein a dimensionality of the respective inputs of the binary neural network is equal to a number of the respective tests that are separate from the first test ([Yalla, col 8, lines 55-67 and col 9, lines 1-4] “the testing platform may process the software data, with the trained neural network model, to predict a set of test cases to execute for testing the software in the software development platform. For example, a test case may include a set of test inputs, execution conditions, expected results, and/or the like developed for a particular objective. The set of test cases may include test cases to be applied to software tested by the software development platform to detect errors or faults in a software code module (e.g., program, application, script, and/or the like) before the software code module is deployed, updated, and/or the like.
The set of test cases may include test cases selected from historical sets of test cases that are provided as part of the historical test configuration data, a repository of test cases that is part of the testing platform, a repository of test cases that is part of the software development system, or the like. The set of test cases may also be generated dynamically by the testing platform”, wherein the examiner interprets “set of test cases” and “test cases selected from historical sets” to be the same as “respective tests”, and “The set of test cases may include test cases to be applied to software” to be the same as “dimensionality of the respective inputs” that are separate from the first test.) Poornaki, Vanmali, Yalla, and the instant application are analogous art because they are all directed to respective inputs of the neural network being equal to the number of respective tests. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of the computer-readable medium of claim 15 as disclosed by Poornaki and Vanmali to include the set of test cases selected from historical sets, which would serve as the inputs, as further disclosed by Yalla. One would be motivated to do so to effectively have the number of respective tests equal the number of inputs that are separate from the first test, as suggested by Yalla ([Yalla, col 8, lines 55-67 and col 9, lines 1-4], “set of test cases to execute for testing the software in the software development platform. For example, a test case may include a set of test inputs…from historical sets of test cases that are provided as part of the historical test configuration data, a repository of test cases that is part of the testing platform, a repository of test cases that is part of the software development system, or the like”).

Claims 16 and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Poornaki in view of Vanmali further in view of Tsoukalas.

Regarding claim 16, Poornaki and Vanmali teach The non-transitory computer-readable medium of claim 15 (see rejection of claim 15). Poornaki and Vanmali do not teach further comprising: after storing the indication, testing the computer code with the respective tests; and in response to determining that a second test of the subset of the respective tests fails, and to determining that the first test depends on the subset of the respective tests, determining to omit testing the computer code with the first test. Tsoukalas teaches further comprising: after storing the indication, testing the computer code with the respective tests; and in response to determining that a second test of the subset of the respective tests fails, and to determining that the first test depends on the subset of the respective tests, determining to omit testing the computer code with the first test. ([Tsoukalas, col 20, lines 1-5], “…moving the particular test from the suite of tests to a suite of deprecated tests, wherein the suite of deprecated tests is excluded from consideration for the subset of tests that are likely to exercise the second set of program code…”, wherein the examiner interprets “moving the particular test from the suite of tests to a suite of deprecated tests” and excluding it from consideration to be the same as “determining to omit testing the computer code with the first test,” as both actions relate to excluding or omitting specific tests based on the failure of another test.) Poornaki, Vanmali, Tsoukalas, and the instant application are analogous art because they are all directed to methods of determining software errors using test data and neural networks.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the non-transitory computer-readable medium of claim 15 as disclosed by Poornaki and Vanmali to include the test selection system disclosed by Tsoukalas. One would be motivated to do so to effectively include or omit certain tests for processing as suggested by Tsoukalas ([Tsoukalas, col 20, lines 1-5], “wherein the suite of deprecated tests is excluded from consideration for the subset of tests that are likely to exercise the second set of program code”).

Regarding claim 20, Poornaki and Vanmali teach The non-transitory computer-readable medium of claim 15 (see rejection of claim 15). Poornaki and Vanmali do not teach wherein a number of inputs of the first input is equal to a number of tests of the respective tests that are separate from the first test. Tsoukalas teaches wherein a number of inputs of the first input is equal to a number of tests of the respective tests that are separate from the first test. ([Tsoukalas, col 5, lines 42-49], “In one embodiment, the selected subset 181 of tests are likely to be exercised (e.g., encountered, executed, or otherwise performed) by the updated program code 171. The subset 181 of the tests may be selected based (at least in part) on the mapping 130 and on the change data associated with the updated program code 171.”, wherein the examiner interprets “the selected subset of tests are likely to be exercised by the updated program code” to be the same as “a number of inputs of the first input is equal to a number of tests of the respective tests” because in both cases the final subset of tests is determined from another set of tests.) Poornaki, Vanmali, Tsoukalas, and the instant application are analogous art because they are all directed to methods of determining software errors using test data and neural networks.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of the computer-readable medium of claim 15 as disclosed by Poornaki and Vanmali to include the process of selecting a subset of test data disclosed by Tsoukalas. One would be motivated to do so to effectively eliminate unnecessary test data, as suggested by Tsoukalas ([Tsoukalas, col 5, lines 42-49], “…The subset 181 of the tests may be selected based (at least in part) on the mapping 130 and on the change data associated with the updated program code 171”).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVAN KAPOOR whose telephone number is (703)756-1434. The examiner can normally be reached Monday - Friday: 9:00 AM - 5:00 PM EST (times may vary). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DEVAN KAPOOR/
Examiner, Art Unit 2126

/VAN C MANG/
Primary Examiner, Art Unit 2126
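The claim interpretations above turn on a simple structure: a binary classifier with one input per test other than the first test, and a single sigmoid output whose dimensionality is one. The sketch below is purely illustrative and is not code from the application or from the cited references; all function names, weights, and thresholds are hypothetical. It shows why a sigmoid final layer yields a one-dimensional output (claims 17-18) and how the claim 16 logic of omitting a dependent test after another test fails could look.

```python
import math

def sigmoid(x):
    # Squashes any real input into (0, 1); thresholding at 0.5 gives a
    # binary "depends" / "does not depend" decision.
    return 1.0 / (1.0 + math.exp(-x))

def predict_dependency(results, weights, bias=0.0):
    # Input dimensionality equals the number of tests other than the
    # first test (one pass/fail result per test); output dimensionality
    # is one: a single dependency probability.
    z = sum(w * r for w, r in zip(weights, results)) + bias
    return sigmoid(z)

def should_omit_first_test(results, weights, bias=0.0, threshold=0.5):
    # Omit the first test when (a) some other test failed (result 0)
    # and (b) the model predicts the first test depends on the others.
    some_failure = any(r == 0 for r in results)
    depends = predict_dependency(results, weights, bias) > threshold
    return some_failure and depends

# Example: the second of three other tests failed, and the (hypothetical)
# weights make the model predict a dependency, so the first test is omitted.
omit = should_omit_first_test([1, 0, 1], weights=[0.1, 0.1, 0.1], bias=1.0)
```

Under this reading, the sigmoid's single scalar output is what the examiner equates with an output dimensionality of one, while the length of `results` plays the role of the input dimensionality tied to the number of other tests.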

Prosecution Timeline

Oct 21, 2021
Application Filed
Mar 04, 2025
Non-Final Rejection — §101, §103
May 12, 2025
Interview Requested
May 21, 2025
Applicant Interview (Telephonic)
May 21, 2025
Examiner Interview Summary
Jun 10, 2025
Response Filed
Aug 06, 2025
Final Rejection — §101, §103
Sep 29, 2025
Examiner Interview Summary
Sep 29, 2025
Applicant Interview (Telephonic)
Oct 13, 2025
Response after Non-Final Action
Nov 11, 2025
Request for Continued Examination
Nov 17, 2025
Response after Non-Final Action
Dec 23, 2025
Non-Final Rejection — §101, §103
Feb 12, 2026
Examiner Interview Summary
Feb 12, 2026
Applicant Interview (Telephonic)
Apr 06, 2026
Response Filed


Prosecution Projections

3-4
Expected OA Rounds
11%
Grant Probability
28%
With Interview (+16.7%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
