Prosecution Insights
Last updated: April 19, 2026
Application No. 17/836,102

ANN-BASED PROGRAM TESTING METHOD, TESTING SYSTEM AND APPLICATION

Non-Final OA: §103, §112, §DP, §Other
Filed: Jun 09, 2022
Examiner: CHOWDHURY, INDRANIL
Art Unit: 2114
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Zunyi Vocational And Technical College
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (grants 90%, above average; 130 granted / 145 resolved; +34.7% vs TC avg)
Interview Lift: +14.7% (moderate, roughly +15% lift, for resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor; 19 currently pending)
Total Applications: 164 (career history, across all art units)

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 23.1% (-16.9% vs TC avg)
§102: 23.0% (-17.0% vs TC avg)
§112: 29.3% (-10.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 145 resolved cases.

Office Action

Grounds: §103, §112, double patenting (DP), other
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-9 are pending for examination. Claim 1 is an independent claim. This Office Action is Non-Final.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 06/09/2022 is in compliance with the provisions of 37 CFR 1.97, 37 CFR 1.98, and MPEP § 609. The Information Disclosure Statement has been placed in the application file and the information referred to therein has been considered as to the merits.

Drawing Objections

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference characters not mentioned in the description: Fig. 3, 102, 103 and 104. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities:

Par. [56] reads: “[56] Although the embodiment uses the BP ANN model to construct the test model, but it is not limited to using only the type of the ANN model to construct the test model.”
Please change to: “[56] Although the embodiment uses the BP ANN model to construct the test model,”

Pars. [60]-[61] appear to include an inappropriate paragraph break; please correct: “[60] A state change of the corresponding position may be identified through computer programming so as to complete automatic identification. An automatic identification [61] method includes:”

Par. [63] recites: “[63] S2: If a label is greater than or equal to a distributed number, label distribution is conducted according to Table 1 and 0 is returned to; and if not, 1 is returned to.”

Please change to: “[63] S2: If a label is greater than or equal to a distributed number, label distribution is conducted according to Table 1 and 0 is returned.”

Appropriate correction is required.

Claim Interpretation

Claim 1 recites, in relevant part: “S1, routine testing: … S2, obtaining a test model: … S3, selecting test output values: … S4, selecting actual output values: … S5, screening the input values: …” This language does not appear to recite any particular limitation to the respective steps and will be treated as a step label or name.

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes multiple claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are recited in claim 9, with functional language italicized and generic placeholder and linking phrase in bold for claim 9:

9.
A testing system provided by the ANN-based program testing method according to claim 1, comprising: a program basic testing unit configured to preliminarily test a function of a target program and whether the program itself is wrong; an ANN trainer configured to store various training models, and obtain a convergent the ANN model by means of training samples; a curve display that is in signal connection with the ANN trainer and configured to display a convergent state of a model curve in real time; a memory configured to store the target program, a test model itself and data generated in a running process; a processor configured to run the target program and the test model; and a readable storage medium configured to transfer data in all devices.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The portions of the specification that describe the corresponding structure that performs the claimed functions for the claims above are Fig. 3, paragraphs [87]-[92].

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms.
The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-7, 9 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 of U.S. Patent No. 12,468,950 (reference patent). Although the Instant Application claims 1-7, 9 and Patent No. 12,468,950 claims 1-3 at issue are not identical, they are not patentably distinct from each other because, as shown in the chart below, Instant Application claims 1-7, 9 are anticipated by Patent No. 12,468,950 claims 1-3.

Instant Application 17/836,102, claim 1:
1. An artificial neural network (ANN)-based program testing method, comprising:
S1, routine testing: conducting code and function implementation tests on a target program, so as to ensure normal running of the target program;
S2, obtaining a test model: using an ANN trainer to construct an initial model of an ANN, using input values and corresponding output values of the target program as training samples, training the initial model by means of the training samples, stopping adding training samples when the initial model is in a convergent state, and defining a convergent model as the test model;
S3, selecting test output values: taking all the input values under the condition that an input value range of the target program is less than one million, if not, taking and inputting at least one million random input values of the target program into the test model, and computing the test output values according to the test model;
S4, selecting actual output values: taking and inputting the same input values in S3 into the target program, so as to obtain the actual output values;
S5, screening the input values: comparing the test output value and the actual output value that correspond to the same input value; sorting deviations from largest to smallest under the condition that there is a deviation between the test output value and the actual output value, and selecting input values corresponding to top-ranked 50 to 150 deviations; and obtaining a test value for testing whether the target program is correct;
S6, storing a selected test value in a readable storage medium for later copying; and
S7, transmitting the test value in S6 to the target program for running, and comparing running results with an actual functional requirement; determining that the target program has a defect under the condition that one of the running results does not satisfy the actual functional requirement; and if not, determining that the target program is a program satisfying a requirement.

U.S. Patent No. 12,468,950, claim 1:
1. An artificial neural network (ANN)-based program testing method, comprising:
S1: … conducting code and function implementation tests on a target program …;
S2: … using an ANN trainer to construct an initial test model of an ANN wherein the input values and corresponding output values of the selected function of the target program are used as training samples, wherein the initial model is trained by means of adding training samples until the initial model is in a convergent state, and defining the resulting convergent model as the test model ANN;
S3: taking all possible combinations of input values, if the total number of possible combinations of input values of the target program is less than one million, or at least one million input values, if the total number of possible combinations of input values of the selected function of the target program is greater than one million, and computing output values using the test model ANN;
S4: inputting the test input values taken in S3 into the selected function of the target program, so as to obtain actual output values;
S5: comparing a test model ANN output value and an actual output value that correspond to the same input value for each input value to determine deviations; sorting deviations from largest to smallest selecting input values corresponding 50 to 150 of the largest deviations; thus obtaining selected test values for testing whether the selected function of the target program is correct;
S6: storing the selected test values in a readable storage medium for later copying; and
S7: transmitting the test values obtained in S6 to the selected function of the target program for running, and comparing running results with an actual functional requirement of the selected function of the target program; determining that the selected function of the target program has a defect under the condition that one of the running results does not satisfy the actual functional requirement of the selected function of the target program; and if not, determining that the selected function of the target program is a program satisfying a requirement of the selected function of the target program;

Instant Application claim 2:
2. The ANN-based program testing method according to claim 1, wherein in S2, the initial model of the ANN is a back propagation (BP) neural network model and comprises an input layer, a hidden layer and an output layer; the input values of the target program are used as the input layer; the output values obtained by running the input values through the target program are used as the output layer; and the number of nodes in each of the input layer and the output layer is 32.

U.S. Patent No. 12,468,950, claim 1 (in part):
1. An artificial neural network (ANN)-based program testing method, comprising: … wherein in S2, the initial model of the ANN is a back propagation (BP) neural network model and comprises an input layer, a hidden layer and an output layer; the input values of the selected function of the target program are used as the input layer; the output values obtained by running the input values through the selected function of the target program are used as the output layer; and the number of nodes in each of the input layer and the output layer is 32; …

Instant Application claim 3:
3. The ANN-based program testing method according to claim 2, wherein in a training process, the BP neural network model uses an activation function: f(x) = 1/(1 + e^(-x)); when f(x) is less than 0.5, an output value of an output layer node is 0; and when f(x) is greater than or equal to 0.5, the output value of the output layer node is 1.

U.S. Patent No. 12,468,950, claim 1 (in part):
1. An artificial neural network (ANN)-based program testing method, comprising: … wherein in, the training process, the BP neural network model uses an activation function f(x) = 1/(1 + e^(-x)); when f(x) is less than 0.5, an output value of an output layer node is 0; and when f(x) is greater than or equal to 0.5, the output value of the output layer node is 1; …

Instant Application claim 4:
4. The ANN-based program testing method according to claim 3, wherein an output value of the hidden layer of the BP neural network model satisfies: h_m = f(Σ_{j=1}^{32} i_j · w(j,m)) (m = 1, 2, …, k), wherein w(j,m) indicates a weight from an input layer node to a hidden layer node, and k is the number of nodes in the hidden layer.

U.S. Patent No. 12,468,950, claim 1 (in part):
1. An artificial neural network (ANN)-based program testing method, comprising: … wherein an output value of the hidden layer of the BP neural network satisfies: h_m = f(Σ_{j=1}^{32} i_j · w(j,m)) (m = 1, 2, …, k), … w(j,m) indicates a weight from the input layer node to a hidden layer node, and k is the number of nodes in the hidden layer; …

Instant Application claim 5:
5. The ANN-based program testing method according to claim 4, wherein an output value of the output layer of the BP neural network model satisfies: o′_n = f(Σ_{m=1}^{k} h_m · w′(m,n)) (n = 1, 2, …, 32), wherein w′(m,n) indicates a weight from a hidden layer node to an output layer node.

U.S. Patent No. 12,468,950, claim 1 (in part):
1. An artificial neural network (ANN)-based program testing method, comprising: … wherein an output value of the output layer of the BP neural network model satisfies: o′_n = f(Σ_{m=1}^{k} h_m · w′(m,n)) (n = 1, 2, …, 32), wherein w′(m,n) indicates a weight for a hidden layer node to an output layer node; …

Instant Application claim 6:
6. The ANN-based program testing method according to claim 5, wherein in S5, the deviation satisfies d = Σ_{n=1}^{32} |o′_n − o_n|, wherein o_n indicates an output value of function implementation.

U.S. Patent No. 12,468,950, claim 1 (in part):
1. An artificial neural network (ANN)-based program testing method, comprising: … wherein in S5, the deviation satisfies d = Σ_{n=1}^{32} |o′_n − o_n|, wherein o_n indicates an output value of function implementation, …

Instant Application claim 7:
7. The ANN-based program testing method according to claim 1, wherein in S2, in a process of training the BP neural network model, curve changes are observed by means of simulation software.

U.S. Patent No. 12,468,950, claim 2:
2. The ANN-based program testing method according to claim 1, wherein in S2, in a process of training the BP neural network model, curve changes are observed by means of simulation software.

Instant Application claim 9:
9. A testing system provided by the ANN-based program testing method according to claim 1, comprising: a program basic testing unit configured to preliminarily test a function of a target program and whether the program itself is wrong; an ANN trainer configured to store various training models, and obtain a convergent the ANN model by means of training samples; a curve display that is in signal connection with the ANN trainer and configured to display a convergent state of a model curve in real time; a memory configured to store the target program, a test model itself and data generated in a running process; a processor configured to run the target program and the test model; and a readable storage medium configured to transfer data in all devices.

U.S. Patent No. 12,468,950, claim 3:
3. A testing system provided by the ANN-based program testing method according to claim 1 comprising: a program basic testing unit configured to preliminarily test a function of a target program and whether the program itself is wrong; an ANN trainer configured to store various training models, and obtain a convergence of the ANN model by means of training samples; a curve display that is in signal connection with the ANN trainer and configured to display a convergent state of a model curve in real time; a memory configured to store the target program, a test model itself and data generated in a running process; a processor configured to run the target program and the test model; and a readable storage medium configured to transfer data in all devices.

With regard to double patenting of claims 1-7, 9 of the Instant Application, claims 1-3 of U.S. Patent No. 12,468,950 are in essence a “species” of the generic invention of Instant Application claims 1-7, 9. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993) and MPEP 806.04(i).
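The BP-network formulas quoted in the claim chart above (the sigmoid activation of claim 3, the hidden-layer and output-layer sums of claims 4-5, and the deviation metric of claim 6) can be sketched in plain Python as follows. This is an illustrative sketch only: the helper names and the hidden-layer size `k` are assumptions for demonstration, not an implementation from the application or the reference patent.

```python
import math

def f(x):
    # Sigmoid activation from claim 3: f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def hidden_layer(inputs, w, k):
    # Claim 4: h_m = f(sum_{j=1..32} i_j * w(j, m)), m = 1..k,
    # where w[j][m] is the input-to-hidden weight and the claims
    # fix the input layer at 32 nodes.
    return [f(sum(inputs[j] * w[j][m] for j in range(32))) for m in range(k)]

def output_layer(h, w_prime, k):
    # Claim 5: o'_n = f(sum_{m=1..k} h_m * w'(m, n)), n = 1..32,
    # thresholded per claim 3: node output is 0 if f(x) < 0.5, else 1.
    return [1 if f(sum(h[m] * w_prime[m][n] for m in range(k))) >= 0.5 else 0
            for n in range(32)]

def deviation(o_prime, o):
    # Claim 6: d = sum_{n=1..32} |o'_n - o_n|
    return sum(abs(a - b) for a, b in zip(o_prime, o))
```

With 32 binary output nodes, `deviation` is bounded between 0 (model and program agree on every node) and 32 (they disagree on every node), which is what makes a largest-to-smallest sort of deviations meaningful in step S5.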
Claim Objections

Claim 9 is objected to because of the following informalities: Claim 9 recites “obtain a convergent the ANN model”, which is grammatically incorrect and unclear. Please correct to “obtain a convergence of the ANN model”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 3-5 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 3 recites “an activation function: f(x) = 1/(1 + e^(-γ))”. Neither the originally filed claims nor the specification describe and disclose what the terms “e” or “γ” represent in paragraph . Accordingly, the description in the specification and claims does not reasonably convey to one skilled in the relevant art that the inventor at the time the application was filed had possession of the claimed invention.

Claim 4 recites “the BP neural network model satisfies: h_m = f(Σ_{j=1}^{32} i_j · w(j,m)) (m = 1, 2, …, k)”. Neither the originally filed claims nor the specification describe and disclose what the term “i” represents in paragraph . Accordingly, the description in the specification and claims does not reasonably convey to one skilled in the relevant art that the inventor at the time the application was filed had possession of the claimed invention.

Claim 5 recites “the BP neural network model satisfies: o′_n = f(Σ_{m=1}^{k} h_m · w′(m,n)) (n = 1, 2, …, 32)”. Neither the originally filed claims nor the specification describe and disclose what the term “h” represents in paragraph . Accordingly, the description in the specification and claims does not reasonably convey to one skilled in the relevant art that the inventor at the time the application was filed had possession of the claimed invention.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites: “S2, … stopping adding training samples when the initial model is in a convergent state, and defining a convergent model as the test model; S3, selecting test output values: taking all the input values under the condition that an input value range of the target program is less than one million, if not, taking and inputting at least one million random input values of the target program into the test model, and computing the test output values according to the test model;”

This language appears to describe two distinct “stopping points”: the first when the model reaches a convergent state, and the second at one million samples. Accordingly, it would not have been clear to those of ordinary skill in the art when to stop training the model.

Claim 1, lines 10-11 further recites “under the condition that an input value range of the target program is less than one million”. The “input value range” language could reasonably be understood to describe a “range” of possible “values” the target program is configured to accept as “input”. The Examiner interprets this as intended to describe a total number of training samples and not the value of an input variable. For the remainder of this Office Action, the Examiner will interpret this limitation as given above, but the Examiner requests clarification from Applicant.

Claim 1, line 19 further recites “selecting input values corresponding to top-ranked 50 to 150 deviations”. Here the term “top” could be understood to indicate either the smallest deviations or the largest. For the remainder of this Office Action, the Examiner will interpret the limitation “top-ranked 50 to 150 deviations” as the smallest deviations.

Claim 1, line 21 further recites “storing a selected test value”. The term “selected test value” lacks antecedent basis, and it is not clear from this language how this “test value” is selected and whether the “test value” is an “input value”, an “output value”, or some other value. For the remainder of this Office Action, the Examiner will interpret this limitation to refer to an “input value” corresponding to one of the “top ranked 50 to 150 deviations”.

Claims 2-9 depend on claim 1 and inherit the deficiencies of claim 1. Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the statutory requirements.

Claim 8 recites “… wherein under the condition that a range of the input values in the training samples in S2 is less than one million, all the input values are used as the training samples, and if not, one million input values are randomly selected as the training samples”. This appears to describe limitations which are duplicative of the limitations recited in claim 1, step S3. Accordingly, a person of ordinary skill in the art would not have been reasonably apprised of the intended meaning of the claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-2 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Vanmali et al., (NPL-“Using a Neural Network in the Software Testing Process”), hereinafter Vanmali in view of Dassa et al., (U.S. Patent Publication No. 2019/0221001), hereinafter Dassa in view of Ross, (U.S. Patent Publication No. 2017/0082988), hereinafter Ross. Regarding claim 1, Vanmali teaches an artificial neural network (ANN)-based program testing method (Vanmali, title and Abstract), comprising: S1, routine testing: conducting code and function implementation tests on a target program, so as to ensure normal running of the target program (Vanmali, Fig. 5, pg. 51, 1st partial par. “the original version of the tested program”); S2, obtaining a test model: using an ANN trainer to construct an initial model of an ANN, using input values and corresponding output values of the target program as training samples, training the initial model by means of the training samples (Vanmali, pg. 46, 4th complete par. “trained on the original software application”, pg. 48 1st par. “two vectors, one of the inputs and the second for the outputs”), stopping adding training samples when the initial model is in a convergent state, and defining a convergent model as the test model (Vanmali, pg. 50, 1st par. “error convergence”. See also Fig. 
5 and explanation in Section 3, 1st paragraph.); S3, selecting test output values: taking all the input values and inputting random input values of the target program into the test model, and computing the test output values according to the test model (Vanmali, Fig. 5, par. bridging pp. 50 and 51 “each training example is generated randomly … generates a corresponding output vector”. Fig. 6 and explanation on page 51 teaches that these randomly generated training examples are input to trained NN that generates program output); S4, selecting actual output values: taking and inputting the same input values in S3 into the target program, so as to obtain the actual output values (Vanmali, Fig. 6, and pg. 51, 1st full par. Set of test cases input to original Tested program that may contain faults and obtaining program output); S5, screening the input values: comparing the test output value and the actual output value that correspond to the same input value and obtaining a test value for testing whether the target program is correct (Vanmali, Fig. 6, pg. 51, 1st full par. “the comparison tool calculates the absolute distance between the winning ANN output and the corresponding value of the application output”); and obtaining a test value for testing whether the target program is correct (e.g. pg. 52, 1st par. “The corresponding value of the application output”); S6, storing a selected test value in a readable storage medium for later copying (Vanmali, Fig. 6, pg. 51, 1st full par. 
“each test case is provided as an input vector to a new version of the tested program”; those of ordinary skill in the art would have understood this to involve storing the value); and S7, transmitting the test value in S6 to the target program for running, and comparing running results with an actual functional requirement; determining that the target program has a defect under the condition that one of the running results does not satisfy the actual functional requirement; and if not, determining that the target program is a program satisfying a requirement (Vanmali, Fig. 1, pg. 46, 4th par. “the tested code is executed on the test data to yield outputs that are compared with those of the neural network. We assume here that the new versions do not change the existing functions, which means that the application is supposed to produce the same output for the same inputs. A comparison tool then makes the decision whether the output of the tested application is incorrect or correct based on the network activation functions.” See also Fig. 6 Comparison Tool “Decision: Output erroneous or output correct” and description of Comparison Tool on page 51). Vanmali does not explicitly disclose: S3 … taking all the input values under the condition that an input value range of the target program is less than one million, if not, taking and inputting at least one million random input values of the target program into the test model. Dassa, in the same field of endeavor of using and applying artificial neural network models, teaches taking and inputting at least one million random input values (Dassa, par. [0088] “models require at least 1.2 million images to successfully train a classifier”; paragraph [0095] teaches that images are selected randomly [i.e. random input values]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to input at least one million random input values (Dassa par. 
[0088] “at least 1.2 million images”). Those of ordinary skill in the art would have been motivated to do so to “successfully train” the model (see e.g. Dassa par. [0088]). Vanmali and Dassa do not explicitly teach: sorting deviations from largest to smallest under the condition that there is a deviation between the test output value and the actual output value, and selecting input values corresponding to top-ranked 50 to 150 deviations. Ross, in the same field of endeavor of using and applying artificial neural network models, teaches: sorting deviations from largest to smallest under the condition that there is a deviation between the test output value and the actual output value (Ross, par. [0027] “The total deviation errors of the learning vectors are automatically sorted into the segments … starting from the maximum value”), and selecting input values corresponding to top-ranked 50 to 150 deviations (Ross, par. [0026] “subdivided into … preferably 50, equidistant (value range) segments”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to sort the deviations and select the top 50 to 150 deviations. Those of ordinary skill in the art would have been motivated to do so as a means of finding the best results with which to perform the testing (see Ross, paragraphs [0026]-[0030]). Regarding claim 2, Vanmali as modified by Dassa as modified by Ross teach all of the limitations of claim 1 as cited above and Vanmali teaches wherein in S2, the initial model of the ANN is a back propagation (BP) neural network model and comprises an input layer, a hidden layer and an output layer (Vanmali, Fig. 4, pg. 49, 1st full par. 
“an input layer … one or more hidden layers … an output layer … Backpropagation is the standard training method”); the input values of the target program are used as the input layer; the output values obtained by running the input values through the target program are used as the output layer; and the number of nodes in each of the input layer and the output layer is 32 (Vanmali, Fig. 4, input signals go into input layer. See pg. 50, 3rd par. “the number of input and output units are fixed according to the data used to train the network”; it would at least have been obvious to use 32 nodes when the data dictated it, and doing so would have produced only the expected results, so that n and m are 32 in Fig. 4 and the numbers of nodes x_n and y_m are each 32). Regarding claim 7, Vanmali as modified by Dassa as modified by Ross teach all of the limitations of claim 1 as cited above and Vanmali teaches wherein in S2, in a process of training the BP neural network model, curve changes are observed by means of simulation software (the Examiner under BRI interprets the limitation “curve changes are observed by means of simulation software” as software “observes” the curve changes; this can be achieved without a display. Vanmali, see Fig. 8, pg. 46, 4th par. “the trained neural network becomes a simulated model of the software application”; note that this language does not in fact require a display of the curve(s) as in claim 9, instead merely requiring that the changes be “observed”, which can be achieved without a display.). Regarding claim 8, Vanmali as modified by Dassa as modified by Ross teach all of the limitations of claim 1 as cited above. Vanmali and Ross do not distinctly disclose wherein under the condition that a range of the input values in the training samples in S2 is less than one million, all the input values are used as the training samples, and if not, one million input values are randomly selected as the training samples. 
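The sampling rule recited in claim 8 can be sketched in a few lines. This is an illustration only; the function name `select_training_samples` and the reading of “range” as a count of distinct input values are our assumptions, not language from the application:

```python
import random

SAMPLE_CAP = 1_000_000  # one-million threshold recited in claims 1 and 8

def select_training_samples(all_inputs):
    """Sketch of the claim 8 rule: when fewer than one million input
    values are available (reading "range" as a count, an assumption),
    use them all; otherwise randomly select exactly one million."""
    if len(all_inputs) < SAMPLE_CAP:
        return list(all_inputs)
    # random.sample draws without replacement, matching "randomly selected"
    return random.sample(all_inputs, SAMPLE_CAP)
```

The cap mirrors Dassa's teaching that “at least 1.2 million images” may be needed to train successfully, which is the rationale the rejection relies on.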
Dassa, in the same field of endeavor, teaches wherein under the condition that a range of the input values in the training samples in S2 is less than one million, all the input values are used as the training samples, and if not, one million input values are randomly selected as the training samples (Dassa par. [0088] “at least 1.2 million images”). The motivation to combine for claim 8 is the same as the motivation to combine for claim 1. Claims 3-6 are rejected under 35 U.S.C. 103 as being unpatentable over Vanmali et al., (NPL-“Using a Neural Network in the Software Testing Process”), hereinafter Vanmali in view of Dassa et al., (U.S. Patent Publication No. 2019/0221001), hereinafter Dassa in view of Ross, (U.S. Patent Publication No. 2017/0082988), hereinafter Ross in view of Moskovitch et al. (U.S. Patent Publication No. 2007/0294768), hereinafter Moskovitch. Regarding claim 3, Vanmali as modified by Dassa as modified by Ross teach all of the limitations of claim 2 as cited above and Vanmali further teaches wherein in a training process, the BP neural network model uses an activation function: f(x) = 1/(1 + e^(−γ)) (Vanmali, pg. 49, equation (2); note that while the terms are different, the computation is the same). Vanmali, Dassa and Ross do not distinctly disclose: when f(x) is less than 0.5, an output value of an output layer node is 0; and when f(x) is greater than or equal to 0.5, the output value of the output layer node is 1. Moskovitch, in the same field of endeavor, teaches: when f(x) is less than 0.5, an output value of a node is 0; and when f(x) is greater than or equal to 0.5, the output value of the output layer node is 1 (Moskovitch, par. [0099] “rounding off the output of the hidden neurons, a binary pattern”; paragraphs [0040] and [0047]: a hidden neuron connects to an output neuron and propagates the output value. 
Also paragraph [0050]: “hidden neurons' outputs condense the information in the feature space into a smaller space, orthogonal to the feature space, in order to generate the correct outputs of the ANN model.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to round the output value of an output layer node. Those of ordinary skill in the art would have been motivated to do so because this condenses the output information into a smaller space (e.g. integer vs. float; binary pattern) in order to generate correct outputs of the ANN model (see Moskovitch, paragraph [0050]). Regarding claim 4, Vanmali as modified by Dassa as modified by Ross as modified by Moskovitch teach all of the limitations of claim 3 as cited above and Vanmali teaches wherein an output value of the hidden layer of the BP neural network model satisfies: h_m = f(Σ_{j=1}^{32} i_j·w(j, m)), m = 1, 2, …, k, wherein w(j, m) indicates a weight from an input layer node to a hidden layer node, and k is the number of nodes in the hidden layer (Vanmali pg. 49, equation (1); note that while the terms are different the computation appears to be the same. Pages 48-49 describe the variables in the equation). Regarding claim 5, Vanmali as modified by Dassa as modified by Ross as modified by Moskovitch teach all of the limitations of claim 4 as cited above and Vanmali teaches wherein an output value of the output layer of the BP neural network model satisfies: O_n = f(Σ_{m=1}^{k} h_m·w′(m, n)), n = 1, 2, …, 32, wherein w′(m, n) indicates a weight from a hidden layer node to an output layer node (Vanmali pg. 49, equation (1); note that while the terms are different the computation appears to be the same. Pages 48-49 describe the variables in the equation). 
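Read together, the claim 3 activation, the claim 4 hidden-layer formula, and the claim 5 output-layer formula describe an ordinary one-hidden-layer sigmoid forward pass with the output binarized at 0.5. A minimal sketch under that reading; the function names, toy weights, and small layer sizes are illustrative assumptions, not taken from the application or from Vanmali (the claims fix the input and output widths at 32):

```python
import math

def sigmoid(x):
    # Claim 3 activation: f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w, w_prime, k):
    """Claim 4: h_m = f(sum_j i_j * w(j, m)) for m = 1..k.
    Claim 5: O_n = f(sum_m h_m * w'(m, n)) for each output node n.
    Claim 3: each output node emits 0 when f(x) < 0.5, else 1."""
    n_in = len(inputs)
    hidden = [sigmoid(sum(inputs[j] * w[j][m] for j in range(n_in)))
              for m in range(k)]
    n_out = len(w_prime[0])
    raw = [sigmoid(sum(hidden[m] * w_prime[m][n] for m in range(k)))
           for n in range(n_out)]
    return [0 if o < 0.5 else 1 for o in raw]
```

For example, a 2-input, 2-hidden, 2-output toy network with identity-like input weights and opposing output weights maps [1.0, −1.0] to the binary pattern [1, 0].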
Regarding claim 6, Vanmali as modified by Dassa as modified by Ross as modified by Moskovitch teach all of the limitations of claim 5 as cited above and Vanmali teaches wherein in S5, the deviation satisfies d = Σ_{n=1}^{32} |O′_n − O_n|, wherein O_n indicates an output value of function implementation (Vanmali pg. 51, 1st full par. “calculates the absolute distance between the winning ANN output and the corresponding value of the application output”). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Vanmali et al., (NPL-“Using a Neural Network in the Software Testing Process”), hereinafter Vanmali in view of Dassa et al., (U.S. Patent Publication No. 2019/0221001), hereinafter Dassa in view of Ross, (U.S. Patent Publication No. 2017/0082988), hereinafter Ross in view of Xia et al. (U.S. Patent No. 10,810,491), hereinafter Xia. Regarding claim 9, Vanmali as modified by Dassa as modified by Ross teach all of the limitations of claim 1 as cited above and Vanmali further teaches a testing system provided by the ANN-based program testing method (Vanmali, title and Abstract) according to claim 1, comprising: a program basic testing unit configured to preliminarily test a function of a target program and whether the program itself is wrong (Vanmali, Fig. 5, pg. 51, 1st partial par. “the original version of the tested program”); an ANN trainer configured to store various training models, and obtain a convergent ANN model by means of training samples (Vanmali pg. 46, 4th par. “trained on the original software application”, pg. 48 1st par. “two vectors, one of the inputs and the second for the outputs”. Pg. 50, 1st par. “error convergence”. See also Fig. 5 and explanation in Section 3, 1st paragraph.); and a curve generated by an ANN trainer and indicating a convergent state of a model curve (Vanmali, Fig. 8 and description of Fig. 8 on page 58). 
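The claim 6 deviation feeds the screening step of claim 1: compute d = Σ|O′_n − O_n| for each input, sort deviations from largest to smallest, and keep the input values behind the top-ranked 50 to 150. A sketch under that reading; the helper names, the dictionary layout, and the default of 100 (chosen from inside the claimed 50-150 window) are illustrative assumptions:

```python
def deviation(actual, predicted):
    """Claim 6: d = sum over n of |O'_n - O_n| (the "absolute
    distance" between program output and ANN output)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted))

def screen_inputs(cases, top_k=100):
    """Claim 1 screening: rank cases by deviation, largest first,
    and keep the input values behind the top-ranked top_k deviations
    (the claim recites a window of 50 to 150)."""
    ranked = sorted(cases,
                    key=lambda c: deviation(c["actual"], c["predicted"]),
                    reverse=True)
    return [c["input"] for c in ranked[:top_k]]
```

The surviving inputs are the ones where the trained model and the program disagree most, which is why the rejection maps them to Ross's sorting of "total deviation errors ... starting from the maximum value".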
Vanmali and Dassa and Ross do not distinctly disclose: a curve display that is in signal connection with the ANN trainer and configured to display a convergent state of a model curve in real time; a memory configured to store the target program, a test model itself and data generated in a running process; a processor configured to run the target program and the test model; and a readable storage medium configured to transfer data in all devices. Xia, in the same field of endeavor, teaches: a curve display that is in signal connection with the ANN trainer and configured to display a convergent state of a model curve in real time (Xia, col. 13, lines 38-41 “an example loss function graph which may be displayed by a visualization tool for training iterations of a machine learning model”, col. 4, lines 34-39 “the model visualizations may be presented to clients in real-time”); a memory configured to store the target program, a test model itself and data generated in a running process (Xia, Fig. 12, system memory 9020, description of Fig. 12 in cols. 20-22); a processor configured to run the target program and the test model (Xia, Fig. 12, processor 9010, description of Fig. 12 in cols. 20-22); and a readable storage medium configured to transfer data in all devices (Xia, Fig. 12, system memory 9020 and Data 9026, description of Fig. 12 in cols. 20-22). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide a curve display configured to display a convergent state of a model curve in real time, a memory, a processor and a readable storage medium. Those of ordinary skill in the art would have been motivated to do so in order to, among other things, allow a user to monitor the training progress in real-time (see e.g. Xia col. 4, lines 28-59). Conclusion The prior art made of record in Form PTO-892 and not relied upon is considered pertinent to Applicants’ disclosure. 
Applicants are required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)). In the interests of compact prosecution, Applicants are invited to contact the examiner via electronic media pursuant to USPTO policy outlined in MPEP § 502.03. All electronic communication must be authorized in writing. Applicants may wish to file an Internet Communications Authorization Form PTO/SB/439. Applicants may wish to request an interview using the Interview Practice website: http://www.uspto.gov/patent/laws-and-regulations/interview-practice. Applicants are reminded that Internet e-mail may not be used for communication for matters under 35 U.S.C. § 132 or which otherwise require a signature. A reply to an Office action may NOT be communicated by Applicants to the USPTO via Internet e-mail. If such a reply is submitted by Applicants via Internet e-mail, a paper copy will be placed in the appropriate patent application file with an indication that the reply is NOT ENTERED. See MPEP § 502.03(II). Any inquiry concerning this communication or earlier communications from the examiner should be directed to INDRANIL CHOWDHURY whose telephone number is (571)272-0446. The examiner can normally be reached on M-Fri 9:30-7:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ashish Thomas, can be reached on 571-272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /INDRANIL CHOWDHURY/Examiner, Art Unit 2114 /ASHISH THOMAS/Supervisory Patent Examiner, Art Unit 2114

Prosecution Timeline

Jun 09, 2022
Application Filed
Feb 12, 2026
Non-Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561216
SAFETY DEVICE AND SAFETY METHOD
2y 5m to grant Granted Feb 24, 2026
Patent 12554570
Method, Apparatus and System for Locating Fault of Server, and Computer-readable Storage Medium
2y 5m to grant Granted Feb 17, 2026
Patent 12487894
FAULT TOLERANT ARCHITECTURE
2y 5m to grant Granted Dec 02, 2025
Patent 12461835
SYSTEM AND METHOD FOR INTEGRITY MONITORING OF HETEROGENEOUS SYSTEM-ON-A-CHIP (SoC) BASED SYSTEMS
2y 5m to grant Granted Nov 04, 2025
Patent 12443159
SUPPORT DEVICE MONITORING FUNCTION BLOCKS OF USER PROGRAM, NON-TRANSITORY STORAGE MEDIUM STORING SUPPORT PROGRAM THEREON, AND CONTROL SYSTEM
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
90%
Grant Probability
99%
With Interview (+14.7%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 145 resolved cases by this examiner. Grant probability derived from career allow rate.
