Prosecution Insights
Last updated: April 19, 2026

Application No. 18/753,303
SEMICONDUCTOR TEST RESULT ANALYSIS DEVICE, SEMICONDUCTOR TEST RESULT ANALYSIS METHOD, AND RECORDING MEDIUM

Office action: Non-Final (§101, §102, §112)
Filed: Jun 25, 2024
Examiner: MONSUR, NASIMA
Art Unit: 2858
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Advantest Corporation
OA Round: 1 (Non-Final)

Predictions
Grant probability: 78% (favorable)
Expected OA rounds: 1-2
Expected time to grant: 2y 10m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 78% (461 granted / 587 resolved), +10.5% vs Tech Center average (above average)
Interview lift: +26.4% (allowance among resolved cases with vs. without an interview)
Typical timeline: 2y 10m average prosecution; 50 applications currently pending
Career history: 637 total applications across all art units
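As a quick sanity check on the headline figures above, the career allow rate follows directly from the granted/resolved counts, and the pending count from the career totals. A minimal sketch, assuming "resolved" means granted plus otherwise disposed cases and is excluded from the pending count:

```python
# Sanity-check of the examiner statistics shown above (figures from the page).
granted = 461    # career applications granted
resolved = 587   # resolved cases; assumed to exclude pending applications
total = 637      # total applications across all art units

allow_rate = granted / resolved
print(f"allow rate: {allow_rate:.1%}")  # ~78.5%, shown on the page as 78%
print(f"pending: {total - resolved}")   # matches the 50 currently pending
```

The pending figure reconciling exactly (637 - 587 = 50) supports the reading that "resolved" and "pending" partition the career total.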

Statute-Specific Performance

§101: 3.7% (-36.3% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 24.8% (-15.2% vs TC avg)
§112: 16.3% (-23.7% vs TC avg)

Deltas are relative to a Tech Center average estimate. Based on career data from 587 resolved cases.
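Since each per-statute rate is stated together with its delta against the Tech Center average, the implied baselines can be recovered by simple subtraction. A small sketch (the rates and deltas are from the chart above; the baseline reconstruction is arithmetic, not a figure from the page):

```python
# Recover the implied Tech Center baseline for each statute:
# baseline = examiner rate - stated delta vs. TC average.
examiner = {"101": 3.7, "103": 50.1, "102": 24.8, "112": 16.3}       # %
delta = {"101": -36.3, "103": +10.1, "102": -15.2, "112": -23.7}     # vs TC avg

tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every statute implies the same ~40.0% baseline
```

All four statutes imply the same roughly 40% baseline, which suggests the chart may compare against a single overall average estimate rather than separate per-statute Tech Center figures.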

Office Action

Rejections: §101, §102, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 9/06/2024, 10/25/2024, 12/17/2025, and 12/29/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Status of the Claims

Claims 1-7, set forth in the preliminary amendment submitted 6/25/2024, form the basis of the present examination.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

Claim 6 in this application is given its broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

In claim 6 of this application, the limitation “a step of” is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because “a step of” is modified by the functional language “causing to acquire, causing to acquire, causing to generate and causing to output”. The present application (PGPUB No. US 20240393389 A1) discloses in Paragraph [0027]: “The data processor 20 includes a condition data acquirer 30, a test result acquirer 32, a decision tree generator 34, an estimator 36, a graph generator 38, and an analysis result outputter 40. Functions of the plurality of functional blocks included in the data processor 20 may be implemented in a computer program, and the computer program may be installed in a storage of the analysis device 16. A processor (CPU or the like) of the analysis device 16 may perform the functions of the plurality of functional blocks by reading and executing the computer program in a main memory.”

Claim 1 in this application is likewise given its broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art, and the broadest reasonable interpretation of a claim element is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. The three-prong test of MPEP § 2181, subsection I, and the rebuttable presumptions set forth above apply equally here.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a first acquirer”, “a second acquirer”, “a decision tree generator”, and “an outputter” in claim 1. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

In claim 1, the recited “a first acquirer” is coupled with the functional language “to acquire first data”, and the recited “a second acquirer” is coupled with the functional language “to acquire second data”.
Similarly, in claim 1, the recited “a decision tree generator” is coupled with the functional language “to generate a decision tree”, and the recited “an outputter” is coupled with the functional language “to output as an item”. All of these limitations in claim 1 have no structural meaning and are considered generic placeholders. As noted above, Paragraph [0027] of the present application (PGPUB No. US 20240393389 A1) discloses the corresponding structure: the functions of the functional blocks of the data processor 20 may be implemented in a computer program executed by a processor (CPU or the like) of the analysis device 16.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites “an outputter structured to output, as an item having a large influence on the test results, information of a feature amount having a relatively high importance in the decision tree.” The meaning of the language “as an item having a large influence on the test results” is unclear: it is not clear what constitutes a “large influence,” how the large influence is calculated, or how an item having a large influence on the test results is determined. Likewise, it is not clear how to determine information of a feature amount having a relatively high importance in the decision tree: which feature amount has the high importance, how the high importance is measured, or what “high importance” actually means. Therefore, the limitation is not clear, and clarification is required. For purposes of the present examination, the limitation “an item having a large influence on the test results, information of a feature amount having a relatively high importance in the decision tree” is construed to mean the result of the regression or decision tree. Clarification is required so that the scope of the claim is clear.

Claims 2-5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite by virtue of their dependence from claim 1. Claims 6-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for the same reasons stated above.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1

Step 1: Statutory Category? Yes. The claim recites a semiconductor test result analysis device comprising: … and, therefore, is an apparatus.

Step 2A - Prong 1: Judicial Exception Recited? Yes. The claim recites the limitations of “acquire first data of a plurality of items related to a test process of a plurality of semiconductor chips” and “to acquire second data indicating test results of the plurality of semiconductor chips in the test process,” which describe data gathering of a test result. The claim recites a mathematical concept; namely, acquiring the first data and the second data is the abstract idea. The claim further recites “to generate a decision tree with each item of the first data as a feature amount and the second data as a target value; and to output, as an item having a large influence on the test results, information of a feature amount having a relatively high importance in the decision tree,” which indicates mental steps that, under the broadest reasonable interpretation, cover performance of the limitation in the mind or by looking at the screen of an oscilloscope or a spectrum analyzer. For example, “generating” some data and then inputting it into the system, in the context of this claim, encompasses the user manually entering the values into the model with pen and paper or in a computer.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A - Prong 2: Integrated into a Practical Application? No. The claim recites the additional elements of a first acquirer, a second acquirer, and an outputter. The first and second acquirers perform mere data gathering, and the outputter performs the mental step; this amounts to insignificant extra-solution activity. The claim, as a whole, does not integrate the abstract idea into a practical application: the claim is not specific to any practical application, and no context is given for what is done with the information of the feature amount. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.

Step 2B: Claim Provides an Inventive Concept? No. As discussed with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in Step 2B: mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. The claim is ineligible.

Claim 6

Step 1: Statutory Category? Yes. The claim recites a semiconductor test result analysis method comprising: … and, therefore, is a method.

Step 2A - Prong 1: Judicial Exception Recited? Yes, for the same reasons given for claim 1: the acquiring limitations describe data gathering of a test result and are recited as the abstract idea, and the generating and outputting limitations are mental steps that, under the broadest reasonable interpretation, can be performed in the mind or by looking at the screen of an oscilloscope or a spectrum analyzer. Accordingly, the claim recites an abstract idea.

Step 2A - Prong 2: Integrated into a Practical Application? No. The claim recites the additional element of a computer, used for data gathering and for output, which is the mental step; this amounts to insignificant extra-solution activity. The claim, as a whole, does not integrate the abstract idea into a practical application: the claim is not specific to any practical application, and no context is given for what is done with the information of the feature amount.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.

Step 2B: Claim Provides an Inventive Concept? No. As discussed with respect to Step 2A, Prong Two, the additional element in the claim amounts to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in Step 2B: mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. The claim is ineligible.

Claim 7

Step 1: Statutory Category? Yes. The claim recites a non-transitory computer-readable recording medium encoded with a computer program … and, therefore, is an apparatus.

Step 2A - Prong 1: Judicial Exception Recited? Yes, for the same reasons given for claims 1 and 6: the acquiring limitations describe data gathering of a test result and are recited as the abstract idea, and the generating and outputting limitations are mental steps that, under the broadest reasonable interpretation, can be performed in the mind or by looking at the screen of an oscilloscope or a spectrum analyzer. Accordingly, the claim recites an abstract idea.

Step 2A - Prong 2: Integrated into a Practical Application? No. The claim recites the additional element of a computer, used just for data gathering and for output, which is the mental step; this amounts to insignificant extra-solution activity. The claim, as a whole, does not integrate the abstract idea into a practical application: the claim is not specific to any practical application, and no context is given for what is done with the information of the feature amount. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.

Step 2B: Claim Provides an Inventive Concept? No. As discussed with respect to Step 2A, Prong Two, the additional element in the claim amounts to no more than mere instructions to apply the exception using a generic computer component.
The same analysis applies here in Step 2B: mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. The claim is ineligible.

Dependent claims 2-5, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the recited limitations, considered both individually and as an ordered combination with the claim as a whole, fail to integrate the abstract idea into a practical application.

Claim 2 recites “wherein the first data includes an item whose value is a discrete value,” which is insignificant data gathering and therefore an abstract idea. The claim, as a whole, does not integrate the abstract idea into a practical application and is not specific to any practical application. The claim is ineligible.

Claim 3 recites “wherein the first data includes at least one of (1) a pushing amount of a probe card into each of the semiconductor chips, (2) a jig ID, (3) an ID of an operator in charge of a test, (4) an ID of a semiconductor test device, (5) a lot ID of each of the semiconductor chips, (6) coordinates of each of the semiconductor chips on a wafer, and (7) the number of tests,” which is insignificant data gathering and therefore an abstract idea. The claim, as a whole, does not integrate the abstract idea into a practical application and is not specific to any practical application. The claim is ineligible.

Claim 4 recites “wherein the second data indicates pass/fail for each category determined in advance as the test results, the decision tree generator generates a decision tree for each category, and the outputter outputs information of a feature amount having a relatively high importance in the decision tree for each category,” which is a mental step (determining the categories in advance and generating a decision tree for each category can be done mentally or with a generic computer component) and therefore an abstract idea. The claim, as a whole, does not integrate the abstract idea into a practical application and is not specific to any practical application. The claim is ineligible.

Claim 5 recites “wherein the outputter outputs information based on accuracy of classification by the decision tree together with the information of a feature amount having a relatively high importance in the decision tree,” which is insignificant data gathering and therefore an abstract idea. The claim, as a whole, does not integrate the abstract idea into a practical application and is not specific to any practical application. The claim is ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claim(s) 1-7 are rejected under 35 U.S.C. 102 (a) (1) as being anticipated by Shirai et al. (Hereinafter, “Shirai”) in the US patent Application Publication Number US 20020170022 A1. Regarding claim 1, Shirai teaches a semiconductor test result analysis device (A yield analysis of semiconductor data will be explained as an example. Particularly, as in the process data analysis, in the case where reference data for deciding measures for improving the quality and productivity from the analysis result is to be obtained; Paragraph [0002] Line 1-5; FIG. 5 is a block diagram showing one example of the functional configuration of a data analysis apparatus, realized by the computer system having the configuration shown in FIG. 4; Paragraph [0079] Line 1-4) comprising: a first acquirer [21] (unit 21 as the first acquirer) structured to acquire first data [41] (database 41 including a plurality of original data as the first data) of a plurality of items (This data analysis apparatus has an original data group 42 comprising database 41 including a plurality of original data, as shown in FIG. 5. This database 41 is built in the memory 4 of the computer system shown in FIG. 
4; Paragraph [0079] Line 4-8; a characteristic data distribution related to various measurement results and yield of chips in a wafer face and wafers in a lot; Paragraph [0004] Line 9-11) related to a test process of a plurality of semiconductor chips [Step S1 in Figure 9] (The data analysis apparatus comprises a unit 21 which quantitatively evaluates and extracts at least one data distribution characteristic existing in the original data group 42; Paragraph [0080] Line 1-4; As shown in FIG. 9, when this data analysis method is started, at first, data to be analyzed, for example, yield value and various measurement values, is selected and extracted from the original data group 42 (step S1); Paragraph [0095] Line 5-8); a second acquirer [22] (unit 22 as the second acquirer) structured to acquire second data [Data from unit 21 as the second data] indicating test results of the plurality of semiconductor chips in the test process [Step S2 in Figure 9] (A unit 22 which selects a characteristic amount to be analyzed, from the extracted at least one data distribution characteristic amount; Paragraph [0080] Line 4-6; Subsequently, processing for extracting at least one data distribution characteristic is performed with respect to the extracted data (step S2); Paragraph [0095] Line 8-11); a decision tree generator [23+24] (unit 23+24 as the decision tree generator) structured to generate a decision tree with each item of the first data as a feature amount and the second data as a target value [Step S3, S4] (a unit 23 which performs data mining by means of a regression tree analysis method or the like, by designating the data distribution characteristic amount selected as the object to be analyzed, as the target variable, to thereby extract a rule file 24 of characteristics and regularity latent in the data distribution, and an analysis tool group 27, such as a statistical analysis component 25 and a diagram creation component 26, which analyze the distribution 
characteristic of the original data, by using the extracted rule file 24; Paragraph [0080] Line 6-15; The above-described unit 23 which extracts the rule file 24 is to perform data mining with respect to the original data in the original data group 42, the data distribution characteristic extracted by the data distribution characteristic extraction unit 21, or the analysis result by means of the analysis tool group 27; Paragraph [0082] Line 1-6; The data distribution characteristic amount to be analyzed is then selected, and data mining such as the regression tree analysis is performed, designating it as the target variable (step S3). After the regression tree analysis has been completed with respect to all the data distribution characteristics extracted at step S2 (step S4); Paragraph [0096] Line 1-6); and an outputter [3] in Figure 4 structured to output (a display unit and a printer) (The extracted rule file 24 is stored in the memory 4, and output by the output unit 3, such as a display unit and a printer. Decision making 5 is performed based on the analysis result by means of the analysis tool group 27; Paragraph [0081] Line 4-7), as an item having a large influence on the test results (Decision making 5), information of a feature amount having a relatively high importance in the decision tree (Step [S5+S6]) (The analysis tool group 27 is to perform an analysis with respect to the original data in the original data group 42, the data distribution characteristic extracted by the data distribution characteristic extraction unit 21, or the output result of the analysis tool group 27. The analysis result by means of the analysis tool group 27 is fed back to the unit 22 which selects a data distribution characteristic amount to be analyzed and the original data group 42. 
The output of the data distribution characteristic extraction unit 21 is also fed back to the original data group 42; Paragraph [0082] Line 6-15; Further, it becomes possible to improve the accuracy (reliability) and the analysis efficiency of the regression tree analysis and perform more detailed analysis, by performing the regression tree analysis again, by applying grouping of explanatory variables having a high independence degree; Paragraph [0173] Line 1-6; the analysis result is output, and the engineer confirms it (step S5). Then, the engineer makes a decision based on the analysis result (step S6); Paragraph [0096] Line 6-9). Regarding claim 2, Shirai teaches a semiconductor test result analysis device, wherein the first data [41] includes an item whose value is a discrete value (The data distribution obtained by using various statistical analysis tools and table creation tools, to express the characteristic amount of the data distribution by a discrete value; Paragraph [0006] Line 1-4; Therefore, in the first embodiment, the variation pattern, such as yield value, with respect to the wafer number is used as the data distribution characteristic to perform analysis, over a plurality of lots. There is shown here an example in which multilateral analysis is performed with respect to a test substitution Nch transistor threshold voltage VT_N2 (hereinafter simply abbreviated as VT_N2), being an important electrical characteristic having a large influence on the property of the product; Paragraph [0097] Line 9-17; first data can be a wafer number which is a discrete value). 
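The regression-tree step the examiner maps above — yield as the target variable and a discrete item (e.g., an apparatus name) as the explanatory variable — can be illustrated with a minimal sketch. This is not code from either reference; the data values and names below are hypothetical, and the one-vs-rest variance-reduction split is a simplification of a full regression tree.

```python
# Illustrative sketch only: a single regression-tree split in the style
# Shirai describes, with yield as the target variable and a discrete
# apparatus name as the explanatory variable. All data are hypothetical.

from statistics import pvariance

def best_split(records):
    """Find the apparatus whose one-vs-rest split best reduces yield variance.

    records: list of (apparatus_name, yield_value) pairs.
    Returns (apparatus, variance_reduction) for the best split.
    """
    yields = [y for _, y in records]
    total_var = pvariance(yields)
    n = len(records)
    best = (None, 0.0)
    for app in {a for a, _ in records}:
        left = [y for a, y in records if a == app]
        right = [y for a, y in records if a != app]
        if not left or not right:
            continue
        # Weighted within-group variance after the split.
        within = (len(left) * pvariance(left) + len(right) * pvariance(right)) / n
        reduction = total_var - within
        if reduction > best[1]:
            best = (app, reduction)
    return best

data = [("PM1", 0.92), ("PM1", 0.90), ("PM2", 0.70),
        ("PM2", 0.72), ("PM3", 0.91), ("PM3", 0.89)]
apparatus, gain = best_split(data)
print(apparatus)  # PM2 — the machine whose split explains the most variance
```

A full regression tree repeats this split recursively on each resulting group, which is the mechanism Shirai's cited paragraphs describe at steps S3 and S4.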
Regarding claim 3, Shirai teaches a semiconductor test result analysis device, wherein the first data includes at least one of (1) a pushing amount of a probe card into each of the semiconductor chips, (2) a jig ID, (3) an ID of an operator in charge of a test, (4) an ID of a semiconductor test device, (5) a lot ID of each of the semiconductor chips, (6) coordinates of each of the semiconductor chips on a wafer, and (7) the number of tests (Figures 11, 12, 13, 14 show the number of tests) (FIG. 11 shows a histogram of VT_N2 data obtained from all wafers as a specific example; Paragraph [0028] Line 1-2; FIG. 12 shows a box and whisker chart in which all the VT_N2 data is displayed for each wafer number, as a specific example; Paragraph [0029] Line 1-3; FIG. 13 is a diagram showing the result of performing regression tree analysis, by designating a mean value of VT_N2 in each lot as the target variable, and an apparatus name used in each step as the explanatory variable, as a specific example; Paragraph [0030] Line 1-5; FIG. 14 is a diagram showing an example of a statistic list for evaluation with respect to the result of the regression tree analysis shown in FIG. 13; Paragraph [0031] Line 1-3; As one example of the data distribution characteristic amount, an in-lot data distribution characteristic in the yield analysis of the semiconductor data will be explained. FIG. 7 is a table showing information obtained by paying attention to a variation in attribute values of wafers. Here, the independent variable is a wafer number, and the dependent variable is the original data, such as yield, category yield or various measurement values; Paragraph [0085] Line 1-8). 
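The enumerated "first data" items of claim 3 amount to a per-chip record of test-process metadata. A hypothetical sketch of such a record (field names are illustrative only, not drawn from either reference) might look like:

```python
# Hypothetical sketch of the kind of per-chip "first data" record that
# claim 3 enumerates; field names are illustrative, not from the references.

from dataclasses import dataclass, asdict

@dataclass
class FirstDataRecord:
    probe_overdrive_um: float   # (1) pushing amount of the probe card
    jig_id: str                 # (2) jig ID
    operator_id: str            # (3) ID of the operator in charge of a test
    tester_id: str              # (4) ID of the semiconductor test device
    lot_id: str                 # (5) lot ID of the semiconductor chip
    wafer_xy: tuple             # (6) chip coordinates on the wafer
    test_count: int             # (7) the number of tests

rec = FirstDataRecord(75.0, "J-14", "OP-3", "T9300", "LOT-0815", (12, 34), 2)
print(sorted(asdict(rec)))  # the "plurality of items", as feature names
```

Each such field is a candidate explanatory variable (feature amount) for the decision tree of claim 1.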
Regarding claim 4, Shirai teaches a semiconductor test result analysis device, wherein the second data indicates pass/fail for each category determined in advance as the test results (As a result of performing detailed investigation actually with respect to the PM2 machine, based on the above-described analysis result, it has been found that a temperature distribution difference in the furnace of the PM2 machine is larger than the PM1 machine and the PM3 machine. Further, it has been found that it is due to the deterioration of a thermocouple, and the regular check method has been optimized. From the result of a regression tree analysis performed designating a lot yield as the target variable and the used apparatus name in each step as the explanatory variable, it has not been found that the PM2 machine is a factor causing a decrease in the yield. That is to say, the factor causing a yield decrease, which does not clearly appear in the yield value, is clarified according to the method of the present invention, in which a factor causing a significant difference in the standard deviation or the like of the electrical characteristic value in a lot is analyzed. 
In the first embodiment, edition of stored data, execution of the regression tree analysis and quantitative evaluation of the result by means of a peculiar method are executed automatically; Paragraph [0111] Line 1-11; From the result of a regression tree analysis performed designating a lot yield as the target variable and the used apparatus name in each step as the explanatory variable, it has not been found that the PM2 machine is a factor causing a decrease in the yield which is the pass or fail of the category of the test results), the decision tree generator generates a decision tree for each category (Since it has been found that a difference in the used apparatus in ST3 affects the yield independently of other explanatory variables, the regression tree analysis is performed separately, by dividing the wafers into a wafer group by means of the apparatus group in ST3 where the yield is defective (defective wafer group, using S3M2 and S3M3), and a wafer group by means of the apparatus group in ST3 where the yield is excellent (excellent wafer group, using S3M1 and S3M4). The regression tree diagram as the result thereof is shown in FIG. 54 and FIG. 55; Paragraph [0169] Line 1-10), and the outputter outputs information of a feature amount having a relatively high importance in the decision tree for each category (FIG. 54 is a regression tree diagram showing the result of the regression tree analysis using the defective wafer group, and constituted by node n1500 to node n1506. FIG. 55 is a regression tree diagram showing the result of the regression tree analysis using the excellent wafer group, and constituted by node n1600 to node n1606; Paragraph [0170] Line 1-6; Paragraph [0171]). 
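Claim 4's "decision tree for each category" — which the examiner maps to Shirai's separate regression trees for the defective and excellent wafer groups — can be sketched as fitting one classification stump per pass/fail category and reporting the most important item for each. The code below is a hypothetical illustration, not code from either reference; a single Gini-based split stands in for a full tree.

```python
# Illustrative sketch only (hypothetical data and names): one classification
# stump per pass/fail category, reporting the discrete item that best
# separates pass from fail -- the "relatively high importance" feature.

def gini(labels):
    """Gini impurity of a pass(1)/fail(0) label set."""
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def top_feature(rows, labels, features):
    """Return the feature whose best one-vs-rest split most reduces Gini."""
    base, n = gini(labels), len(labels)
    best_feat, best_gain = None, 0.0
    for f in features:
        for v in {r[f] for r in rows}:
            left = [l for r, l in zip(rows, labels) if r[f] == v]
            right = [l for r, l in zip(rows, labels) if r[f] != v]
            if not left or not right:
                continue
            gain = base - (len(left) * gini(left) + len(right) * gini(right)) / n
            if gain > best_gain:
                best_feat, best_gain = f, gain
    return best_feat

rows = [{"tester": "T1", "lot": "L1"}, {"tester": "T1", "lot": "L2"},
        {"tester": "T2", "lot": "L1"}, {"tester": "T2", "lot": "L2"}]
per_category = {"leakage": [1, 1, 0, 0],   # leakage failures track the tester
                "speed":   [1, 0, 1, 0]}   # speed failures track the lot
for cat, labels in per_category.items():
    print(cat, top_feature(rows, labels, ["tester", "lot"]))
```

Building a separate tree per category lets a factor that dominates one failure mode (here, the tester for leakage) surface even when it is invisible in another category, paralleling Shirai's defective-group/excellent-group analysis in FIGS. 54 and 55.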
Regarding claim 5, Shirai teaches a semiconductor test result analysis device, wherein the outputter outputs information based on accuracy of classification by the decision tree together with the information of a feature amount having a relatively high importance in the decision tree (FIG. 54 is a regression tree diagram showing the result of the regression tree analysis using the defective wafer group, and constituted by node n1500 to node n1506. FIG. 55 is a regression tree diagram showing the result of the regression tree analysis using the excellent wafer group, and constituted by node n1600 to node n1606; Paragraph [0170] Line 1-6; Since it has been found that a difference in the used apparatus in ST3 affects the yield independently of other explanatory variables, the regression tree analysis is performed separately, by dividing the wafers into a wafer group by means of the apparatus group in ST3 where the yield is defective (defective wafer group, using S3M2 and S3M3), and a wafer group by means of the apparatus group in ST3 where the yield is excellent (excellent wafer group, using S3M1 and S3M4). The regression tree diagram as the result thereof is shown in FIG. 54 and FIG. 55; Paragraph [0169] Line 1-10; The first branch in the defective wafer group in FIG. 54 is the same with the whole wafer group in FIG. 48. It is presumed that the yield is considerably affected by a wafer extremely defective compared to other wafers, taking into consideration that the defective wafer group in the uppermost layer in the regression tree diagram in FIG. 48 is few, such as n=39, which is one factor making the analysis difficult. In the excellent wafer group in FIG. 55, it is seen that a factor which can be hardly seen due to the defective apparatus in ST3 step has been newly found; Paragraph [0171] Line 1-10). Regarding claim 6, Shirai teaches a semiconductor test result analysis method (A yield analysis of semiconductor data will be explained as an example. 
Particularly, as in the process data analysis, in the case where reference data for deciding measures for improving the quality and productivity from the analysis result is to be obtained; Paragraph [0002] Line 1-5; FIG. 5 is a block diagram showing one example of the functional configuration of a data analysis apparatus, realized by the computer system having the configuration shown in FIG. 4; Paragraph [0079] Line 1-4) comprising: a step of causing a computer (FIG. 4 is a diagram showing one example of a hardware configuration of a computer system used for executing a data analysis method; Paragraph [0078] Line 1-3) to acquire first data [41] (database 41 including a plurality of original data as the first data) of a plurality of items (This data analysis apparatus has an original data group 42 comprising database 41 including a plurality of original data, as shown in FIG. 5. This database 41 is built in the memory 4 of the computer system shown in FIG. 4; Paragraph [0079] Line 4-8; a characteristic data distribution related to various measurement results and yield of chips in a wafer face and wafers in a lot; Paragraph [0004] Line 9-11) related to a test process of a plurality of semiconductor chips [Step S1 in Figure 9] (The data analysis apparatus comprises a unit 21 which quantitatively evaluates and extracts at least one data distribution characteristic existing in the original data group 42; Paragraph [0080] Line 1-4; As shown in FIG. 
9, when this data analysis method is started, at first, data to be analyzed, for example, yield value and various measurement values, is selected and extracted from the original data group 42 (step S1); Paragraph [0095] Line 5-8); a step of causing the computer to acquire second data [Data from unit 21 as the second data] indicating test results of the plurality of semiconductor chips in the test process [Step S2 in Figure 9] (A unit 22 which selects a characteristic amount to be analyzed, from the extracted at least one data distribution characteristic amount; Paragraph [0080] Line 4-6; Subsequently, processing for extracting at least one data distribution characteristic is performed with respect to the extracted data (step S2); Paragraph [0095] Line 8-11); a step of causing the computer to generate a decision tree with each item of the first data as a feature amount and the second data as a target value [Step S3, S4] (a unit 23 which performs data mining by means of a regression tree analysis method or the like, by designating the data distribution characteristic amount selected as the object to be analyzed, as the target variable, to thereby extract a rule file 24 of characteristics and regularity latent in the data distribution, and an analysis tool group 27, such as a statistical analysis component 25 and a diagram creation component 26, which analyze the distribution characteristic of the original data, by using the extracted rule file 24; Paragraph [0080] Line 6-15; The above-described unit 23 which extracts the rule file 24 is to perform data mining with respect to the original data in the original data group 42, the data distribution characteristic extracted by the data distribution characteristic extraction unit 21, or the analysis result by means of the analysis tool group 27; Paragraph [0082] Line 1-6; The data distribution characteristic amount to be analyzed is then selected, and data mining such as the regression tree analysis is performed, 
designating it as the target variable (step S3). After the regression tree analysis has been completed with respect to all the data distribution characteristics extracted at step S2 (step S4); Paragraph [0096] Line 1-6); and a step of causing the computer to output (a display unit and a printer) (The extracted rule file 24 is stored in the memory 4, and output by the output unit 3, such as a display unit and a printer. Decision making 5 is performed based on the analysis result by means of the analysis tool group 27; Paragraph [0081] Line 4-7), as an item having a large influence on the test results (Decision making 5), information of a feature amount having a relatively high importance in the decision tree (Step [S5+S6]) (The analysis tool group 27 is to perform an analysis with respect to the original data in the original data group 42, the data distribution characteristic extracted by the data distribution characteristic extraction unit 21, or the output result of the analysis tool group 27. The analysis result by means of the analysis tool group 27 is fed back to the unit 22 which selects a data distribution characteristic amount to be analyzed and the original data group 42. The output of the data distribution characteristic extraction unit 21 is also fed back to the original data group 42; Paragraph [0082] Line 6-15; Further, it becomes possible to improve the accuracy (reliability) and the analysis efficiency of the regression tree analysis and perform more detailed analysis, by performing the regression tree analysis again, by applying grouping of explanatory variables having a high independence degree; Paragraph [0173] Line 1-6; the analysis result is output, and the engineer confirms it (step S5). Then, the engineer makes a decision based on the analysis result (step S6); Paragraph [0096] Line 6-9). 
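The output step recited in claims 5 and 6 — reporting the influential item together with information based on the accuracy of the classification — can be sketched as follows. This is a hypothetical illustration only (names and data are invented, and a single-value stump stands in for a full decision tree).

```python
# Illustrative sketch (hypothetical names/data): output the most influential
# item together with the classification accuracy it achieves, as claims 5-6
# contemplate. A one-value stump stands in for a full decision tree.

def stump_accuracy(values, labels, v):
    """Accuracy of predicting pass/fail from the test 'value == v'."""
    pred = [1 if x == v else 0 for x in values]
    hits = sum(p == l for p, l in zip(pred, labels))
    # The complementary rule may be the better predictor; take the better one.
    return max(hits, len(labels) - hits) / len(labels)

def report(first_data, labels):
    """first_data: {item_name: per-chip values}; labels: per-chip pass/fail."""
    best = {item: max((stump_accuracy(vals, labels, v), v) for v in set(vals))
            for item, vals in first_data.items()}
    item = max(best, key=lambda k: best[k][0])
    acc, split_value = best[item]
    return item, split_value, acc

first = {"tester": ["T1", "T1", "T2", "T2"], "jig": ["J1", "J2", "J1", "J2"]}
item, value, acc = report(first, [0, 0, 1, 1])  # results track the tester
print(item, acc)
```

Reporting the accuracy alongside the chosen item lets the engineer judge how much weight to give the identified factor before acting on it, which mirrors the confirm-then-decide steps S5 and S6 cited above.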
Regarding claim 7, Shirai teaches a non-transitory computer-readable recording medium encoded with a computer program for causing a computer to realize (a data analysis apparatus, a data analysis method, and computer products; Paragraph [0001] Line 1-2; A yield analysis of semiconductor data will be explained as an example. Particularly, as in the process data analysis, in the case where reference data for deciding measures for improving the quality and productivity from the analysis result is to be obtained; Paragraph [0002] Line 1-5; FIG. 5 is a block diagram showing one example of the functional configuration of a data analysis apparatus, realized by the computer system having the configuration shown in FIG. 4; Paragraph [0079] Line 1-4) comprising: a function of acquiring first data [41] (database 41 including a plurality of original data as the first data) of a plurality of items (This data analysis apparatus has an original data group 42 comprising database 41 including a plurality of original data, as shown in FIG. 5. This database 41 is built in the memory 4 of the computer system shown in FIG. 4; Paragraph [0079] Line 4-8; a characteristic data distribution related to various measurement results and yield of chips in a wafer face and wafers in a lot; Paragraph [0004] Line 9-11) related to a test process of a plurality of semiconductor chips [Step S1 in Figure 9] (The data analysis apparatus comprises a unit 21 which quantitatively evaluates and extracts at least one data distribution characteristic existing in the original data group 42; Paragraph [0080] Line 1-4; As shown in FIG. 
9, when this data analysis method is started, at first, data to be analyzed, for example, yield value and various measurement values, is selected and extracted from the original data group 42 (step S1); Paragraph [0095] Line 5-8); a function of acquiring second data [Data from unit 21 as the second data] indicating test results of the plurality of semiconductor chips in the test process [Step S2 in Figure 9] (A unit 22 which selects a characteristic amount to be analyzed, from the extracted at least one data distribution characteristic amount; Paragraph [0080] Line 4-6; Subsequently, processing for extracting at least one data distribution characteristic is performed with respect to the extracted data (step S2); Paragraph [0095] Line 8-11); a function of generating a decision tree with each item of the first data as a feature amount and the second data as a target value [Step S3, S4] (a unit 23 which performs data mining by means of a regression tree analysis method or the like, by designating the data distribution characteristic amount selected as the object to be analyzed, as the target variable, to thereby extract a rule file 24 of characteristics and regularity latent in the data distribution, and an analysis tool group 27, such as a statistical analysis component 25 and a diagram creation component 26, which analyze the distribution characteristic of the original data, by using the extracted rule file 24; Paragraph [0080] Line 6-15; The above-described unit 23 which extracts the rule file 24 is to perform data mining with respect to the original data in the original data group 42, the data distribution characteristic extracted by the data distribution characteristic extraction unit 21, or the analysis result by means of the analysis tool group 27; Paragraph [0082] Line 1-6; The data distribution characteristic amount to be analyzed is then selected, and data mining such as the regression tree analysis is performed, designating it as the target variable (step 
S3). After the regression tree analysis has been completed with respect to all the data distribution characteristics extracted at step S2 (step S4); Paragraph [0096] Line 1-6); and a function of outputting [3 in Figure 4] (a display unit and a printer) (The extracted rule file 24 is stored in the memory 4, and output by the output unit 3, such as a display unit and a printer. Decision making 5 is performed based on the analysis result by means of the analysis tool group 27; Paragraph [0081] Line 4-7), as an item having a large influence on the test results (Decision making 5), information of a feature amount having a relatively high importance in the decision tree (Step [S5+S6]) (The analysis tool group 27 is to perform an analysis with respect to the original data in the original data group 42, the data distribution characteristic extracted by the data distribution characteristic extraction unit 21, or the output result of the analysis tool group 27. The analysis result by means of the analysis tool group 27 is fed back to the unit 22 which selects a data distribution characteristic amount to be analyzed and the original data group 42. The output of the data distribution characteristic extraction unit 21 is also fed back to the original data group 42; Paragraph [0082] Line 6-15; Further, it becomes possible to improve the accuracy (reliability) and the analysis efficiency of the regression tree analysis and perform more detailed analysis, by performing the regression tree analysis again, by applying grouping of explanatory variables having a high independence degree; Paragraph [0173] Line 1-6; the analysis result is output, and the engineer confirms it (step S5). Then, the engineer makes a decision based on the analysis result (step S6); Paragraph [0096] Line 6-9).

Claim(s) 1-7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang et al. (hereinafter, "Wang") in U.S. Patent Application Publication No. US 20080281566 A1. 
Regarding claim 1, Wang teaches a semiconductor test result analysis device (a system and method for managing a semiconductor manufacturing process and, more particularly, to a system and method for managing yield in a semiconductor fabrication process; Paragraph [0002] Line 1-4; FIG. 1 is a block diagram illustrating an example of a yield management system 10; Paragraph [0047] Line 1-2; FIG. 2 is block diagram illustrating more details of the yield manager 30; Paragraph [0049] Line 1-2) comprising: a first acquirer [32] (data processor 32 as the first acquirer) structured to acquire first data of a plurality of items (A yield data set typically has hundreds of different variables. These variables may include both a response variable, Y, and prediction variables, X.sub.1, X.sub.2, . . . , X.sub.m, that may be of a numerical type or a categorical type; Paragraph [0055] Line 5-8) related to a test process of a plurality of semiconductor chips (In particular, the yield manager 30 may receive a data set containing various types of semiconductor process data, including continuous/numerical data, such as temperature or pressure, and categorical data, such as the lot number of the particular semiconductor device or integrated circuit; Paragraph [0049] Line 3-8; Considered in more detail, as shown in FIG. 2, the data set may be input to a data processor 32 that may optimize and validate the data and remove incomplete data records; Paragraph [0050] Line 1-3); a second acquirer [32] structured to acquire second data indicating test results of the plurality of semiconductor chips in the test process (Considered in more detail, as shown in FIG. 
2, the data set may be input to a data processor 32 that may optimize and validate the data and remove incomplete data records; Paragraph [0050] Line 1-3); a decision tree generator (model builder 34 as the decision tree generator) structured to generate a decision tree with each item of the first data as a feature amount and the second data as a target value (The yield management system 10 in accordance with the various embodiments of the present invention preferably uses a decision-tree-based method to build a yield model. In particular, the method partitions a data set, D, into sub-regions. The decision tree structure may be a hierarchical way to describe a partition of D. It is constructed by successively splitting nodes (as described below), starting with the root node (D), until some stopping criteria are met and the node is declared a terminal node. For each terminal node, a value or a class is assigned to all the cases within the node; Paragraph [0081] Line 1-10; The output from the data processor 32 may be fed into a model builder 34, so that a model of the data set may be automatically generated by the yield manager 30. Once the model builder 34 has generated a model, the user may preferably enter model modifications into the model builder to modify the model based on, for example, past experience with the particular data set; Paragraph [0050] Line 4-10); and an outputter (tool library 36 as the outputter) structured to output, as an item having a large influence on the test results, information of a feature amount having a relatively high importance in the decision tree (Once any user modifications have been incorporated into the model, a final model is output and is preferably made available to a statistical tool library 36. The library 36 may contain one or more different statistical tools that may be used to analyze the final model. 
The output of the yield manager 30 may be, for example, a listing of one or more factors/parameters that contributed to the yield of the devices that generated the data set being analyzed. As described above, the yield manager 30 is able to simultaneously identify multiple yield factors; Paragraph [0050] Line 10-19). Regarding claim 2, Wang teaches a semiconductor test result analysis device, wherein the first data includes an item whose value is a discrete value (FIG. 2 is block diagram illustrating more details of the yield manager 30 in accordance with one embodiment of the present invention. In particular, the yield manager 30 may receive a data set containing various types of semiconductor process data, including continuous/numerical data, such as temperature or pressure, and categorical data, such as the lot number of the particular semiconductor device or integrated circuit; Paragraph [0049] Line 1-8; On the other hand, a variable is a categorical type variable if its values are of a set of finite elements not necessarily having any natural ordering. For example, a categorical variable may take values in a set of {MachineA, MachineB, or MachineC} or values of (Lot1, Lot2, or Lot3); Paragraph [0055] Line 11-16; {MachineA, MachineB, or MachineC} or values of (Lot1, Lot2, or Lot3)-The values are discrete values). Regarding claim 3, Wang teaches a semiconductor test result analysis device, wherein the first data includes at least one of (1) a pushing amount of a probe card into each of the semiconductor chips, (2) a jig ID, (3) an ID of an operator in charge of a test, (4) an ID of a semiconductor test device, (5) a lot ID of each of the semiconductor chips, (6) coordinates of each of the semiconductor chips on a wafer, and (7) the number of tests (On the other hand, a variable is a categorical type variable if its values are of a set of finite elements not necessarily having any natural ordering. 
For example, a categorical variable may take values in a set of {MachineA, MachineB, or MachineC} or values of (Lot1, Lot2, or Lot3); Paragraph [0055] Line 11-16; FIG. 2 is block diagram illustrating more details of the yield manager 30 in accordance with one embodiment of the present invention. In particular, the yield manager 30 may receive a data set containing various types of semiconductor process data, including continuous/numerical data, such as temperature or pressure, and categorical data, such as the lot number of the particular semiconductor device or integrated circuit. The yield manager 30 may process the data set, generate a model, apply one or more statistical tools to the model and data set, and generate an output that may indicate, for example, the key factors/parameters that affected the yield of the devices that generated the current data set; Paragraph [0049] Line 1-8). Regarding claim 4, Wang teaches a semiconductor test result analysis device, wherein the second data indicates pass/fail for each category determined in advance as the test results (FIG. 5 shows an example of how missing data are preferably treated using data processing disclosed in U.S. Pat. No. 6,470,229 B1 with the MS value set to 6 and in accordance with data processing employing tiered splitting in accordance with the method of the present invention. As shown in FIG. 5, the original data set 56 contains three prediction variables (P.sub.1, P.sub.2, and P.sub.3) and one response variable (Response) and 13 cases; Paragraph [0062] Line 1-8; With the MS value set to 6, no "bad" columns appear in FIG. 5, because the parameter having the most missing values is P.sub.2, which has only 5 missing values. 
However, cases 2, 4, 5, 7, 9, 10, 11, 12, and 13 are "bad" rows; Paragraph [0063] Line 1-4; good or bad as the pass or fail), the decision tree generator generates a decision tree for each category (The yield management system 10 in accordance with the various embodiments of the present invention preferably uses a decision-tree-based method to build a yield model. In particular, the method partitions a data set, D, into sub-regions. The decision tree structure may be a hierarchical way to describe a partition of D. It is constructed by successively splitting nodes (as described below), starting with the root node (D), until some stopping criteria are met and the node is declared a terminal node. For each terminal node, a value or a class is assigned to all the cases within the node; Paragraph [0081] Line 1-10), and the outputter outputs information of a feature amount having a relatively high importance in the decision tree for each category (Once any user modifications have been incorporated into the model, a final model is output and is preferably made available to a statistical tool library 36. The library 36 may contain one or more different statistical tools that may be used to analyze the final model. The output of the yield manager 30 may be, for example, a listing of one or more factors/parameters that contributed to the yield of the devices that generated the data set being analyzed. As described above, the yield manager 30 is able to simultaneously identify multiple yield factors; Paragraph [0050] Line 10-19). Regarding claim 5, Wang teaches a semiconductor test result analysis device, wherein the outputter outputs information based on accuracy of classification by the decision tree together with the information of a feature amount having a relatively high importance in the decision tree (FIG. 14 shows an example of applying a range split rule. 
In the situation in which the best results are obtained when a parameter is in the middle of its range, the range split rule generates a more accurate model than a traditional decision tree binary split rule of the form X<a. In the example shown in FIG. 14, the split rule for the continuous variable ETEST52 is 0.8789.ltoreq.ETEST52<1.0292 and generates a range split 94. At the same time, by spanning the two extremes of the range of the variable, the range split rule enhances the significance of the variable and makes its impact easier to discern; Paragraph [0096] Line 1-11). Regarding claim 6, Wang teaches a semiconductor test result analysis method (a system and method for managing a semiconductor manufacturing process and, more particularly, to a system and method for managing yield in a semiconductor fabrication process; Paragraph [0002] Line 1-4; FIG. 1 is a block diagram illustrating an example of a yield management system 10; Paragraph [0047] Line 1-2; FIG. 2 is block diagram illustrating more details of the yield manager 30; Paragraph [0049] Line 1-2) comprising: a step of causing a computer (data processor 32) to acquire first data of a plurality of items (A yield data set typically has hundreds of different variables. These variables may include both a response variable, Y, and prediction variables, X.sub.1, X.sub.2, . . . , X.sub.m, that may be of a numerical type or a categorical type; Paragraph [0055] Line 5-8) related to a test process of a plurality of semiconductor chips (In particular, the yield manager 30 may receive a data set containing various types of semiconductor process data, including continuous/numerical data, such as temperature or pressure, and categorical data, such as the lot number of the particular semiconductor device or integrated circuit; Paragraph [0049] Line 3-8; Considered in more detail, as shown in FIG. 
2, the data set may be input to a data processor 32 that may optimize and validate the data and remove incomplete data records; Paragraph [0050] Line 1-3);

a step of causing the computer to acquire second data indicating test results of the plurality of semiconductor chips in the test process (Considered in more detail, as shown in FIG. 2, the data set may be input to a data processor 32 that may optimize and validate the data and remove incomplete data records; Paragraph [0050] Line 1-3);

a step of causing the computer to generate a decision tree with each item of the first data as a feature amount and the second data as a target value (The yield management system 10 in accordance with the various embodiments of the present invention preferably uses a decision-tree-based method to build a yield model. In particular, the method partitions a data set, D, into sub-regions. The decision tree structure may be a hierarchical way to describe a partition of D. It is constructed by successively splitting nodes (as described below), starting with the root node (D), until some stopping criteria are met and the node is declared a terminal node. For each terminal node, a value or a class is assigned to all the cases within the node; Paragraph [0081] Line 1-10; The output from the data processor 32 may be fed into a model builder 34, so that a model of the data set may be automatically generated by the yield manager 30. Once the model builder 34 has generated a model, the user may preferably enter model modifications into the model builder to modify the model based on, for example, past experience with the particular data set; Paragraph [0050] Line 4-10); and

a step of causing the computer to output, as an item having a large influence on the test results, information of a feature amount having a relatively high importance in the decision tree (Once any user modifications have been incorporated into the model, a final model is output and is preferably made available to a statistical tool library 36. The library 36 may contain one or more different statistical tools that may be used to analyze the final model. The output of the yield manager 30 may be, for example, a listing of one or more factors/parameters that contributed to the yield of the devices that generated the data set being analyzed. As described above, the yield manager 30 is able to simultaneously identify multiple yield factors; Paragraph [0050] Line 10-19).

Regarding claim 7, Wang teaches a non-transitory computer-readable recording medium encoded with a computer program for causing a computer (data processor) to realize (a system and method for managing a semiconductor manufacturing process and, more particularly, to a system and method for managing yield in a semiconductor fabrication process; Paragraph [0002] Line 1-4; FIG. 1 is a block diagram illustrating an example of a yield management system 10; Paragraph [0047] Line 1-2; FIG. 2 is block diagram illustrating more details of the yield manager 30; Paragraph [0049] Line 1-2)

a function of acquiring first data of a plurality of items (A yield data set typically has hundreds of different variables. These variables may include both a response variable, Y, and prediction variables, X.sub.1, X.sub.2, . . . , X.sub.m, that may be of a numerical type or a categorical type; Paragraph [0055] Line 5-8) related to a test process of a plurality of semiconductor chips (In particular, the yield manager 30 may receive a data set containing various types of semiconductor process data, including continuous/numerical data, such as temperature or pressure, and categorical data, such as the lot number of the particular semiconductor device or integrated circuit; Paragraph [0049] Line 3-8; Considered in more detail, as shown in FIG. 2, the data set may be input to a data processor 32 that may optimize and validate the data and remove incomplete data records; Paragraph [0050] Line 1-3);

a function of acquiring second data indicating test results of the plurality of semiconductor chips in the test process (Considered in more detail, as shown in FIG. 2, the data set may be input to a data processor 32 that may optimize and validate the data and remove incomplete data records; Paragraph [0050] Line 1-3);

a function of generating a decision tree with each item of the first data as a feature amount and the second data as a target value (The yield management system 10 in accordance with the various embodiments of the present invention preferably uses a decision-tree-based method to build a yield model. In particular, the method partitions a data set, D, into sub-regions. The decision tree structure may be a hierarchical way to describe a partition of D. It is constructed by successively splitting nodes (as described below), starting with the root node (D), until some stopping criteria are met and the node is declared a terminal node. For each terminal node, a value or a class is assigned to all the cases within the node; Paragraph [0081] Line 1-10; The output from the data processor 32 may be fed into a model builder 34, so that a model of the data set may be automatically generated by the yield manager 30. Once the model builder 34 has generated a model, the user may preferably enter model modifications into the model builder to modify the model based on, for example, past experience with the particular data set; Paragraph [0050] Line 4-10); and

a function of outputting, as an item having a large influence on the test results, information of a feature amount having a relatively high importance in the decision tree (Once any user modifications have been incorporated into the model, a final model is output and is preferably made available to a statistical tool library 36. The library 36 may contain one or more different statistical tools that may be used to analyze the final model. The output of the yield manager 30 may be, for example, a listing of one or more factors/parameters that contributed to the yield of the devices that generated the data set being analyzed. As described above, the yield manager 30 is able to simultaneously identify multiple yield factors; Paragraph [0050] Line 10-19).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: LAI et al. (US 20090306925 A1) discloses, "SYSTEMS AND METHODS FOR TESTING INTEGRATED CIRCUIT DEVICES-[0010] Systems and methods for testing integrated circuit devices.

[0020] Reference is made to FIG. 1, in which a known test system 100 for performing functional testing of integrated circuit devices under test 102 is illustrated. The test system 100 comprises a processor 104 coupled to one or more testing modules 106 via a communications channel 108.

[0021] The processor 104 generates test data, which is transmitted to the testing modules 106 using the communications channel 108. Where the integrated circuit devices under test 102 are memory devices or memory modules, the test data may comprise test vector patterns to be used in testing the storage elements of the integrated circuit devices under test 102.
[0028] Each testing module 106 comprises one or more integrated circuit devices under test 102, a reference integrated circuit device 110 and a comparator 112. In the example shown in FIG. 1, each testing module 106 comprises two integrated circuit devices under test 102. The integrated circuit devices under test 102 may comprise memory devices, memory modules comprising more than one memory device, application-specific integrated circuit (ASIC) devices, ASIC modules comprising more than one ASIC device, or other integrated circuit devices. In a given test system, all of the integrated circuit devices under test 102 may be of the same type. The reference integrated circuit devices 110 are typically also of the same type as the integrated circuit devices under test 102, and will typically have already been thoroughly tested and confirmed as "good" devices.

[0033] Reference is now made to FIG. 2, in which a known test system 200 for performing application-specific testing of integrated circuit devices under test 202 (e.g. memory devices) is illustrated. Test system 200 comprises an application system 204 coupled to one or more testing modules 206. In the example shown in FIG. 2, the test system 200 comprises two testing modules 206."

However, Lai does not disclose a decision tree generator structured to generate a decision tree with each item of the first data as a feature amount and the second data as a target value; and an outputter structured to output, as an item having a large influence on the test results, information of a feature amount having a relatively high importance in the decision tree.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NASIMA MONSUR whose telephone number is (571)272-8497. The examiner can normally be reached 10:00 am-6:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eman Alkafawi, can be reached at (571) 272-4448. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NASIMA MONSUR/
Primary Examiner, Art Unit 2858
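The technique the examiner maps onto the claims above (fit a decision tree with each test-process item as a feature amount and the test results as the target value, then report the items with relatively high importance) can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic data; the feature names and data are invented for the example and are not from the application or the Wang reference.

```python
# Illustrative sketch only: synthetic first data (process items) and
# second data (test results); feature names are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# First data: a plurality of items recorded in the test process.
X = rng.normal(size=(200, 3))
feature_names = ["temperature", "pressure", "lot_code"]

# Second data: per-chip test results (1 = pass, 0 = fail), constructed
# here to depend mainly on the first item so it should rank highest.
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

# Generate a decision tree with the items as features, results as target.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Output items by importance in the tree; high-importance items are the
# candidates for "large influence on the test results".
ranked = sorted(zip(feature_names, tree.feature_importances_),
                key=lambda kv: -kv[1])
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")
```

In this toy setup the "temperature" column drives the pass/fail outcome, so it should dominate the importance ranking; on real test data the same ranking step surfaces which process items the tree actually split on.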

Prosecution Timeline

Jun 25, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601786
Intelligent MV and HV Circuit Breaker Testing & Diagnosing Unit
2y 5m to grant Granted Apr 14, 2026
Patent 12591009
DETECTING THE OPEN OR CLOSED STATE OF A CIRCUIT BREAKER
2y 5m to grant Granted Mar 31, 2026
Patent 12584770
H-BRIDGE PUSH-PULL EXCITATION CIRCUIT FOR A TRANSFORMER-BASED MEASURING DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12584876
Continuous Whole-Home Water Quality Analyzer
2y 5m to grant Granted Mar 24, 2026
Patent 12578375
ARC FAULT DETECTION USING CURRENT SIGNAL DEMODULATION, OUTLIER ELIMINATION, AND AUTOCORRELATION ENERGY THRESHOLDS
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
99%
With Interview (+26.4%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 587 resolved cases by this examiner. Grant probability derived from career allow rate.
