DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot in view of the new ground of rejection.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 7, 8, 11, 12 and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ikeda et al (WO 2021/181863 A1).
Claim 1: Ikeda et al disclose an evaluation apparatus comprising:
a result acquisition unit (e.g., item 305) which acquires test results of each of a first plurality of devices under test (e.g., [0029] or [1-3-3], where the result acquisition unit obtains test results from storage or directly from the test apparatus for a plurality of devices);
an estimation unit which, based on the test results of each of the first plurality of devices under test, estimates an increasing degree of failure during retesting of the first plurality of devices under test that were retested (e.g., Ikeda et al disclose a calculation unit (307) which calculates a “reproducibility” of test failures. Reproducibility is defined as the likelihood that a device which failed an initial test will also fail a retest. See Paragraph [0031]: “Reproducibility is an index indicating how often the same test result is reproduced... For example, it indicates how often a fail test result in the first test is reproduced in a retest.” A high reproducibility means the “degree of failure during retesting” is high), wherein it is determined to retest each of the plurality of devices under test if an initial test had been a failure and using a predetermined determination criterion for each of the first plurality of devices under test (e.g., Ikeda et al disclose a first determination unit (315) which decides whether to retest a device that failed an initial test based on a predetermined criterion (e.g., comparing reproducibility to a threshold). See Paragraph [0046]: “The first determination unit 315 determines whether or not to perform a retest on the DUT 100 that failed the test.” See also Paragraph [0047]: “The first determination unit 315 may use a threshold value determined by the threshold value determination unit 313 in the determination.”); and
an evaluation unit which evaluates the predetermined determination criterion based on the increasing degree of failure (e.g., Ikeda et al disclose a feedback mechanism to evaluate and update the determination criterion. The second determination unit (319) uses retest results and the accuracy of the first unit’s judgments to decide whether to retrain the model (i.e., update the criterion). See Paragraph [0088]: “The second determination unit 319 determines whether to retrain the learning model 371 using the retest results and the determination results of the first determination unit 315.” See also Paragraph [0093]: “According to the determination device 300A... the accuracy of reproducibility can be further improved.”).
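For illustration of the claim mapping above, the threshold-based retest decision attributed to Ikeda's first determination unit can be sketched in pseudocode-style Python. This sketch is not code from the reference; all function and variable names are hypothetical, and the threshold comparison direction follows the discussion of paragraphs [0046]-[0047] (a highly reproducible failure is not worth retesting).

```python
# Illustrative sketch (hypothetical names) of a reproducibility-based
# retest decision: a device that failed its initial test is retested
# only when the estimated reproducibility of the failure is below a
# predetermined threshold.

def reproducibility(initial_fails: int, retest_fails: int) -> float:
    """Fraction of initial failures that also failed on retest."""
    if initial_fails == 0:
        return 0.0
    return retest_fails / initial_fails

def should_retest(initial_result_failed: bool, repro: float, threshold: float) -> bool:
    """Retest a failing device only when its failure is unlikely to reproduce."""
    if not initial_result_failed:
        return False          # devices that passed are never retested
    return repro < threshold  # highly reproducible failures are skipped

# Example: 80 of 100 initially failing devices also failed on retest.
r = reproducibility(initial_fails=100, retest_fails=80)
print(should_retest(True, r, threshold=0.5))  # 0.8 >= 0.5, so no retest -> False
```

Under this reading, the "predetermined determination criterion" of the claim corresponds to the threshold comparison inside `should_retest`.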
As per claims 12 and 15, the claimed features are rejected similarly to claim 1 above.
Claim 7: Ikeda et al disclose a non-transitory computer readable medium recorded thereon a test interface program executed by a test apparatus which tests a current device under test, wherein the test interface program, when executed by the test apparatus, causes the test apparatus to:
in response to receiving a test instruction for instructing to conduct a test on the current device under test, invoke a test program executed by the test apparatus to instruct to conduct the test on the current device under test (e.g. Ikeda et al disclose the standard operation of the test apparatus. Figure 3 (Step S11) shows the test apparatus performing a test on DUTs. Invoking a test program in response to an instruction is inherent in the operation of any automated test equipment);
when an initial test result of the current device under test is a failure, determine whether to retest the current device under test using a predetermined determination criterion (e.g. As noted in 1[c], Ikeda et al disclose the first determination unit performing this exact function. See Paragraph [0046]); and
if it is determined not to retest the current device under test, then skip a retest of the current device under test (e.g., Ikeda’s method inherently skips retesting for devices where the criterion is not met. Figure 3, Step S21, shows the process moving to the next wafer if no retest is to be performed),
wherein the non-transitory computer readable medium causes the test apparatus to acquire test results of a plurality of devices under test that were tested prior to the current device under test (e.g., As noted in 1[a], Ikeda’s result acquisition unit acquires test results from a storage unit. These results are from devices tested prior to the current DUT. See Paragraph [0030]),
wherein the non-transitory computer readable medium estimates an increasing degree of failure during retesting of the plurality of devices under test that were retested, wherein the plurality of devices under test that were retested were determined to be retested based on a predetermined determination criterion (e.g., As noted in 1[b], Ikeda’s calculation unit estimates “reproducibility” (the increasing degree of failure). See Paragraph [0031]. The phrase “determined to be retested based on a predetermined determination criterion” describes the standard operation of the first determination unit), and
wherein the non-transitory computer readable medium evaluates the predetermined determination criterion based on the estimated increasing degree of failure (e.g., As noted in 1[d], Ikeda’s second determination unit evaluates whether to update the model/criterion based on the results (which reflect the degree of failure). See Paragraph [0086]).
Claim 8: Ikeda et al disclose the non-transitory computer readable medium according to claim 7, wherein the test interface program further causes the test apparatus to: in response to a determination to retest the current device under test, invoke the test program to instruct to conduct the retest on the current device under test (e.g., Ikeda et al explicitly teach this in Fig. 3, step S23, which is performed when the judgment at S21 is "Yes" (i.e., to retest) (¶[0066])).
Claim 11: Ikeda et al disclose the non-transitory computer readable medium according to claim 8, wherein the test interface program causes the test apparatus to: in determining whether to retest the current device under test, in response to the initial test result of the current device under test being a failure, determine whether to retest the current device under test using the predetermined determination criterion without receiving a test instruction to conduct the retest on the current device under test (e.g., This claim specifies that the determination is triggered automatically upon a failure, without receiving a new test instruction. This is exactly the process taught in Ikeda et al. As shown in Fig. 3, after the initial test (S11) and result acquisition (S15), the "first judgment process" (S19) is performed automatically. There is no disclosure of waiting for or requiring a subsequent test instruction to begin this determination. The system is designed to proceed with the determination as the next logical step after a failure is detected (¶[0064])).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 2-6, 9, 10, 13-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ikeda et al (WO 2021/181863 A1).
Claim 2: Ikeda et al teach the evaluation apparatus according to claim 1, but fail to teach that the evaluation unit comprises an increasing-degree-of-failure determination unit which determines whether the increasing degree of failure is within a predetermined allowable range. However, Ikeda et al teach a second determination unit that evaluates whether certain conditions are met to decide if the learning model should be re-learned (paragraphs 0088-0091). This unit calculates values for False Negatives (FN) and True Positives (TP) and uses them in a yield condition: TP/(FN+TP) ≥ TH1. Maintaining a specific yield is the operational equivalent of keeping the "increasing degree of failure" (which is directly related to the FN count) within an allowable range. A person of ordinary skill in the art would recognize that evaluating a yield condition inherently requires determining if the failure rate is acceptable. Therefore, it would have been obvious to a POSITA, before the effective filing date of the claimed invention, to configure a system to update its decision-making model (the "determination criterion") when performance metrics (the "increasing degree of failure") fall outside a desired range, in order to maintain or improve system efficacy.
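The yield condition discussed above can be illustrated with a short sketch (hypothetical names, not code from the reference): re-learning of the model is triggered when the ratio TP/(FN+TP) drops below the threshold TH1, i.e., when the false-negative count grows large enough to push the yield out of the allowable range.

```python
# Illustrative sketch (hypothetical names) of the yield condition
# TP/(FN+TP) >= TH1 used to decide whether the learning model
# (the "determination criterion") should be re-learned. A rising
# false-negative (FN) count drives the ratio down; falling below
# TH1 triggers re-learning.

def yield_condition_met(tp: int, fn: int, th1: float) -> bool:
    """True when the yield ratio TP/(FN+TP) meets or exceeds TH1."""
    total = tp + fn
    if total == 0:
        return True  # no observations yet; nothing indicates a problem
    return tp / total >= th1

def needs_relearning(tp: int, fn: int, th1: float) -> bool:
    """Re-learn the model when the yield condition is not met."""
    return not yield_condition_met(tp, fn, th1)

print(needs_relearning(tp=95, fn=5, th1=0.9))   # ratio 0.95 >= 0.9 -> False
print(needs_relearning(tp=80, fn=20, th1=0.9))  # ratio 0.80 < 0.9  -> True
```

On this reading, keeping the ratio at or above TH1 is the "predetermined allowable range," and a failing check is what prompts adjustment of the criterion.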
As per claims 13 and 16, the claimed features are rejected similarly to claim 2 above.
Claim 3: Ikeda et al teach the evaluation apparatus according to claim 2, but fail to teach that the evaluation unit comprises an adjustment unit which, in response to the increasing degree of failure being outside the predetermined allowable range, adjusts the predetermined determination criterion such that the increasing degree of failure becomes within the predetermined allowable range. However, Ikeda et al teach that if the conditions evaluated by the second determination unit are not met, the system triggers a re-learning of the learning model (paragraph 0092). The learning model is the core of the "predetermined determination criterion." Re-learning the model is a form of adjusting the criterion to improve future performance, which is the exact function of the claimed "adjustment unit." Therefore, it would have been obvious to a POSITA, before the effective filing date of the claimed invention, to configure a system to update its decision-making model (the "determination criterion") when performance metrics (the "increasing degree of failure") fall outside a desired range, in order to maintain or improve system efficacy.
As per claims 14 and 17, the claimed features are rejected similarly to claim 3 above.
Claim 4: Ikeda et al teach the evaluation apparatus according to claim 3, but fail to teach that the test result of each of the first plurality of devices under test comprises an item test result of each of a plurality of test items; and wherein the adjustment unit determines, as the predetermined determination criterion, among the plurality of test items, a set of at least one test item or item test result for which a device under test of the plurality of devices under test is not to be retested even if the item test result is a failure. However, Ikeda et al's system is fundamentally based on analyzing tests consisting of a plurality of items and using the results of those specific items to make retest decisions (paragraph 0023). Furthermore, the first determination unit makes retest decisions on an item-by-item basis. It decides not to retest a device if the "reproducibility" of a specific failed test item is higher than a threshold (paragraphs 0076-0077). This directly teaches defining a set of test items (those with high reproducibility) for which a retest is skipped. Therefore, given that Ikeda et al already make retest decisions per test item based on reproducibility, it would have been obvious to a POSITA, before the effective filing date of the claimed invention, to define the system's operational policy (the "determination criterion") as a set or list of such high-reproducibility items for which retesting is not performed.
As per claim 18, the claimed features are rejected similarly to claim 4 above.
Claim 5: Ikeda et al teach the evaluation apparatus according to claim 1, wherein for each of the first plurality of devices under test, the estimation unit estimates the increasing degree of failure based on a test result of the initial test, and a test result of a retest for when the test result of the initial test had been a failure. For instance, Ikeda et al explicitly and necessarily use the results of an initial test and a retest to calculate its core metrics, including the False Negative (FN) count (paragraph 0087). The FN value (the number of devices not retested that would have passed retest) is a direct measure of the "increasing degree of failure." This requires data from both the initial test and the retest.
As per claim 19, the claimed features are rejected similarly to claim 5 above.
Claim 6: Ikeda et al teach the evaluation apparatus according to claim 5, but fail to teach: a retest determination unit which, for each of a second plurality of devices under test to be tested after the first plurality of devices under test, determines whether to retest a device under test of which a test result had been a failure using the predetermined determination criterion. However, Ikeda et al describe a process where an initial set of devices is used to establish and potentially update the learning model (see the process described in paragraphs 0083-0085). The system then uses this model (the "determination criterion") for ongoing testing of subsequent devices to determine whether to retest them (paragraphs 0060-0068). Therefore, it would have been an obvious and conventional application of machine learning in a testing system to a POSITA, before the effective filing date of the claimed invention, to use a model trained on an initial dataset (the first plurality of devices) to make decisions on new, subsequent data (the second plurality of devices).
As per claim 20, the claimed features are rejected similarly to claim 6 above.
Claim 9: Ikeda et al teach the non-transitory computer readable medium according to claim 8, but fail to teach that the test interface program further causes the test apparatus to: in response to skipping the retest of the current device under test, respond to a source of the test instruction with a fact that the initial test result of the test on the current device under test was a failure. However, Ikeda et al teach that the test results, including pass/fail data, are extracted, stored, and supplied between the test apparatus (200) and the judgment apparatus (300) (e.g., ¶[0027], [0029], [0048]). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to communicate the final failure status back to the system or operator that initiated the test instruction, as this is a conventional and necessary function for reporting results in an automated test system.
Claim 10: Ikeda et al teach the non-transitory computer readable medium according to claim 8, but fail to teach that the test interface program causes the test apparatus to: in determining whether to retest the current device under test, when the initial test result of the current device under test was a failure, and in response to receiving a test instruction to conduct the retest on the current device under test, determine whether to retest the current device under test using the predetermined determination criterion. However, Ikeda et al describe a process flow where the initial test and the retest decision are part of a continuous process (Fig. 3). Therefore, it would have been an obvious design choice to a POSITA, before the effective filing date of the claimed invention, to architect the system such that a failure result triggers a new, internal or external, "test instruction" to initiate the determination and potential retest sequence. The core inventive concept of using a predetermined criterion (reproducibility) to decide on retesting remains disclosed by Ikeda et al.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GUERRIER MERANT whose telephone number is (571)270-1066. The examiner can normally be reached Monday-Friday 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mark Featherstone can be reached at 571-270-3750. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GUERRIER MERANT/Primary Examiner, Art Unit 2111 3/12/2026