Prosecution Insights
Last updated: April 19, 2026
Application No. 17/321,263

SYSTEM AND METHOD FOR AUTOMATICALLY IDENTIFYING DEFECT-BASED TEST COVERAGE GAPS IN SEMICONDUCTOR DEVICES

Non-Final OA §103
Filed: May 14, 2021
Examiner: SULTANA, DILARA
Art Unit: 2858
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: KLA Corporation
OA Round: 5 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 5-6
To Grant: 2y 9m
With Interview: 95%

Examiner Intelligence

Grants 81% (above average)
Career Allow Rate: 81% (101 granted / 125 resolved; +12.8% vs TC avg)
Interview Lift: +14.2% (moderate lift in resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline; 43 currently pending)
Total Applications: 168 across all art units (career history)
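The headline figures above are simple ratios; a minimal sketch (Python, values copied from this page; the page does not state its exact lift formula, so treating the lift as the with-interview rate minus the career rate is an assumption) reproduces them:

```python
# Career allow rate: 101 granted out of 125 resolved applications.
granted, resolved = 101, 125
allow_rate = round(granted / resolved * 100, 1)  # 80.8, displayed as 81%

# Interview lift: assumed here to be the allow rate in resolved cases
# with an interview (95%) minus the overall career allow rate.
with_interview = 95.0
interview_lift = round(with_interview - allow_rate, 1)  # 14.2
```

The numbers are internally consistent: 95.0 − 80.8 matches the +14.2% lift shown above.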

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 22.7% (-17.3% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Tech Center average is an estimate • Based on career data from 125 resolved cases
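Each per-statute delta above appears to be measured against the same Tech Center baseline; a minimal sketch (assuming delta = examiner rate minus TC average) recovers that baseline from the figures as shown:

```python
# Examiner's share of resolved cases by rejection statute,
# paired with the stated delta vs the Tech Center average.
stats = {
    "101": (10.9, -29.1),
    "103": (53.6, +13.6),
    "102": (22.7, -17.3),
    "112": (10.0, -30.0),
}

# Implied Tech Center average for each statute: rate - delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
```

All four statutes imply the same 40.0% figure, consistent with a single estimated TC baseline rather than per-statute averages.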

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

Both information disclosure statements (IDS) were submitted on 10/02/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Amendment

This Office action is in response to the amendments/arguments submitted by the Applicant(s) on 12/12/2025.

Response to Arguments

Status of the Claims: Claims 1-8, 11-13, 15-24, 27-29, and 31-36 are pending. Claims 1, 17, and 33 are amended. Claims 9-10, 14, 25-26, and 30 are cancelled.

Rejections Under 35 U.S.C. §103: Applicant's arguments (see remarks, pages 11-15, filed 12/12/2025) with respect to the rejection of the claims under 35 U.S.C. 103 have been considered and are moot because the amendment has necessitated new grounds of rejection. The new rejections are set forth below.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-9, 11-26, and 28-33 are rejected under 35 U.S.C. 103 as being unpatentable over Price et al. (US 2018/0275189 A1, hereinafter Price, IDS reference) in view of Teplinsky et al.
(US 2018/0348291 A1, hereinafter Teplinsky) and in further view of Rathert et al. (US 2019/0295908 A1, hereinafter Rathert, IDS reference).

Regarding claim 1, Price teaches A system comprising: a controller communicatively coupled to one or more semiconductor fabrication subsystems and one or more test tool subsystems, the controller including one or more processors configured to execute program instructions causing the one or more processors to (Price, [0006], "The system may include one or more inspection tools configured to perform inline inspection and metrology on a plurality of wafers at a plurality of critical steps during wafer fabrication. The system may also include one or more processors in communication with the one or more inspection tools"): determine, via a characterization subsystem, a plurality of apparent killer defects on one or more semiconductor devices based on characterization measurements of the one or more semiconductor devices acquired by the one or more semiconductor fabrication subsystems, wherein the one or more semiconductor devices include a plurality of semiconductor dies (Price, [0006], "aggregate inspection results obtained from the one or more inspection tools to obtain a plurality of aggregated inspection results for the plurality of wafers; identify one or more statistical outliers among the plurality of wafers at least partially based on the plurality of aggregated inspection results obtained for the plurality of wafers"); determine, via a testing subsystem (Price, Figure 5, inspection system 500), at least one semiconductor die of the plurality of semiconductor dies which passes at least one test of a plurality of tests based on test measurements acquired by the one or more test tool subsystems (Price, [0016], "Embodiments of the present disclosure are directed to methods and systems for inline part average testing and latent reliability defect recognition and/or detection.
Latent reliability defects refer to defects present in a device from manufacturing that pass initial quality tests"); correlate, via a correlation subsystem, the characterization measurements with the test measurements to determine at least one apparent killer defect of the plurality of apparent killer defects on the at least one semiconductor die of the plurality of semiconductor dies which passes the at least one test of the plurality of tests (Price, [0033], Figure 5, "The processors 504 may process the data received from the burn-in reliability testing tools 510 and/or field returns 512 along with the data received from the inline defect inspection tools 502 to correlate the inline inspection data with the data received from the burn-in reliability testing tools 510 and/or field returns 512. The purpose of performing this data correlation is to help identify which inspection steps, defect types, defect sizes, and/or metrology parameters are most likely to provide actionable data from which statistical outliers could be most effectively screened"); and

Although Price teaches detection of outliers, Price is silent on identifying the specific location of the killer defect. However, Teplinsky teaches determine, via a localization subsystem, one or more gap areas on the one or more semiconductor devices for defect-based test coverage based on the at least one apparent killer defect on the at least one semiconductor die of the plurality of semiconductor dies which passes the at least one test of the plurality of tests (Teplinsky, Figures 10A-10F, [0149]: Embodiments of the present disclosure relate to Geographic Part Average Test (GPAT) and Nearest Neighbor Residual (NNR). As an example, one outlier on a given area of one wafer can be determined, for example, one bad die on wafer edge compared to the rest of wafer edge. This can also be referred to as a bad die in a good neighborhood (BDGN).
By examining the system test results, particularly a subset of the test results, not only the system, but the die can be determined as an outlier; [0156], "various aspects of the present disclosure as described more fully herein may utilize system test data for systems incorporating specific components to identify components as outliers"; [0199], "FIG. 10F is a diagram illustrating component outlier detection 1060 according to various aspects of the present disclosure. The location 1065 associated with components d1 and d4 has been determined as an outlier location for the set of substrates based on the aggregated system test data illustrated in FIG. 10E").

Teplinsky further teaches report one or more reports based on the one or more gap areas in defect-based test coverage on the one or more semiconductor devices to provide a baseline of test coverage gaps across a plurality of semiconductor devices (Teplinsky, [0243], Figure 14, "Identifying the common characteristic may include determining whether the data subset includes a sufficient amount of data to perform a desired analysis. The common characteristic may indicate performance higher than a baseline. Alternatively, the common characteristic may indicate performance lower than a standard. The baseline may be based on a third data subset corresponding to one or more components from the set of electronic components"); wherein the one or more reports include at least one metric to adjust at least one of the one or more semiconductor fabrication subsystems or the one or more test tool subsystems to mitigate the one or more gap areas on the one or more semiconductor devices for defect-based test coverage (Teplinsky, Figure 15, [0248], At block 1550, an outlier in the data subset may be identified. Alternatively, or additionally, the outlier may be a local outlier or a global outlier. At block 1560, the information about the outlier may be communicated.
For example, the information about the outlier may be communicated to a system manufacturer or a component manufacturer. The information about the outlier may identify a board and/or a component to which the outlier corresponds) to mitigate the one or more gap areas on the one or more semiconductor devices for defect-based test coverage, wherein the mitigate the one or more gap areas includes targeting one or more care areas (Teplinsky, [0129], "it is possible to implement semiconductor quality solutions on incoming material, which are tuned by system performance, which can be measured at system (e.g., board) test. Additionally, it is possible to set up outlier detection to evaluate only the tests that impact the system performance. Re-binning or holding of wafers/lots before assembly can be implemented").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's system to incorporate Teplinsky's outlier location detection method through utilization and integration of component test data and system test data, with the benefits of improvements in yield/quality and increased testing efficiency (Teplinsky, [0037]).

Both Price and Teplinsky teach identifying outlier defects. Price and Teplinsky are silent on wherein the characterization subsystem applies one or more processes to separate defects which are apparent killer defects from defects that are not killer defects. However, Rathert teaches wherein the characterization subsystem applies one or more processes to separate defects which are apparent killer defects from defects that are not killer defects (Rathert, Figure 2, steps 210-212, [0117], "the method 200 includes a step 210 of identifying one or more at-risk (reads on killer defect) dies based on comparisons of manufacturing fingerprints of the one or more at-risk dies with the at least a portion of the manufacturing fingerprint of the failed die.
In another embodiment, the method 200 includes a step 212 of recalling devices including the one or more additional dies. [0118] In another embodiment, the step 210 includes identifying a subset of the semiconductor dies 106 having manufacturing fingerprints 104 similar to that of a failed die 108 (e.g., at-risk dies 110) based on one or more selected similarity metrics. In this regard, the at-risk dies 110 are predicted to be susceptible to failure under similar operating conditions as the failed die 108. Accordingly, a targeted recall may be initiated to include only the at-risk dies 110. [0119] The step 210 may include comparing manufacturing fingerprints 104 using any analysis technique known in the art such as, but not limited to, classification, sorting, clustering, outlier detection, signal response metrology, regression analysis, instance-based analysis"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's method of identifying outlier defects to incorporate Rathert's method and characterization analysis steps to identify the at-risk dies among the failed dies, with the benefits of improvements in yield/quality and increased testing efficiency (Rathert, [0117]-[0122]).
Regarding claim 2, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 1. Price further teaches the one or more processors further configured to execute the program instructions causing the one or more processors to: receive, via the characterization subsystem, the characterization measurements acquired by the one or more semiconductor fabrication subsystems during fabrication of the one or more semiconductor devices (Price, Figure 5, [0032]-[0033], "The inspection system 500 may include one or more inline defect inspection tools 502 communicatively coupled to one or more computer processors 504. The processors 504 may also be configured to receive the inspection results obtained by the inline defect inspection tools 502 and aggregate the inspection results to obtain a plurality of aggregated results for the plurality of wafers").

Regarding claim 3, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 1. Price further teaches wherein the one or more characterization subsystems include one or more characterization tools configured to perform at least one of one or more inline defect inspection processes or one or more metrology processes (Price, Figure 5, [0032], "The inspection system 500 may include one or more inline defect inspection tools 502 communicatively coupled to one or more computer processors 504. The inline defect inspection tool(s) 502 may be configured to inspect a plurality of layers of a plurality of wafers 506 utilizing various inline inspection techniques").

Regarding claim 4, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 1. Price is silent on wherein the characterization subsystem is configured to employ at least one of an advanced deep learning technique or a machine learning technique to determine the plurality of apparent killer defects on the one or more semiconductor devices based on the characterization measurements.
However, Teplinsky teaches wherein the characterization subsystem is configured to employ at least one of an advanced deep learning technique or a machine learning technique to determine the plurality of apparent killer defects on the one or more semiconductor devices based on the characterization measurements (Teplinsky, [0219], "The system may use an aggregated historical data or may employ a more sophisticated machine learning techniques to optimize its outlier finding performance"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's system to incorporate Teplinsky's outlier location detection machine learning technique, with the benefit of more sophisticated machine learning techniques to optimize outlier finding performance (Teplinsky, [0219]).

Regarding claim 5, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 1. Price further teaches the one or more processors further configured to execute the program instructions causing the one or more processors to: receive, via the testing subsystem, the test measurements for the one or more semiconductor devices acquired by the one or more test tool subsystems (Price, Figure 5, [0032]-[0033], "The inspection system 500 may include one or more inline defect inspection tools 502 communicatively coupled to one or more computer processors 504. The processors 504 may also be configured to receive the inspection results obtained by the inline defect inspection tools 502 and aggregate the inspection results to obtain a plurality of aggregated results for the plurality of wafers").
Regarding claim 6, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 1. Price further teaches wherein the one or more test tool subsystems include one or more test tools configured to perform at least one of one or more electrical wafer sort processes, unit probe processes, class probe processes, or final test processes (Price, [0027], "As shown in FIG. 3, a stacked defect map of a wafer collected from multiple critical process steps may be analyzed against a latent defect probability histogram 300. As previously described, a latent defect probability may be calculated for each die on the wafer based on the number of stacked defects, modified by size, rough bin classification, die location, care area, layer step weighting, and/or other types of defect measurement").

Regarding claim 7, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 1. Price further teaches wherein the at least one semiconductor die of the plurality of semiconductor dies passes all tests of the plurality of tests (Price, [0016], "Embodiments of the present disclosure are directed to methods and systems for inline part average testing and latent reliability defect recognition and/or detection. Latent reliability defects refer to defects present in a device from manufacturing that pass initial quality tests but cause premature failures when activated in their working environment").

Regarding claim 8, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 1. Price is silent on identifying the specific location of the killer defect. However, Teplinsky teaches wherein the localization subsystem analyzes at least one of a location or a frequency of the at least one apparent killer defect of the plurality of apparent killer defects on the at least one semiconductor die of the plurality of semiconductor dies which passes the at least one test of the plurality of tests (Teplinsky, Figures 10A-10F,
[0149]: Embodiments of the present disclosure relate to Geographic Part Average Test (GPAT) and Nearest Neighbor Residual (NNR). As an example, one outlier on a given area of one wafer can be determined, for example, one bad die on wafer edge compared to the rest of wafer edge. This can also be referred to as a bad die in a good neighborhood (BDGN). By examining the system test results, particularly a subset of the test results, not only the system, but the die can be determined as an outlier; [0156], "various aspects of the present disclosure as described more fully herein may utilize system test data for systems incorporating specific components to identify components as outliers"; [0199], "FIG. 10F is a diagram illustrating component outlier detection 1060 according to various aspects of the present disclosure. The location 1065 associated with components d1 and d4 has been determined as an outlier location for the set of substrates based on the aggregated system test data illustrated in FIG. 10E"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's system to incorporate Teplinsky's outlier location detection method through utilization and integration of component test data and system test data, with the benefits of reduced product returns, improvements in yield/quality, and increased testing efficiency (Teplinsky, [0037]).

Regarding claim 11, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 1. Price is silent on wherein the one or more reports include at least one chart configured to evaluate the one or more gap areas on the one or more semiconductor devices for defect-based test coverage.
However, Teplinsky teaches wherein the one or more reports include at least one chart configured to evaluate the one or more gap areas on the one or more semiconductor devices for defect-based test coverage (Teplinsky, [0199], Figures 10A-10F and Figure 11, "FIG. 10F is a diagram illustrating component outlier detection 1060 according to various aspects of the present disclosure. The location 1065 associated with components d1 and d4 has been determined as an outlier location for the set of substrates based on the aggregated system test data illustrated in FIG. 10E. [0200] Although a single set of system test data is illustrated in FIGS. 10C and 10D, other system test data can be utilized. Table 24 shows an example of test system data corresponding to the board IDs"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's system to incorporate Teplinsky's outlier location detection method through utilization and integration of component test data and system test data, with the benefits of reduced product returns, improvements in yield/quality, and increased testing efficiency (Teplinsky, [0037]).

Regarding claim 12, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 11. Price further teaches wherein the at least one chart is configured to compare a test cover gap trend over a range of time for a particular semiconductor device design (Price, [0028], "FIG. 4 is a flow diagram depicting an embodiment of an inline part average testing (I-PAT) method 400 configured in accordance with the present disclosure. As shown in FIG. 4, a wafer fabricator may choose to identify starting material which will ultimately undergo burn-in reliability testing (step 402). The wafer fabricator may also choose to perform inspection and metrology on all wafers at each critical step (e.g., 100% inspection and metrology) during the fabrication process (step 404).
It is contemplated that inspection recipes may be utilized to help find all potential defects. In some embodiments, raw defect data may be included and recorded for subsequent analysis using one or more database or data storage devices by the analytics system 210; a determination can be made that one or more test protocols associated with the system testing can be eliminated, reduced, increased, added, or the like." It is understood that the I-PAT average method is performed over time.

Regarding claim 13, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 11. Price further teaches wherein the at least one chart is configured to compare a test cover gap for multiple semiconductor device designs (Price, [0006], "aggregate inspection results obtained from the one or more inspection tools to obtain a plurality of aggregated inspection results for the plurality of wafers; identify one or more statistical outliers among the plurality of wafers at least partially based on the plurality of aggregated inspection results obtained for the plurality of wafers").

Regarding claim 15, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 14. Price further teaches the one or more processors further configured to execute the program instructions causing the one or more processors to: generate one or more control signals based on the one or more adjustments to at least one of the fabrications, characterizing, or testing of the semiconductor devices (Price, [0033], Figures 4-5, "The processors 504 may also be configured to receive the inspection results obtained by the inline defect inspection tools 502 and aggregate the inspection results to obtain a plurality of aggregated results for the plurality of wafers.
The processors 504 may process the data received from the burn-in reliability testing tools 510 and/or field returns 512 along with the data received from the inline defect inspection tools 502 to correlate the inline inspection data with the data received from the burn-in reliability testing tools 510 and/or field returns 512. The purpose of performing this data correlation is to help identify which inspection steps, defect types, defect sizes, and/or metrology parameters are most likely to provide actionable data from which statistical outliers could be most effectively screened, help disqualify/eliminate low correlation inspection steps and may help improve the overall correlation, which may in turn reduce overkill and underkill. In some embodiments, wafers/dies that have been identified to have latent reliability issues may be reported on one or more display devices. Alternatively, wafers/dies that have been identified to have latent reliability issues may be identified or physically marked as defective or otherwise segregated for further evaluation, repurposing or rejected from entering the supply chain to help reduce the number of wafers/dies that may fail prematurely in the field.").

Regarding claim 16, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 15. Price further teaches wherein the one or more control signals are configured to target select inline defect part average testing (I-PAT) care areas on the semiconductor devices (Price, Figures 4-5, [0017], "Methods and systems configured in accordance with the present disclosure may utilize inline part average (I-PAT) testing to provide latent reliability defect recognition. Part average testing (PAT) is a statistically based method for removing parts with abnormal characteristics (outliers) from the semiconductors supplied per guidelines established").
Regarding claim 17, Price teaches A method (Price, [0016], "Embodiments of the present disclosure are directed to methods") comprising: determining, via a characterization subsystem of a controller, a plurality of apparent killer defects on one or more semiconductor devices based on characterization measurements of the one or more semiconductor devices acquired by one or more semiconductor fabrication subsystems, wherein the one or more semiconductor devices include a plurality of semiconductor dies (Price, [0006], "aggregate inspection results obtained from the one or more inspection tools to obtain a plurality of aggregated inspection results for the plurality of wafers; identify one or more statistical outliers among the plurality of wafers at least partially based on the plurality of aggregated inspection results obtained for the plurality of wafers"); determining, via a testing subsystem of the controller, at least one semiconductor die of the plurality of semiconductor dies which passes at least one test of a plurality of tests based on test measurements acquired by one or more test tool subsystems (Price, [0016], "Embodiments of the present disclosure are directed to methods and systems for inline part average testing and latent reliability defect recognition and/or detection.
Latent reliability defects refer to defects present in a device from manufacturing that pass initial quality tests"); correlating, via a correlation subsystem of the controller, the characterization measurements with the test measurements to determine at least one apparent killer defect of the plurality of apparent killer defects on the at least one semiconductor die of the plurality of semiconductor dies which passes the at least one test of the plurality of tests (Price, [0033], Figure 5, "The processors 504 may process the data received from the burn-in reliability testing tools 510 and/or field returns 512 along with the data received from the inline defect inspection tools 502 to correlate the inline inspection data with the data received from the burn-in reliability testing tools 510 and/or field returns 512. The purpose of performing this data correlation is to help identify which inspection steps, defect types, defect sizes, and/or metrology parameters are most likely to provide actionable data from which statistical outliers could be most effectively screened"); and

Price is silent on identifying the specific location of the killer defect. However, Teplinsky teaches determine, via a localization subsystem, one or more gap areas on the one or more semiconductor devices for defect-based test coverage based on the at least one apparent killer defect on the at least one semiconductor die of the plurality of semiconductor dies which passes the at least one test of the plurality of tests (Teplinsky, Figures 10A-10F, [0149]: Embodiments of the present disclosure relate to Geographic Part Average Test (GPAT) and Nearest Neighbor Residual (NNR). As an example, one outlier on a given area of one wafer can be determined, for example, one bad die on wafer edge compared to the rest of wafer edge. This can also be referred to as a bad die in a good neighborhood (BDGN).
By examining the system test results, particularly a subset of the test results, not only the system, but the die can be determined as an outlier; [0156], "various aspects of the present disclosure as described more fully herein may utilize system test data for systems incorporating specific components to identify components as outliers"; [0199], "FIG. 10F is a diagram illustrating component outlier detection 1060 according to various aspects of the present disclosure. The location 1065 associated with components d1 and d4 has been determined as an outlier location for the set of substrates based on the aggregated system test data illustrated in FIG. 10E").

Teplinsky further teaches report one or more reports based on the one or more gap areas in defect-based test coverage on the one or more semiconductor devices ([0156], "various aspects of the present disclosure as described more fully herein may utilize system test data for systems incorporating specific components to identify components as outliers"; Figure 7B, [0158], "As illustrated in FIG. 7B, system test data 725 is shown correlated with the location on the substrate of the component incorporated into each system that is tested"; [0256], Figure 17, "At block 1735, an updated electronic test protocol may be formed. The updated electronic test protocol may be based on the characteristics of the electronic components. At block 1740, the updated electronic test protocol may be communicated"); wherein the one or more reports include at least one metric to adjust at least one of the one or more semiconductor fabrication subsystems or the one or more test tool subsystems (Teplinsky, Figure 15, [0248], At block 1550, an outlier in the data subset may be identified. At block 1560, the information about the outlier may be communicated. For example, the information about the outlier may be communicated to a system manufacturer or a component manufacturer.
The information about the outlier may identify a board and/or a component to which the outlier corresponds) to mitigate the one or more gap areas on the one or more semiconductor devices for defect-based test coverage, wherein the mitigate the one or more gap areas includes targeting one or more care areas (Teplinsky, [0129], "it is possible to implement semiconductor quality solutions on incoming material, which are tuned by system performance, which can be measured at system (e.g., board) test. Additionally, it is possible to set up outlier detection to evaluate only the tests that impact the system performance. Re-binning or holding of wafers/lots before assembly can be implemented").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's system to incorporate Teplinsky's outlier location detection method through utilization and integration of component test data and system test data, with the benefits of improvements in yield/quality and increased testing efficiency (Teplinsky, [0037]).

Both Price and Teplinsky teach identifying outlier defects. Price and Teplinsky are silent on wherein the characterization subsystem applies one or more processes to separate defects which are apparent killer defects from defects that are not killer defects. However, Rathert teaches wherein the characterization subsystem applies one or more processes to separate defects which are apparent killer defects from defects that are not killer defects (Rathert, Figure 2, steps 210-212, [0117], "the method 200 includes a step 210 of identifying one or more at-risk (reads on killer defect) dies based on comparisons of manufacturing fingerprints of the one or more at-risk dies with the at least a portion of the manufacturing fingerprint of the failed die. In another embodiment, the method 200 includes a step 212 of recalling devices including the one or more additional dies.
[0118] In another embodiment, the step 210 includes identifying a subset of the semiconductor dies 106 having manufacturing fingerprints 104 similar to that of a failed die 108 (e.g., at-risk dies 110) based on one or more selected similarity metrics. In this regard, the at-risk dies 110 are predicted to be susceptible to failure under similar operating conditions as the failed die 108. Accordingly, a targeted recall may be initiated to include only the at-risk dies 110. [0119] The step 210 may include comparing manufacturing fingerprints 104 using any analysis technique known in the art such as, but not limited to, classification, sorting, clustering, outlier detection, signal response metrology, regression analysis, instance-based analysis"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's method of identifying outlier defects to incorporate Rathert's method and characterization analysis steps to identify the at-risk dies among the failed dies, with the benefits of improvements in yield/quality and increased testing efficiency (Rathert, [0117]-[0122]).

Regarding claim 18, the combination of Price, Teplinsky, and Rathert teaches the method of Claim 17. Price further teaches further comprising: receiving, via the characterization subsystem of the controller, the characterization measurements acquired by the one or more semiconductor fabrication subsystems during fabrication of the one or more semiconductor devices (Price, Figure 5, [0032]-[0033], "The inspection system 500 may include one or more inline defect inspection tools 502 communicatively coupled to one or more computer processors 504. The processors 504 may also be configured to receive the inspection results obtained by the inline defect inspection tools 502 and aggregate the inspection results to obtain a plurality of aggregated results for the plurality of wafers").
Regarding claim 19, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 17. Price further teaches wherein the one or more characterization subsystems include one or more characterization tools configured to perform at least one of one or more inline defect inspection processes or one or more metrology processes (Price, Figure 5, [0032], "The inspection system 500 may include one or more inline defect inspection tools 502 communicatively coupled to one or more computer processors 504. The inline defect inspection tool(s) 502 may be configured to inspect a plurality of layers of a plurality of wafers 506 utilizing various inline inspection techniques").

Regarding claim 20, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 17. Price is silent on wherein the characterization subsystem is configured to employ at least one of an advanced deep learning technique or a machine learning technique to determine the plurality of apparent killer defects on the one or more semiconductor devices based on the characterization measurements. However, Teplinsky teaches wherein the characterization subsystem is configured to employ at least one of an advanced deep learning technique or a machine learning technique to determine the plurality of apparent killer defects on the one or more semiconductor devices based on the characterization measurements (Teplinsky, [0219], "The system may use an aggregated historical data or may employ a more sophisticated machine learning techniques to optimize its outlier finding performance"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's system to incorporate Teplinsky's outlier location detection machine learning technique, with the benefit of more sophisticated machine learning techniques to optimize outlier finding performance (Teplinsky, [0219]).
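For context, the statistically based outlier screening that the cited references build on (part average testing, or PAT) can be illustrated with a minimal sketch. This is not code from Price, Teplinsky, or Rathert; the robust-limit rule (median plus or minus k scaled median absolute deviations) and all names are illustrative assumptions:

```python
import statistics

def pat_outliers(values, k=6.0):
    """Flag parts whose test value falls outside robust population limits.

    Part average testing (PAT) screens parts that pass ordinary spec
    limits but are statistical outliers relative to the population.
    Here the limits are median +/- k * (scaled MAD); the constant k
    and the robust-sigma choice are illustrative, not from the cited art.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    sigma = 1.4826 * mad  # MAD-to-sigma factor for a normal population
    lo, hi = med - k * sigma, med + k * sigma
    return [i for i, v in enumerate(values) if not (lo <= v <= hi)]
```

A part reading far from the rest of the population is flagged even though it may sit inside the datasheet limits, which is the "parts with abnormal characteristics (outliers)" idea the rejection attributes to PAT.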
Regarding claim 21, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 17. Price further teaches further comprising: receiving, via the testing subsystem of the controller, the test measurements for the one or more semiconductor devices acquired by the one or more test tool subsystems (Price, Figure 5, [0032], [0033], "The inspection system 500 may include one or more inline defect inspection tools 502 communicatively coupled to one or more computer processors 504. The processors 504 may also be configured to receive the inspection results obtained by the inline defect inspection tools 502 and aggregate the inspection results to obtain a plurality of aggregated results for the plurality of wafers").

Regarding claim 22, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 17. Price further teaches wherein the one or more test tool subsystems include one or more test tools configured to perform at least one of one or more electrical wafer sort processes, unit probe processes, class probe processes, or final test processes (Price, [0027], "As shown in FIG. 3, a stacked defect map of a wafer collected from multiple critical process steps may be analyzed against a latent defect probability histogram 300. As previously described, a latent defect probability may be calculated for each die on the wafer based on the number of stacked defects, modified by size, rough bin classification, die location, care area, layer step weighting, and/or other types of defect measurement").
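Price's [0027] passage, quoted in the claim 22 analysis, lists the inputs to a per-die latent defect probability (stacked defect count, size, bin classification, care area, layer step weighting) without giving a formula. The sketch below shows one plausible reading; the weights, modifiers, and squashing function are invented for illustration and are not Price's actual computation:

```python
import math

def latent_defect_probability(die_defects, layer_weights):
    """Score one die from its stacked (multi-layer) defect record.

    die_defects: list of (layer, size_um, in_care_area) tuples for the
    defects stacked onto this die across critical process steps.  Each
    defect contributes its layer's weight, scaled up for larger defects
    and for defects landing in a care area; the raw score is squashed
    into a 0..1 "latent defect probability".  All factors here are
    illustrative assumptions, not values from the cited reference.
    """
    score = 0.0
    for layer, size_um, in_care_area in die_defects:
        w = layer_weights.get(layer, 1.0)   # layer step weighting
        w *= 1.0 + size_um                  # size modifier
        if in_care_area:
            w *= 2.0                        # care-area modifier
        score += w
    return 1.0 - math.exp(-score / 10.0)    # squash raw score to (0, 1)
```

Dies whose probability exceeds a chosen cutoff would then be the candidates screened against the latent defect probability histogram described in the quote.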
Regarding claim 23, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 17. Price further teaches wherein the at least one semiconductor die of the plurality of semiconductor dies passes all tests of the plurality of tests (Price, [0016], "Embodiments of the present disclosure are directed to methods and systems for inline part average testing and latent reliability defect recognition and/or detection. Latent reliability defects refer to defects present in a device from manufacturing that pass initial quality tests but cause premature failures when activated in their working environment").

Regarding claim 24, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 17. Price is silent on identifying the specific location of the killer defect. However, Teplinsky teaches wherein the localization subsystem analyzes at least one of a location or a frequency of the at least one apparent killer defect of the plurality of apparent killer defects on the at least one semiconductor die of the plurality of semiconductor dies which passes the at least one test of the plurality of tests (Teplinsky, Figures 10A-10F, [0149], "Embodiments of the present disclosure relate to Geographic Part Average Test (GPAT) and Nearest Neighbor Residual (NNR). As an example, one outlier on a given area of one wafer can be determined, for example, one bad die on wafer edge compared to the rest of wafer edge. This can also be referred to as a bad die in a good neighborhood (BDGN). By examining the system test results, particularly a subset of the test results, not only the system, but the die can be determined as an outlier"; [0156], "various aspects of the present disclosure as described more fully herein may utilize system test data for systems incorporating specific components to identify components as outliers"; [0199], "FIG. 10F is a diagram illustrating component outlier detection 1060 according to various aspects of the present disclosure.
The location 1065 associated with components d1 and d4 has been determined as an outlier location for the set of substrates based on the aggregated system test data illustrated in FIG. 10E"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's system to incorporate Teplinsky's outlier location detection method through utilization and integration of component test data and system test data, with the benefits of reduced product returns, improvements in yield/quality, and increases in testing efficiency (Teplinsky, [0037]).

Regarding claim 27, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 17. Price is silent on wherein the one or more reports include at least one chart configured to evaluate the one or more gap areas on the one or more semiconductor devices for defect-based test coverage. However, Teplinsky teaches wherein the one or more reports include at least one chart configured to evaluate the one or more gap areas on the one or more semiconductor devices for defect-based test coverage (Teplinsky, [0199], "FIG. 10F is a diagram illustrating component outlier detection 1060 according to various aspects of the present disclosure. The location 1065 associated with components d1 and d4 has been determined as an outlier location for the set of substrates based on the aggregated system test data illustrated in FIG. 10E. [0200] Although a single set of system test data is illustrated in FIGS. 10C and 10D, other system test data can be utilized. Table 24 shows an example of test system data corresponding to the board IDs").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's system to incorporate Teplinsky's outlier location detection method through utilization and integration of component test data and system test data, with the benefits of reduced product returns, improvements in yield/quality, and increases in testing efficiency (Teplinsky, [0037]).

Regarding claim 28, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 27. Price further teaches wherein the at least one chart is configured to compare a test cover gap trend over a range of time for a particular semiconductor device design (Price, [0028], "FIG. 4 is a flow diagram depicting an embodiment of an inline part average testing (I-PAT) method 400 configured in accordance with the present disclosure. As shown in FIG. 4, a wafer fabricator may choose to identify starting material which will ultimately undergo burn-in reliability testing (step 402). The wafer fabricator may also choose to perform inspection and metrology on all wafers at each critical step (e.g., 100% inspection and metrology) during the fabrication process (step 404). It is contemplated that inspection recipes may be utilized to help find all potential defects. In some embodiments, raw defect data may be included and recorded for subsequent analysis using one or more database or data storage devices by the analytics system 210, a determination can be made that one or more test protocols associated with the system testing can be eliminated, reduced, increased, added, or the like"). It is understood that the I-PAT averaging method is performed over time.
Regarding claim 29, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 27. Price further teaches wherein the at least one chart is configured to compare a test cover gap for multiple semiconductor device designs (Price, [0006], "aggregate inspection results obtained from the one or more inspection tools to obtain a plurality of aggregated inspection results for the plurality of wafers; identify one or more statistical outliers among the plurality of wafers at least partially based on the plurality of aggregated inspection results obtained for the plurality of wafers").

Regarding claim 31, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 30. Price further teaches further comprising: generating, via the controller, one or more control signals based on the one or more adjustments to at least one of the fabrication, characterizing, or testing of the semiconductor devices (Price, [0033], Figures 4-5, "The processors 504 may also be configured to receive the inspection results obtained by the inline defect inspection tools 502 and aggregate the inspection results to obtain a plurality of aggregated results for the plurality of wafers. The processors 504 may process the data received from the burn-in reliability testing tools 510 and/or field returns 512 along with the data received from the inline defect inspection tools 502 to correlate the inline inspection data with the data received from the burn-in reliability testing tools 510 and/or field returns 512. The purpose of performing this data correlation is to help identify which inspection steps, defect types, defect sizes, and/or metrology parameters are most likely to provide actionable data from which statistical outliers could be most effectively screened, help disqualify/eliminate low correlation inspection steps and may help improve the overall correlation, which may in turn reduce overkill and underkill.
In some embodiments, wafers/dies that have been identified to have latent reliability issues may be reported on one or more display devices. Alternatively, wafers/dies that have been identified to have latent reliability issues may be identified or physically marked as defective or otherwise segregated for further evaluation, repurposing or rejected from entering the supply chain to help reduce the number of wafers/dies that may fail prematurely in the field.").

Regarding claim 32, the combination of Price, Teplinsky, and Rathert teaches the system of Claim 31. Price further teaches wherein the one or more control signals are configured to target select inline defect part average testing (I-PAT) care areas on the semiconductor devices (Price, Figures 4-5, [0017], "Methods and systems configured in accordance with the present disclosure may utilize inline part average (I-PAT) testing to provide latent reliability defect recognition. Part average testing (PAT) is a statistically based method for removing parts with abnormal characteristics (outliers) from the semiconductors supplied per guidelines established").

Regarding claim 33, Price teaches a system comprising: one or more semiconductor fabrication subsystems; one or more test tool subsystems; and a controller communicatively coupled to the one or more semiconductor fabrication subsystems and the one or more test tool subsystems, the controller including one or more processors configured to execute program instructions causing the one or more processors to: (Price, [0006], "The system may include one or more inspection tools configured to perform inline inspection and metrology on a plurality of wafers at a plurality of critical steps during wafer fabrication.
The system may also include one or more processors in communication with the one or more inspection tools"): determine, via a characterization subsystem, a plurality of apparent killer defects on one or more semiconductor devices based on characterization measurements of the one or more semiconductor devices acquired by the one or more semiconductor fabrication subsystems, wherein the one or more semiconductor devices include a plurality of semiconductor dies (Price, [0006], "aggregate inspection results obtained from the one or more inspection tools to obtain a plurality of aggregated inspection results for the plurality of wafers; identify one or more statistical outliers among the plurality of wafers at least partially based on the plurality of aggregated inspection results obtained for the plurality of wafers"); determine, via a testing subsystem, at least one semiconductor die of the plurality of semiconductor dies which passes at least one test of a plurality of tests based on test measurements acquired by the one or more test tool subsystems (Price, [0016], "Embodiments of the present disclosure are directed to methods and systems for inline part average testing and latent reliability defect recognition and/or detection. Latent reliability defects refer to defects present in a device from manufacturing that pass initial quality tests").

Price is silent on identifying the specific location of the killer defect. However, Teplinsky teaches determine, via a localization subsystem, one or more gap areas on the one or more semiconductor devices for defect-based test coverage based on the at least one apparent killer defect on the at least one semiconductor die of the plurality of semiconductor dies which passes the at least one test of the plurality of tests (Teplinsky, Figures 10A-10F, [0149], "Embodiments of the present disclosure relate to Geographic Part Average Test (GPAT) and Nearest Neighbor Residual (NNR).
As an example, one outlier on a given area of one wafer can be determined, for example, one bad die on wafer edge compared to the rest of wafer edge. This can also be referred to as a bad die in a good neighborhood (BDGN). By examining the system test results, particularly a subset of the test results, not only the system, but the die can be determined as an outlier"; [0156], "various aspects of the present disclosure as described more fully herein may utilize system test data for systems incorporating specific components to identify components as outliers"; [0199], "FIG. 10F is a diagram illustrating component outlier detection 1060 according to various aspects of the present disclosure. The location 1065 associated with components d1 and d4 has been determined as an outlier location for the set of substrates based on the aggregated system test data illustrated in FIG. 10E"); wherein the one or more reports include at least one metric to adjust at least one of the one or more semiconductor fabrication subsystems or the one or more test tool subsystems (Teplinsky, Figure 15, [0248], "At block 1550, an outlier in the data subset may be identified. At block 1560, the information about the outlier may be communicated. For example, the information about the outlier may be communicated to a system manufacturer or a component manufacturer. The information about the outlier may identify a board and/or a component to which the outlier corresponds") to mitigate the one or more gap areas on the one or more semiconductor devices for defect-based test coverage, wherein the mitigate the one or more gap areas includes targeting one or more care areas (Teplinsky, [0129], "it is possible to implement semiconductor quality solutions on incoming material, which are tuned by system performance, which can be measured at system (e.g., board) test. Additionally, it is possible to set up outlier detection to evaluate only the tests that impact the system performance.
Re-binning or holding of wafers/lots before assembly can be implemented"); report one or more reports based on the one or more gap areas in defect-based test coverage on the one or more semiconductor devices (Teplinsky, [0156], "various aspects of the present disclosure as described more fully herein may utilize system test data for systems incorporating specific components to identify components as outliers"; Figure 7B, [0158], "As illustrated in FIG. 7B, system test data 725 is shown correlated with the location on the substrate of the component incorporated into each system that is tested"; [0256], FIG. 17, "At block 1735, an updated electronic test protocol may be formed. The updated electronic test protocol may be based on the characteristics of the electronic components. At block 1740, the updated electronic test protocol may be communicated"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's system to incorporate Teplinsky's outlier location detection method through utilization and integration of component test data and system test data, with the benefits of improvements in yield/quality and increases in testing efficiency (Teplinsky, [0037]). Both Price and Teplinsky teach identifying outlier defects,
but are silent on correlate, via a correlation subsystem, the characterization measurements with the test measurements to determine at least one apparent killer defect of the plurality of apparent killer defects on the at least one semiconductor die of the plurality of semiconductor dies which passes the at least one test of the plurality of tests. However, Rathert teaches correlate, via a correlation subsystem, the characterization measurements with the test measurements to determine at least one apparent killer defect of the plurality of apparent killer defects on the at least one semiconductor die of the plurality of semiconductor dies which passes the at least one test of the plurality of tests (Rathert, Figure 2, steps 210-212, [0117], "the method 200 includes a step 210 of identifying one or more at-risk (reads on killer defect) dies based on comparisons of manufacturing fingerprints of the one or more at-risk dies with the at least a portion of the manufacturing fingerprint of the failed die. In another embodiment, the method 200 includes a step 212 of recalling devices including the one or more additional dies. [0118] In another embodiment, the step 210 includes identifying a subset of the semiconductor dies 106 having manufacturing fingerprints 104 similar to that of a failed die 108 (e.g., at-risk dies 110) based on one or more selected similarity metrics. In this regard, the at-risk dies 110 are predicted to be susceptible to failure under similar operating conditions as the failed die 108. Accordingly, a targeted recall may be initiated to include only the at-risk dies 110.
[0119] The step 210 may include comparing manufacturing fingerprints 104 using any analysis technique known in the art such as, but not limited to, classification, sorting, clustering, outlier detection, signal response metrology, regression analysis, instance-based analysis"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Price's method of identifying outlier defects to incorporate Rathert's method and characterization analysis steps to identify the at-risk dies among the failed dies, with the benefits of improvements in yield/quality and increases in testing efficiency (Rathert, [0117]-[0122]).

Regarding claim 34, the combination of Price, Teplinsky, and Rathert teaches the system of claim 1. Price further teaches wherein reporting of the one or more reports is in response to a previous user selected time interval (Price, [0078], "In one non-limiting example, inline characterization may be performed at a select (e.g., critical) layer. At user-selectable time intervals (e.g., quarterly, monthly, weekly, or the like)").

Regarding claim 35, the combination of Price, Teplinsky, and Rathert teaches the method of claim 17. Price further teaches wherein reporting of the one or more reports is in response to a previous user selected time interval (Price, [0078], "In one non-limiting example, inline characterization may be performed at a select (e.g., critical) layer. At user-selectable time intervals (e.g., quarterly, monthly, weekly, or the like)").

Regarding claim 36, the combination of Price and Teplinsky teaches the system of claim 33. Price further teaches wherein reporting of the one or more reports is in response to a previous user selected time interval (Price, [0078], "In one non-limiting example, inline characterization may be performed at a select (e.g., critical) layer. At user-selectable time intervals (e.g., quarterly, monthly, weekly, or the like)").
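For orientation, the GPAT / nearest-neighbor-residual screening that the Teplinsky citations describe (flagging a "bad die in a good neighborhood") can be sketched as a wafer-map comparison of each die against its immediate neighbors. The grid representation, threshold, and function name below are assumptions for illustration only, not Teplinsky's implementation:

```python
import statistics

def bdgn_outliers(wafer_map, threshold):
    """Flag bad-die-in-a-good-neighborhood (BDGN) outliers.

    wafer_map: dict mapping (row, col) die positions to a test value.
    A die is flagged when its value deviates from the median of its
    up-to-eight immediate neighbors by more than `threshold`, i.e. a
    simple nearest-neighbor residual in the spirit of geographic PAT.
    """
    flagged = []
    for (r, c), v in wafer_map.items():
        neighbors = [wafer_map[(r + dr, c + dc)]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0) and (r + dr, c + dc) in wafer_map]
        if neighbors and abs(v - statistics.median(neighbors)) > threshold:
            flagged.append((r, c))
    return flagged
```

Because the comparison is local, a die on an otherwise-good wafer edge is judged against its own neighborhood rather than the whole wafer, which is the distinction GPAT draws from plain population-wide PAT.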
Conclusion

Citation of Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Subramaniam et al. (US 7,494,829 B2) recites "Systems and methods for identification of outlier semiconductor devices using data-driven statistical characterization are described herein. At least some preferred embodiments include a method that includes identifying a plurality of sample semiconductor chips that fail a production test as a result of subjecting the plurality of sample semiconductor chips to a stress inducing process, identifying at least one correlation between variations in a first sample parameter and variations in a second sample parameter (the sample parameters associated with the plurality of sample semiconductor chips), identifying as a statistical outlier chip any of a plurality of production semiconductor chips that pass the production test and that further do not conform to a parameter constraint generated based upon the at least one correlation identified and upon data associated with at least some of the plurality of production semiconductor chips, and segregating the statistical outlier chip from the plurality of production semiconductor chips" (abstract).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DILARA SULTANA, whose telephone number is (571) 272-3861. The examiner can normally be reached Mon-Fri, 9 AM-5:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, EMAN ALKAFAWI, can be reached at (571) 272-4448.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DILARA SULTANA/
Examiner, Art Unit 2858

/EMAN A ALKAFAWI/
Supervisory Patent Examiner, Art Unit 2858

3/18/2026

Prosecution Timeline

May 14, 2021: Application Filed
Jun 25, 2021: Response after Non-Final Action
Jan 03, 2024: Non-Final Rejection (§103)
Jul 11, 2024: Response Filed
Sep 26, 2024: Final Rejection (§103)
Feb 28, 2025: Request for Continued Examination
Mar 04, 2025: Response after Non-Final Action
Mar 20, 2025: Non-Final Rejection (§103)
Jun 27, 2025: Response Filed
Sep 10, 2025: Final Rejection (§103)
Dec 12, 2025: Request for Continued Examination
Dec 31, 2025: Response after Non-Final Action
Mar 07, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589459
IN-SITU GRINDING WHEEL TOPOGRAPHY, POWER MONITORING, AND FEED/SPEED SCHEDULING SYSTEMS AND METHODS
2y 5m to grant Granted Mar 31, 2026
Patent 12571810
FLUID REMAINING AMOUNT MANAGEMENT DEVICE, ANALYSIS SYSTEM, FLUID REMAINING AMOUNT MANAGEMENT METHOD AND NON-TRANSITORY READABLE MEDIUM STORING FLUID REMAINING AMOUNT MANAGEMENT PROGRAM
2y 5m to grant Granted Mar 10, 2026
Patent 12571823
DETECTION OF LOSS OF NEUTRAL
2y 5m to grant Granted Mar 10, 2026
Patent 12560505
DETECTION OF STRUCTURAL ANOMALIES IN A PIPELINE NETWORK
2y 5m to grant Granted Feb 24, 2026
Patent 12540957
VOLTAGE MONITORING SYSTEM AND METHOD FOR HARVESTING ENERGY IN A COMPUTATIONAL ENVIRONMENT
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
81%
Grant Probability
95%
With Interview (+14.2%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 125 resolved cases by this examiner. Grant probability derived from career allow rate.
