DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on June 08, 2023, December 08, 2023, December 19, 2024, and August 26, 2025, are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the Examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
ultrasonic flaw-detection device in claim 1;
defect candidate group selection unit in claims 1 and 3;
defect determination unit in claims 1, 7, and 9;
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
These limitations are interpreted broadly as hardware and software modules as explained on pages 8-9 of the Specification. These modules may utilize generic computer components such as memory and at least one processor for executing instructions, and a non-transitory computer readable medium, such as a hard disk, may be included for storage of software and recorded data.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-12, and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over Townsend (US 2018/0315180 A1) in view of Al-Hashmy et al. (US 2022/0018811 A1), hereafter Al-Hashmy.
Regarding claim 1, Townsend teaches an ultrasonic flaw-detection system comprising:
an ultrasonic flaw-detection device configured to transmit an ultrasonic wave to a detection target, collect an ultrasonic echo wave reflected from the detection target, and then generate a signal data ([0070] “Ultrasonic NDT scanning is performed by an array of probes 91, which emit sound waves that propagate inside the pipe structure 90 and receive the echoes resulting from interactions with its front and back walls (FIG. 3). The amplitude of the echoes is measured along two dimensions, length x which is measured in a direction parallel to the length of the pipe 90, and time t. The resulting data may be represented as a numerical matrix with echo amplitudes, where each row i corresponds to a given propagation time t, and each column j corresponds to a given horizontal position xj.”);
a signal data preprocessor configured to preprocess the signal data (Paragraphs 0048-0051 and Fig. 2(a-d) show preprocessing of signal data to remove noise and irrelevant signals. 0048 specifically states that the noise removal can be performed on raw signal data before converting the data to image form.);
a defect candidate group selection unit configured to select a defect candidate group based on the preprocessed signal data and generate defect candidate signal data based on the selection (Townsend teaches clustering areas in the signal matrix based on their likelihood to contain defects. In Fig. 2(e), resulting clusters are shown, and cluster 1 is selected for examination since it is the most relevant area to check for defects. [0060] “The set of objects remaining after the filtering are then sorted into clusters according to a predefined criterion.” [0067] “For example, the area below a back wall in an NDT scan is not necessary for defect detection, and can therefore be discarded so that defect detection can be focused on the relevant area, as shown in FIG. 2(i). Similar processes may be applied to any analogous scenario in which layers of interest define boundaries for unwanted data.” Additionally, Townsend teaches identifying discontinuities in the signal data within cluster 1, and these areas are expected to contain a defect and are selected for examination. [0042] “To assist in the detection of a defect in the portion of interest identified using steps S1 to S4, in step S5 a location of a gap in the data forming the cluster identified as the portion of interest is identified as a site of a potential defect. For example, gaps in the back wall of an item undergoing testing may be identified as ranges in the axis parallel to the wall's bounding box (usually x-axis) which do not contain any part of any back wall object.” [0045] “Gap identifier 5 is configured to identify a location of a gap in the data forming the cluster as the portion of interest as a site of a potential defect.” [0063] “To assist in the detection of a defect in the layer of interest, a location of a gap in the data forming the cluster identified as the layer of interest is identified as a site of a potential defect. As shown by the dashed lines in FIG. 2(g) the position of missing data, i.e. discontinuities or gaps in the layer of interest, are noted.”);
an image data generator configured to generate image data based on the defect candidate signal data included in the defect candidate group (Townsend teaches selecting data gaps from the matrix signal data as defect candidate locations. This process can be applied to an ultrasonic image or signal matrix, but after identification of defect candidate locations, the locations are visualized as an image for an operator to inspect defects. [0065-0066] “Missing data, i.e. discontinuities in a layer, may be presented by drawing any shape that encloses the corresponding range in list LC, for example by drawing vertical lines either side of the gap or by drawing a circle with the centre of the range as its centroid and the length of the range as its diameter. Gap boundaries may be drawn in any thickness, color, and may or may not be filled. The annotated image may be displayed on screen, saved to file, printed or presented by any other means or combination of means.” [0082-0083] “In this case the user also wishes to identify gaps in the back wall. Every range of values in the x-axis (the layer axis) which contains no pixels from any objects in the cluster corresponding to the layer of interest, as shown in FIG. 5(h), is added to a list L1=[G1,0, G1,1, G1,2]=[[50, 60], [300, 305], [610, 630]]. The location of the back wall, and its gaps, are presented to the user, as shown in FIG. 5(i). A box is drawn around the back wall, and vertical lines are drawn on either side of each gap.”).
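Note: purely for illustration of the gap-identification process quoted above (Townsend [0042], [0063], [0082]), and not part of Townsend's disclosure, the construction of a gap list such as L1 = [[50, 60], [300, 305], [610, 630]] — i.e., x-ranges of the layer axis containing no pixels of the cluster of interest — may be sketched as follows (all names are illustrative only):

```python
import numpy as np

def find_gaps(layer_mask: np.ndarray) -> list:
    """Return [start, end] x-ranges containing no pixels of the layer cluster.

    layer_mask is a boolean matrix (time x position); a column with no True
    entry lies in a gap, mirroring the list LC = [GC,0, GC,1, ...] of
    Townsend [0063].
    """
    empty = ~layer_mask.any(axis=0)  # True where a column has no layer pixels
    gaps, start = [], None
    for x, is_empty in enumerate(empty):
        if is_empty and start is None:
            start = x                      # gap opens at this column
        elif not is_empty and start is not None:
            gaps.append([start, x - 1])    # gap closes at previous column
            start = None
    if start is not None:                  # gap runs to the right edge
        gaps.append([start, len(empty) - 1])
    return gaps
```

Each returned range is a site of a potential defect in the sense of Townsend [0042]; the ranges may then be annotated on the displayed image (vertical lines, circles, or boxes per [0065]).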
As shown above, Townsend teaches the process of selecting candidate defect locations from ultrasonic signal data and displaying an image of the locations to an operator who can view the images and identify defects. [0044] “After the portion of interest is identified, the portion of interest in the image may be visually analyzed by an operator to identify one or more defects in the item under test.” Thus, Townsend fails to teach utilization of machine learning (as required by the defect determination unit) for determining defects in each defect candidate group. More specifically, Townsend fails to teach a defect determination unit configured to determine whether there is a defect in the defect candidate group based on the image data.
However, Al-Hashmy teaches a defect determination unit configured to determine whether there is a defect in the defect candidate group based on the image data (Figs. 3-4 show the Aberration Detection System (ADS) taught by Al-Hashmy. The ADS receives an ultrasonic image of a damaged objects, such as a wind turbine blade, and utilizes machine learning to detect and classify defects in the images. [0008] “…receiving raw ultrasound scan image data of a test section comprising the material having internal defects or voids; sending an image rendering signal to cause a computer resource asset to display an ultrasound scan image based on the raw ultrasound scan image data; and receiving a label corresponding to the ultrasound scan image, the label including an aberration type, an aberration location or an aberration dimension of each aberration on the test section, wherein the aberration type comprises a harmful or potentially harmful aberration.”).
Townsend and Al-Hashmy are analogous art because both teach methods of utilizing ultrasound waves for analyzing objects for defects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Townsend’s invention by automating the final step of defect detection. As shown above, Townsend teaches the process of detecting defect candidate groups from an ultrasonic signal and outputting an ultrasonic image of a group for an operator to examine, but utilizing a machine learning technique to analyze the image would save additional time and reduce overall effort, which is a purpose of Townsend’s invention ([Townsend 0012] “By automating the initial part of the inspection process in this way, quality control engineers/technicians can complete their inspections in more efficient ways, taking less inspection time per test object and hence reducing the overall human effort and costs.”). Additionally, Al-Hashmy shares this motivation by identifying the need for a cost-effective technology solution for defect detection ([Al-Hashmy 0004] “Since both metallic and non-metallic assets are commonly used in a variety of industries, there exists a great unfulfilled need for a cost-effective and reliable technology solution for inspecting, detecting, monitoring, analyzing or assessing aberrations in either or both metallic or nonmetallic assets.”).
Note to Applicant: Although not utilized in the rejections in this Office action, Konishi (US 2020/0096454 A1; from the IDS submitted December 08, 2023) should also be considered if future amendments are made to claim 1. Konishi operates on image data rather than signal data, but Konishi teaches the same methodology as claim 1 and would be obvious to combine with Townsend and/or Al-Hashmy. In Figs. 3-16, Konishi directly shows analyzing an ultrasound image for potential defects, identifying a cluster around each region that has a potential defect, and further analyzing each cluster to determine whether each cluster definitively contains a defect.
Regarding claim 2, Townsend and Al-Hashmy teach the ultrasonic flaw-detection system of claim 1. Townsend further teaches wherein the signal data preprocessor is configured to:
remove noise from the signal data ([0067] “For the back wall detection problem, or any analogous scenario, noise or any other unwanted data may be identified using the locations of any layers of interest. For example, the area below a back wall in an NDT scan is not necessary for defect detection, and can therefore be discarded so that defect detection can be focused on the relevant area, as shown in FIG. 2(i). Similar processes may be applied to any analogous scenario in which layers of interest define boundaries for unwanted data.” Additionally, Al-Hashmy teaches noise removal in detail in 0040.),
extract poles from the signal data (Figs. 2(g) and 2(h) show selecting discontinuities in the object of interest by analyzing the signal matrix data and extracting poles (areas with higher amplitudes) in the object of interest). [0063] “To assist in the detection of a defect in the layer of interest, a location of a gap in the data forming the cluster identified as the layer of interest is identified as a site of a potential defect. As shown by the dashed lines in FIG. 2(g) the position of missing data, i.e. discontinuities or gaps in the layer of interest, are noted. Given a cluster C corresponding to a layer of interest, every range of values in the layer axis (i.e. the axis parallel to the layers with respect to the bounding boxes) which contains no pixels from any objects in C is added to a list LC·=[GC,0, GC,1, . . . , GC,g], where each GC,i is a gap, and g is the total number of gaps. This may be repeated for every layer of interest.”), and
divide the signal data into a plurality of clusters having a certain size based on the pole ([0065] “Missing data, i.e. discontinuities in a layer, may be presented by drawing any shape that encloses the corresponding range in list LC, for example by drawing vertical lines either side of the gap or by drawing a circle with the centre of the range as its centroid and the length of the range as its diameter. Gap boundaries may be drawn in any thickness, color, and may or may not be filled.” [0066] “The annotated image may be displayed on screen, saved to file, printed or presented by any other means or combination of means.”).
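Note: purely for illustration of the claim 2 preprocessing chain mapped above — noise removal, pole (high-amplitude peak) extraction, and division into fixed-size clusters around each pole — the following sketch shows one possible reading of those steps. It is not part of either cited reference's disclosure, and all names and parameters are illustrative only:

```python
import numpy as np

def preprocess(signal: np.ndarray, noise_floor: float, window: int):
    """Illustrative preprocessing of a (time x position) echo-amplitude matrix:
    suppress sub-threshold noise, extract poles (local amplitude maxima along
    the position axis), and cut a fixed-size window (cluster) around each pole.
    """
    # Noise removal: zero out samples below the noise floor
    cleaned = np.where(np.abs(signal) >= noise_floor, signal, 0.0)

    # Pole extraction: positions whose peak amplitude is a local maximum
    col_peaks = np.abs(cleaned).max(axis=0)
    poles = [x for x in range(1, len(col_peaks) - 1)
             if col_peaks[x] > 0
             and col_peaks[x] >= col_peaks[x - 1]
             and col_peaks[x] >= col_peaks[x + 1]]

    # Clustering: a fixed-width slice of the matrix centered on each pole
    clusters = [cleaned[:, max(0, p - window): p + window + 1] for p in poles]
    return poles, clusters
```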
Regarding claim 3, Townsend and Al-Hashmy teach the ultrasonic flaw-detection system of claim 2. Townsend further teaches wherein the defect candidate group selection unit is configured to: determine whether a defect is included in the signal data belonging to each cluster based on a deep learning algorithm that uses each of the plurality of clusters as an input, and select the cluster determined to include a defect as the defect candidate group (Townsend teaches clustering areas in the signal matrix based on their likelihood to contain defects. In Fig. 2(e), resulting clusters are shown, and cluster 1 is selected for examination since it is the most relevant area to check for defects. Townsend also teaches selecting data gaps within a cluster from the matrix signal data as defect candidate locations. This process can be applied to an ultrasonic image or signal matrix, but after identification of defect candidate locations, the locations are visualized as an image for an operator to inspect defects. [0065-0066] “Missing data, i.e. discontinuities in a layer, may be presented by drawing any shape that encloses the corresponding range in list LC, for example by drawing vertical lines either side of the gap or by drawing a circle with the centre of the range as its centroid and the length of the range as its diameter. Gap boundaries may be drawn in any thickness, color, and may or may not be filled. The annotated image may be displayed on screen, saved to file, printed or presented by any other means or combination of means.” [0082-0083] “In this case the user also wishes to identify gaps in the back wall. Every range of values in the x-axis (the layer axis) which contains no pixels from any objects in the cluster corresponding to the layer of interest, as shown in FIG. 5(h), is added to a list L1=[G1,0, G1,1, G1,2]=[[50, 60], [300, 305], [610, 630]]. The location of the back wall, and its gaps, are presented to the user, as shown in FIG. 5(i). A box is drawn around the back wall, and vertical lines are drawn on either side of each gap.”).
Regarding claim 5, Townsend and Al-Hashmy teach the ultrasonic flaw-detection system of claim 1. Townsend teaches detecting defect candidate clusters in ultrasonic signal matrix data and then converting the signal to ultrasound images for further defect detection, and Al-Hashmy further teaches wherein the image data generator is configured to generate a B-Scan image data and a C-Scan image data on the detection target, based on the signal data ([0039] “In a non-limiting embodiment, the solution can work with UT scan image data, such as, for example, C-scan image data. The UT image data can include, for example, A-scan ultrasound image, B-scan ultrasound image data, 0-degree advanced C-scan image data, angled C-scan image data, or D-scan ultrasound image data.”).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Townsend’s invention by using a B-Scan and a C-Scan image. Townsend’s method is applied to ultrasound signal matrix data and/or image data (see the input of Figure 1(b)), and Townsend teaches converting the signal matrix to image data for further human analysis [0065-0066]. Al-Hashmy teaches performing the further analysis using machine learning on B-Scan and/or C-Scan images. Thus, any person of ordinary skill in the art could reasonably be expected to utilize B-Scan or C-Scan images when combining the inventions of Townsend and Al-Hashmy (see the rationale of claim 1), because converting signal matrix data into B-Scan and C-Scan images is a well-known and routine process in the art.
Regarding claim 6, Townsend and Al-Hashmy teach the ultrasonic flaw-detection system of claim 5. Townsend further teaches wherein the image data generator is configured to generate the image data on an area in which the defect candidate group is included in the detection target, based on the signal data included in the defect candidate group (Townsend teaches clustering areas in the signal matrix based on their likelihood to contain defects. In Fig. 2(e), resulting clusters are shown, and cluster 1 is selected for examination since it is the most relevant area to check for defects. Townsend also teaches selecting data gaps within a cluster from the matrix signal data as defect candidate locations. This process can be applied to an ultrasonic image or signal matrix, but after identification of defect candidate locations, the locations are visualized as an image for an operator to inspect defects. [0065-0066] “Missing data, i.e. discontinuities in a layer, may be presented by drawing any shape that encloses the corresponding range in list LC, for example by drawing vertical lines either side of the gap or by drawing a circle with the centre of the range as its centroid and the length of the range as its diameter. Gap boundaries may be drawn in any thickness, color, and may or may not be filled. The annotated image may be displayed on screen, saved to file, printed or presented by any other means or combination of means.” [0082-0083] “In this case the user also wishes to identify gaps in the back wall. Every range of values in the x-axis (the layer axis) which contains no pixels from any objects in the cluster corresponding to the layer of interest, as shown in FIG. 5(h), is added to a list L1=[G1,0, G1,1, G1,2]=[[50, 60], [300, 305], [610, 630]]. The location of the back wall, and its gaps, are presented to the user, as shown in FIG. 5(i). A box is drawn around the back wall, and vertical lines are drawn on either side of each gap.”).
However, Townsend is not specific about what type of scan image to generate; thus, Townsend fails to teach specifically utilizing B-Scan image data and C-Scan image data. Al-Hashmy, however, teaches utilizing B-Scan image data and C-Scan image data ([0039] “In a non-limiting embodiment, the solution can work with UT scan image data, such as, for example, C-scan image data. The UT image data can include, for example, A-scan ultrasound image, B-scan ultrasound image data, 0-degree advanced C-scan image data, angled C-scan image data, or D-scan ultrasound image data.”).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Townsend’s invention by using a B-Scan and a C-Scan image. Townsend’s method is applied to ultrasound signal matrix data and/or image data (see the input of Figure 1(b)), and Townsend teaches converting the signal matrix to image data for further human analysis [0065-0066]. Al-Hashmy teaches performing the further analysis using machine learning on B-Scan and/or C-Scan images. Thus, any person of ordinary skill in the art could reasonably be expected to utilize B-Scan or C-Scan images when combining the inventions of Townsend and Al-Hashmy (see the rationale of claim 1), because converting signal matrix data into B-Scan and C-Scan images is a well-known and routine process in the art.
Regarding claim 7, Townsend and Al-Hashmy teach the ultrasonic flaw-detection system of claim 1. Al-Hashmy further teaches wherein the defect determination unit is configured to determine whether each of the defect candidate groups has a defect based on a deep learning algorithm using the image data as an input ([0060] “The ADS system 100 can include at least one machine learning platform. The ADS system 100 includes a bus 105, a processor 110 and a storage 120. The ADS system 100 can include a network interface 130, an input-output (IO) interface 140, a driver unit 150, an aberration detection and evaluation (ADE) stack 160…” [0061] “The ADE stack 160 can include a feature extraction unit 162, a classification unit 164, an aberration predictor 166, and a labeler unit 168. The ADE stack 160 can include a machine learning (ML) platform, including, for example, one or more feedforward or feedback neural networks. The ML platform can include, for example, an artificial neural network (ANN), a convolutional neural network (CNN), a deep convolutional neural network (DCNN), a recurrent convolutional neural network (RCNN), a Mask-RCNN,…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Townsend’s invention by using machine learning for automating the step of defect detection. Townsend teaches automatically identifying a candidate defect group in signal matrix data and generating ultrasonic images for human inspection, but utilizing a machine learning technique to analyze the image would save additional time and reduce overall effort, which is a purpose of Townsend’s invention ([Townsend 0012] “By automating the initial part of the inspection process in this way, quality control engineers/technicians can complete their inspections in more efficient ways, taking less inspection time per test object and hence reducing the overall human effort and costs.”). Additionally, Al-Hashmy shares this motivation by identifying the need for a cost-effective technology solution for defect detection ([Al-Hashmy 0004] “Since both metallic and non-metallic assets are commonly used in a variety of industries, there exists a great unfulfilled need for a cost-effective and reliable technology solution for inspecting, detecting, monitoring, analyzing or assessing aberrations in either or both metallic or nonmetallic assets.”).
Regarding claim 8, Townsend and Al-Hashmy teach the ultrasonic flaw-detection system of claim 7. Al-Hashmy further teaches wherein the deep learning algorithm is a you only look once (YOLO) algorithm or a Faster R-CNN algorithm ([0061] “The ADE stack 160 can include a machine learning (ML) platform, including, for example, one or more feedforward or feedback neural networks. The ML platform can include, for example, an artificial neural network (ANN), a convolutional neural network (CNN), a deep convolutional neural network (DCNN), a recurrent convolutional neural network (RCNN), a Mask-RCNN…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Townsend’s invention by automating the step of defect detection (See the rationale applied to claim 7 above). Additionally, the choice of a YOLO, R-CNN, or any other machine learning algorithm would be obvious to one of ordinary skill applying machine learning to analyze defects in ultrasonic images. As shown in 0061 of Al-Hashmy, many different machine learning options could be employed for this task.
Regarding claim 9, Townsend and Al-Hashmy teach the ultrasonic flaw-detection system of claim 7. Al-Hashmy further teaches wherein the defect determination unit is configured to output whether there is a defect for each of the defect candidate groups and output, when there is a defect, a bounding box that surrounds the corresponding defect ([0079] “The aberration predictor 166 can be arranged to receive the resultant image cells and predict aberrations that might exist in the asset 10, including, for example, on an outer surface, in a wall portion, or an inner surface of the asset 10. The aberration predictor 166 can generate a confidence score for each image cell that indicates the likelihood that a bounding box includes an aberration. The aberration predictor 166 can interact with the classification unit 164 and perform bounding box classification, refinement and scoring based on the aberrations in the image represented by the UT image data.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Townsend’s invention by automating the step of defect detection (See the rationale applied to claim 7 above). Additionally, the choice of utilizing a bounding box would have been obvious to one of ordinary skill implementing a CNN for detecting a defect area in an image. Townsend taught annotating images to visually show areas with defects [0064-0067].
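Note: purely for illustration of the claim 9 output format mapped above — a per-candidate defect determination plus, where a defect is found, a bounding box with a confidence score (Al-Hashmy [0079]) — the following sketch shows the shape of such an output. It is not part of either reference's disclosure; the `model` callable stands in for any trained detector, and all names are illustrative only:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Detection:
    has_defect: bool
    box: Optional[Tuple[int, int, int, int]]  # (x0, y0, x1, y1), if a defect is found
    score: float                              # detector confidence

def determine(candidates, model, threshold=0.5):
    """For each candidate image, report whether it contains a defect and,
    if so, the bounding box surrounding it; `model` returns (box, score)."""
    results = []
    for image in candidates:
        box, score = model(image)
        if score >= threshold:
            results.append(Detection(True, box, score))   # defect: output its box
        else:
            results.append(Detection(False, None, score)) # no defect: no box
    return results
```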
Regarding claim 10, Townsend teaches an ultrasonic flaw-detection method by the ultrasonic flaw-detection system, the method comprising:
transmitting an ultrasonic wave to a detection target, collecting an ultrasonic echo wave reflected from the detection target, and then generating a signal data ([0070] “Ultrasonic NDT scanning is performed by an array of probes 91, which emit sound waves that propagate inside the pipe structure 90 and receive the echoes resulting from interactions with its front and back walls (FIG. 3). The amplitude of the echoes is measured along two dimensions, length x which is measured in a direction parallel to the length of the pipe 90, and time t. The resulting data may be represented as a numerical matrix with echo amplitudes, where each row i corresponds to a given propagation time t, and each column j corresponds to a given horizontal position xj.”);
preprocessing the signal data (Paragraphs 0048-0051 and Fig. 2(a-d) show preprocessing of signal data to remove noise and irrelevant signals. 0048 specifically states that the noise removal can be performed on raw signal data before converting the data to image form.);
selecting a defect candidate group based on the preprocessed signal data and generating defect candidate signal data based on the selection (Townsend teaches clustering areas in the signal matrix based on their likelihood to contain defects. In Fig. 2(e), resulting clusters are shown, and cluster 1 is selected for examination since it is the most relevant area to check for defects. [0060] “The set of objects remaining after the filtering are then sorted into clusters according to a predefined criterion.” [0067] “For example, the area below a back wall in an NDT scan is not necessary for defect detection, and can therefore be discarded so that defect detection can be focused on the relevant area, as shown in FIG. 2(i). Similar processes may be applied to any analogous scenario in which layers of interest define boundaries for unwanted data.” Additionally, Townsend teaches identifying discontinuities in the signal data within cluster 1, and these areas are expected to contain a defect and are selected for examination. [0042] “To assist in the detection of a defect in the portion of interest identified using steps S1 to S4, in step S5 a location of a gap in the data forming the cluster identified as the portion of interest is identified as a site of a potential defect. For example, gaps in the back wall of an item undergoing testing may be identified as ranges in the axis parallel to the wall's bounding box (usually x-axis) which do not contain any part of any back wall object.” [0045] “Gap identifier 5 is configured to identify a location of a gap in the data forming the cluster as the portion of interest as a site of a potential defect.” [0063] “To assist in the detection of a defect in the layer of interest, a location of a gap in the data forming the cluster identified as the layer of interest is identified as a site of a potential defect. As shown by the dashed lines in FIG. 2(g) the position of missing data, i.e. discontinuities or gaps in the layer of interest, are noted.”);
generating image data based on the defect candidate signal data included in the defect candidate group (Townsend teaches selecting data gaps from the matrix signal data as defect candidate locations. This process can be applied to an ultrasonic image or signal matrix, but after identification of defect candidate locations, the locations are visualized as an image for an operator to inspect defects. [0065-0066] "Missing data, i.e. discontinuities in a layer, may be presented by drawing any shape that encloses the corresponding range in list LC, for example by drawing vertical lines either side of the gap or by drawing a circle with the centre of the range as its centroid and the length of the range as its diameter. Gap boundaries may be drawn in any thickness, color, and may or may not be filled. The annotated image may be displayed on screen, saved to file, printed or presented by any other means or combination of means." [0082-0083] "In this case the user also wishes to identify gaps in the back wall. Every range of values in the x-axis (the layer axis) which contains no pixels from any objects in the cluster corresponding to the layer of interest, as shown in FIG. 5(h), is added to a list L1=[G1,0, G1,1, G1,2]=[[50, 60], [300, 305], [610, 630]]. The location of the back wall, and its gaps, are presented to the user, as shown in FIG. 5(i). A box is drawn around the back wall, and vertical lines are drawn on either side of each gap.").
As shown above, Townsend teaches the process of selecting candidate defect locations from ultrasonic signal data and displaying an image of the locations to an operator who can view the images and identify defects. [0044] "After the portion of interest is identified, the portion of interest in the image may be visually analyzed by an operator to identify one or more defects in the item under test." Thus, Townsend fails to teach utilization of machine learning (as required by the defect determination unit) for determining defects in each defect candidate group. More specifically, Townsend fails to teach determining whether there is a defect in the defect candidate group based on the image data.
However, Al-Hashmy teaches determining whether there is a defect in the defect candidate group based on the image data (Figs. 3-4 show the Aberration Detection System (ADS) taught by Al-Hashmy. The ADS receives an ultrasonic image of a damaged object, such as a wind turbine blade, and utilizes machine learning to detect and classify defects in the images. [0008] "…receiving raw ultrasound scan image data of a test section comprising the material having internal defects or voids; sending an image rendering signal to cause a computer resource asset to display an ultrasound scan image based on the raw ultrasound scan image data; and receiving a label corresponding to the ultrasound scan image, the label including an aberration type, an aberration location or an aberration dimension of each aberration on the test section, wherein the aberration type comprises a harmful or potentially harmful aberration.").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Townsend's invention by automating the final step of defect detection. As shown above, Townsend teaches the process of detecting defect candidate groups from ultrasonic signal data and outputting an ultrasonic image of a group for an operator to examine, but utilizing a machine learning technique to analyze the image would save additional time and reduce overall effort, which is a purpose of Townsend's invention ([Townsend 0012] "By automating the initial part of the inspection process in this way, quality control engineers/technicians can complete their inspections in more efficient ways, taking less inspection time per test object and hence reducing the overall human effort and costs."). Additionally, Al-Hashmy shares this motivation by identifying the need for a cost-effective technology solution for defect detection ([Al-Hashmy 0004] "Since both metallic and non-metallic assets are commonly used in a variety of industries, there exists a great unfulfilled need for a cost-effective and reliable technology solution for inspecting, detecting, monitoring, analyzing or assessing aberrations in either or both metallic or nonmetallic assets.").
Regarding claim 11, Townsend and Al-Hashmy teach the ultrasonic flaw-detection method of claim 10. Townsend further teaches wherein the preprocessing the signal data comprises:
removing noise from the signal data ([0067] “For the back wall detection problem, or any analogous scenario, noise or any other unwanted data may be identified using the locations of any layers of interest. For example, the area below a back wall in an NDT scan is not necessary for defect detection, and can therefore be discarded so that defect detection can be focused on the relevant area, as shown in FIG. 2(i). Similar processes may be applied to any analogous scenario in which layers of interest define boundaries for unwanted data.” Additionally, Al-Hashmy teaches noise removal in detail in 0040.);
extracting poles from the signal data (Figs. 2(g) and 2(h) show selecting discontinuities in the object of interest by analyzing the signal matrix data and extracting poles (areas with higher amplitudes) in the object of interest. [0063] "To assist in the detection of a defect in the layer of interest, a location of a gap in the data forming the cluster identified as the layer of interest is identified as a site of a potential defect. As shown by the dashed lines in FIG. 2(g) the position of missing data, i.e. discontinuities or gaps in the layer of interest, are noted. Given a cluster C corresponding to a layer of interest, every range of values in the layer axis (i.e. the axis parallel to the layers with respect to the bounding boxes) which contains no pixels from any objects in C is added to a list LC·=[GC,0, GC,1, . . . , GC,g], where each GC,i is a gap, and g is the total number of gaps. This may be repeated for every layer of interest."); and
dividing the signal data into a plurality of clusters having a certain size based on the pole ([0065] “Missing data, i.e. discontinuities in a layer, may be presented by drawing any shape that encloses the corresponding range in list LC, for example by drawing vertical lines either side of the gap or by drawing a circle with the centre of the range as its centroid and the length of the range as its diameter. Gap boundaries may be drawn in any thickness, color, and may or may not be filled.” [0066] “The annotated image may be displayed on screen, saved to file, printed or presented by any other means or combination of means.”).
Regarding claim 12, Townsend and Al-Hashmy teach the ultrasonic flaw-detection method of claim 11. Townsend further teaches wherein the selecting a defect candidate group comprises determining whether a defect is included in the signal data belonging to each cluster based on a deep learning algorithm that uses each of the plurality of clusters as an input (Townsend teaches clustering areas in the signal matrix based on their likelihood to contain defects. In Fig. 2(e), resulting clusters are shown, and cluster 1 is selected for examination since it is the most relevant area to check for defects. Townsend also teaches selecting data gaps within a cluster from the matrix signal data as defect candidate locations. This process can be applied to an ultrasonic image or signal matrix, but after identification of defect candidate locations, the locations are visualized as an image for an operator to inspect defects. [0065-0066] "Missing data, i.e. discontinuities in a layer, may be presented by drawing any shape that encloses the corresponding range in list LC, for example by drawing vertical lines either side of the gap or by drawing a circle with the centre of the range as its centroid and the length of the range as its diameter. Gap boundaries may be drawn in any thickness, color, and may or may not be filled. The annotated image may be displayed on screen, saved to file, printed or presented by any other means or combination of means." [0082-0083] "In this case the user also wishes to identify gaps in the back wall. Every range of values in the x-axis (the layer axis) which contains no pixels from any objects in the cluster corresponding to the layer of interest, as shown in FIG. 5(h), is added to a list L1=[G1,0, G1,1, G1,2]=[[50, 60], [300, 305], [610, 630]]. The location of the back wall, and its gaps, are presented to the user, as shown in FIG. 5(i). A box is drawn around the back wall, and vertical lines are drawn on either side of each gap.").
Regarding claim 14, Townsend and Al-Hashmy teach the ultrasonic flaw-detection method of claim 10. Townsend teaches detecting defect candidate clusters in ultrasonic signal matrix data and then converting the signal to ultrasound images for further defect detection, and Al-Hashmy further teaches wherein the generating image data comprises generating a B-Scan image data and a C-Scan image data on the detection target, based on the signal data ([0039] "In a non-limiting embodiment, the solution can work with UT scan image data, such as, for example, C-scan image data. The UT image data can include, for example, A-scan ultrasound image, B-scan ultrasound image data, 0-degree advanced C-scan image data, angled C-scan image data, or D-scan ultrasound image data.").
Therefore, it would have been obvious to one of ordinary skill in the art to modify Townsend's invention by using a B-Scan and a C-Scan image. Townsend's method is applied to ultrasound signal matrix data and/or image data (see the input of Figure 1(b)), and Townsend teaches converting the signal matrix to image data for further human analysis [0065-0066]. Al-Hashmy teaches performing the further analysis using machine learning on B-Scan and/or C-Scan images. Thus, any person of ordinary skill in the art could reasonably be expected to utilize B-Scan or C-Scan images when combining the inventions of Townsend and Al-Hashmy (see the rationale of claim 10), because converting signal matrix data into B-Scan and C-Scan images is a well-known and routine process in the art.
Regarding claim 15, Townsend and Al-Hashmy teach the ultrasonic flaw-detection method of claim 14. Townsend further teaches wherein the generating image data comprises generating the image data on an area in which the defect candidate group is included in the detection target, based on the signal data included in the defect candidate group (Townsend teaches clustering areas in the signal matrix based on their likelihood to contain defects. In Fig. 2(e), resulting clusters are shown, and cluster 1 is selected for examination since it is the most relevant area to check for defects. Townsend also teaches selecting data gaps within a cluster from the matrix signal data as defect candidate locations. This process can be applied to an ultrasonic image or signal matrix, but after identification of defect candidate locations, the locations are visualized as an image for an operator to inspect defects. [0065-0066] "Missing data, i.e. discontinuities in a layer, may be presented by drawing any shape that encloses the corresponding range in list LC, for example by drawing vertical lines either side of the gap or by drawing a circle with the centre of the range as its centroid and the length of the range as its diameter. Gap boundaries may be drawn in any thickness, color, and may or may not be filled. The annotated image may be displayed on screen, saved to file, printed or presented by any other means or combination of means." [0082-0083] "In this case the user also wishes to identify gaps in the back wall. Every range of values in the x-axis (the layer axis) which contains no pixels from any objects in the cluster corresponding to the layer of interest, as shown in FIG. 5(h), is added to a list L1=[G1,0, G1,1, G1,2]=[[50, 60], [300, 305], [610, 630]]. The location of the back wall, and its gaps, are presented to the user, as shown in FIG. 5(i). A box is drawn around the back wall, and vertical lines are drawn on either side of each gap.").
However, Townsend is not specific about what type of scan image to generate; thus, Townsend fails to teach specifically utilizing B-Scan image data and C-Scan image data. Al-Hashmy, however, teaches utilizing B-Scan image data and C-Scan image data ([0039] "In a non-limiting embodiment, the solution can work with UT scan image data, such as, for example, C-scan image data. The UT image data can include, for example, A-scan ultrasound image, B-scan ultrasound image data, 0-degree advanced C-scan image data, angled C-scan image data, or D-scan ultrasound image data.").
Therefore, it would have been obvious to one of ordinary skill in the art to modify Townsend's invention by using a B-Scan and a C-Scan image. Townsend's method is applied to ultrasound signal matrix data and/or image data (see the input of Figure 1(b)), and Townsend teaches converting the signal matrix to image data for further human analysis [0065-0066]. Al-Hashmy teaches performing the further analysis using machine learning on B-Scan and/or C-Scan images. Thus, any person of ordinary skill in the art could reasonably be expected to utilize B-Scan or C-Scan images when combining the inventions of Townsend and Al-Hashmy (see the rationale of claim 1), because converting signal matrix data into B-Scan and C-Scan images is a well-known and routine process in the art.
Regarding claim 16, Townsend and Al-Hashmy teach the ultrasonic flaw-detection method of claim 10. Al-Hashmy further teaches wherein the determining whether there is a defect in the defect candidate group based on the image data comprises determining whether there is a defect in each of the defect candidate groups based on a deep learning algorithm using the image data as an input ([0060] “The ADS system 100 can include at least one machine learning platform. The ADS system 100 includes a bus 105, a processor 110 and a storage 120. The ADS system 100 can include a network interface 130, an input-output (TO) interface 140, a driver unit 150, an aberration detection and evaluation (ADE) stack 160…” [0061] “The ADE stack 160 can include a feature extraction unit 162, a classification unit 164, an aberration predictor 166, and a labeler unit 168. The ADE stack 160 can include a machine learning (ML) platform, including, for example, one or more feedforward or feedback neural networks. The ML platform can include, for example, an artificial neural network (ANN), a convolutional neural network (CNN), a deep convolutional neural network (DCNN), a recurrent convolutional neural network (RCNN), a Mask-RCNN,…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Townsend's invention by using machine learning for automating the step of defect detection. Townsend teaches automatically identifying defect candidate groups in signal matrix data and generating ultrasonic images for human inspection, but utilizing a machine learning technique to analyze the image would save additional time and reduce overall effort, which is a purpose of Townsend's invention ([Townsend 0012] "By automating the initial part of the inspection process in this way, quality control engineers/technicians can complete their inspections in more efficient ways, taking less inspection time per test object and hence reducing the overall human effort and costs."). Additionally, Al-Hashmy shares this motivation by identifying the need for a cost-effective technology solution for defect detection ([Al-Hashmy 0004] "Since both metallic and non-metallic assets are commonly used in a variety of industries, there exists a great unfulfilled need for a cost-effective and reliable technology solution for inspecting, detecting, monitoring, analyzing or assessing aberrations in either or both metallic or nonmetallic assets.").
Regarding claim 17, Townsend and Al-Hashmy teach the ultrasonic flaw-detection method of claim 16. Al-Hashmy further teaches wherein the deep learning algorithm is a you only look once (YOLO) algorithm or a Faster R-CNN algorithm ([0061] “The ADE stack 160 can include a machine learning (ML) platform, including, for example, one or more feedforward or feedback neural networks. The ML platform can include, for example, an artificial neural network (ANN), a convolutional neural network (CNN), a deep convolutional neural network (DCNN), a recurrent convolutional neural network (RCNN), a Mask-RCNN…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Townsend’s invention by automating the step of defect detection (See the rationale applied to claim 16 above). Additionally, the choice of a YOLO, R-CNN, or any other machine learning algorithm would be obvious to one of ordinary skill applying machine learning to analyze defects in ultrasonic images. As shown in 0061 of Al-Hashmy, many different machine learning options could be employed for this task.
Regarding claim 18, Townsend and Al-Hashmy teach the ultrasonic flaw-detection method of claim 16. Al-Hashmy further teaches wherein the determining whether there is a defect in the defect candidate group based on the image data comprises outputting whether there is a defect in each of the defect candidate groups and outputting, when there is a defect, a bounding box that surrounds the corresponding defect ([0079] “The aberration predictor 166 can be arranged to receive the resultant image cells and predict aberrations that might exist in the asset 10, including, for example, on an outer surface, in a wall portion, or an inner surface of the asset 10. The aberration predictor 166 can generate a confidence score for each image cell that indicates the likelihood that a bounding box includes an aberration. The aberration predictor 166 can interact with the classification unit 164 and perform bounding box classification, refinement and scoring based on the aberrations in the image represented by the UT image data.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Townsend's invention by automating the step of defect detection (See the rationale applied to claim 16 above). Additionally, the choice of utilizing a bounding box would have been obvious to one of ordinary skill implementing a CNN for detecting a defect area in an image. Townsend teaches annotating images to visually indicate areas with defects [0064-0067].
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Townsend (US 2018/0315180 A1) and Al-Hashmy (US 2022/0018811 A1), and further in view of Park et al. (System Invariant Method for Ultrasonic Flaw Classification in Weldments Using Residual Neural Network. Appl. Sci. 2022, 12, 1477.), hereafter Park.
Regarding claim 4, Townsend and Al-Hashmy teach the ultrasonic flaw-detection system of claim 3. Townsend teaches that portions of interest where defects are likely present in the ultrasonic signal matrix data are automatically determined, and Al-Hashmy teaches that many different machine learning models are applicable for finding defects in ultrasonic images in 0061. However, neither Townsend nor Al-Hashmy specifically teaches wherein the deep learning algorithm is a variational auto encoder (VAE) or a residual neural network (ResNet) for finding defect candidate locations in signal data.
However, Park teaches wherein the deep learning algorithm is a variational auto encoder (VAE) or a residual neural network (ResNet) (See all of section 4 and 5.3-5.4 for discussions of the ResNet architecture and performance. In this study, a ResNet was applied to receive ultrasonic signal data and classify the echo into a defect group to identify if a defect is present. The defect groups included: crack, lack of fusion, slag inclusion, porosity, and incomplete penetration.).
Townsend, Al-Hashmy, and Park are analogous art to the claimed invention, because all teach methods of analyzing ultrasonic signals and/or images for determining the presence of defects in an object. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Townsend's invention by utilizing a ResNet for automating the process of defect detection. Townsend teaches automatically identifying defect candidate groups in signal matrix data, but utilizing a machine learning technique specifically to analyze the signal data would save additional time and reduce overall effort, which is a purpose of Townsend's invention ([Townsend 0012] "By automating the initial part of the inspection process in this way, quality control engineers/technicians can complete their inspections in more efficient ways, taking less inspection time per test object and hence reducing the overall human effort and costs."). Additionally, Al-Hashmy shares this motivation by identifying the need for a cost-effective technology solution for defect detection ([Al-Hashmy 0004] "Since both metallic and non-metallic assets are commonly used in a variety of industries, there exists a great unfulfilled need for a cost-effective and reliable technology solution for inspecting, detecting, monitoring, analyzing or assessing aberrations in either or both metallic or nonmetallic assets."). Regarding the use of a ResNet specifically, Park motivates the use of a ResNet over traditional CNN architectures (used by Al-Hashmy) since deep ResNets can offer better performance ([Park Section 4.2] "Conventional CNNs feature a weak spot in significantly deep networks, as their performance degrades after a certain depth. However, significantly deep networks are required to train with large amounts of data. ResNets solve this problem through their distinct network architecture as previously described, obtaining the best results by training 152 layers of ResNet using the CIFAR-10 dataset.").
Note to Applicant: Although not utilized in the rejections in this office action, Ha (US 2024/0053302 A1), mentioned in the conclusion of this office action, should also be considered if future amendments are made to claims 4 or 13. Ha teaches the use of autoencoders for identifying defects in ultrasonic B-Scans and could reasonably be applied to the inventions of Townsend and Al-Hashmy by one of ordinary skill in the art similarly to how the ResNet is applied in the rationale above.
Regarding claim 13, Townsend and Al-Hashmy teach the ultrasonic flaw-detection method of claim 12. Townsend teaches that portions of interest where defects are likely present in the ultrasonic signal matrix data are automatically determined, and Al-Hashmy teaches that many different machine learning models are applicable for finding defects in ultrasonic images in 0061. However, neither Townsend nor Al-Hashmy specifically teaches wherein the deep learning algorithm is a variational auto encoder (VAE) or a residual neural network (ResNet) for finding defect candidate locations in signal data.
However, Park teaches wherein the deep learning algorithm is a variational auto encoder (VAE) or a residual neural network (ResNet) (See all of section 4 and 5.3-5.4 for discussions of the ResNet architecture and performance. In this study, a ResNet was applied to receive ultrasonic signal data and classify the echo into a defect group to identify if a defect is present. The defect groups included: crack, lack of fusion, slag inclusion, porosity, and incomplete penetration.).
Townsend, Al-Hashmy, and Park are analogous art to the claimed invention, because all teach methods of analyzing ultrasonic signals and/or images for determining the presence of defects in an object. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Townsend's invention by utilizing a ResNet for automating the process of defect detection. Townsend teaches automatically identifying defect candidate groups in signal matrix data, but utilizing a machine learning technique specifically to analyze the signal data would save additional time and reduce overall effort, which is a purpose of Townsend's invention ([Townsend 0012] "By automating the initial part of the inspection process in this way, quality control engineers/technicians can complete their inspections in more efficient ways, taking less inspection time per test object and hence reducing the overall human effort and costs."). Additionally, Al-Hashmy shares this motivation by identifying the need for a cost-effective technology solution for defect detection ([Al-Hashmy 0004] "Since both metallic and non-metallic assets are commonly used in a variety of industries, there exists a great unfulfilled need for a cost-effective and reliable technology solution for inspecting, detecting, monitoring, analyzing or assessing aberrations in either or both metallic or nonmetallic assets."). Regarding the use of a ResNet specifically, Park motivates the use of a ResNet over traditional CNN architectures (used by Al-Hashmy) since deep ResNets can offer better performance ([Park Section 4.2] "Conventional CNNs feature a weak spot in significantly deep networks, as their performance degrades after a certain depth. However, significantly deep networks are required to train with large amounts of data. ResNets solve this problem through their distinct network architecture as previously described, obtaining the best results by training 152 layers of ResNet using the CIFAR-10 dataset.").
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kimura et al. (US 2022/0349386 A1) teaches methods for analyzing the temporal changes in intensity in back echoes of ultrasonic signal for determining the presence of defects in wind turbine blades.
Ha et al. (Autoencoder-based detection of near-surface defects in ultrasonic testing. Ultrasonics. Volume 119.) teaches methods for utilizing an autoencoder to analyze ultrasonic data and determine the presence of defects. Additionally, Ha et al. (US 2024/0053302 A1) further teaches this method.
Qi (CN 109507304 B) teaches methods for analyzing ultrasonic echo signals to determine the presence of defects in an object. The method includes identifying the peaks and troughs of the signal, removing noise, and analyzing segments of the echo signals.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC JAMES SHOEMAKER whose telephone number is (571)272-6605. The examiner can normally be reached Monday through Friday from 8am to 5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JENNIFER MEHMOOD, can be reached at (571)272-2976. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Eric Shoemaker/
Patent Examiner
/JENNIFER MEHMOOD/ Supervisory Patent Examiner, Art Unit 2664