Prosecution Insights
Last updated: April 19, 2026
Application No. 18/524,040

METHOD OF IDENTIFYING NON-CONFORMANCE OF A COMPONENT

Status: Non-Final OA (§103)
Filed: Nov 30, 2023
Examiner: ANSARI, TAHMINA N
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Rolls-Royce
OA Round: 1 (Non-Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (743 granted / 868 resolved; +23.6% vs Tech Center average)
Interview Lift: +17.9% higher allowance for resolved cases with an interview
Avg Prosecution: 2y 8m (33 applications currently pending)
Total Applications: 901 across all art units

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 868 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

Claims 1-11 are pending in this application. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 1-4 and 6-11 are rejected under 35 U.S.C. 103 as being unpatentable over Steibner et al. (US PGPub 2022/0358709, filed April 19, 2022, with provisional priority to May 5, 2021; hereinafter "Steibner"), in view of Ram et al. (US PGPub 2021/0350114 A1, filed May 11, 2021; hereinafter "Ram").

Consider Claim 1. Steibner teaches: 1.
A method of identifying non-conformance of a manufactured component, the method comprising: (Steibner: Examples described herein provide a method that includes obtaining, by a processing device, three-dimensional (3D) voxel data. The method further includes performing, by the processing device, gray value thresholding based at least in part on the 3D voxel data and assigning a classification value to at least one voxel of the 3D voxel data. The method further includes defining, by the processing device, segments based on the classification value. The method further includes filtering, by the processing device, the segments based on the classification value. The method further includes evaluating, by the processing device, the segments to identify a surface voxel per segment. The method further includes determining, by the processing device, a position of a surface point within the surface voxel. [0024], [0044]-[0051], Figure 1 , [0045] Referring now to FIG. 1, an embodiment of a CT system 100 for inspecting objects (or “specimen”), such as the specimen S of FIG. 1. It should be appreciated that while embodiments herein may illustrate or describe a particular type of CT system, this is for example purposes and the claims should not be so limited. In other embodiments, other types of CT systems having another trajectory, detector shape, or beam geometry, such as a fan-type or a cone-type CT system, for example, may also be used. The CT system 100 includes an inspection processing device 102, an x-ray source 104, a placement unit 106, a detector 108, a control device 110, a display 112, a memory 113 for storing computer-readable instructions and/or data, and an input operation unit 114) 1. 
performing a volumetric scan to generate a 3-dimensional (3D) image of a volume of space containing the component, the 3D image comprising a plurality of voxels, (Steibner: [0051] The inspection processing device 102 receives an input from the input operation unit 114, which is configured by an input device (e.g. keyboard, various buttons, a mouse) and is used by the operator to control the operation of the CT system 100. The inspection processing device 102 causes the control device 110 to implement actions indicated by the input received by the input operation unit 114. The control device 110 is a microprocessor-based system that controls different modules of the CT system 100. The control device 110 includes an x-ray control unit 130, a movement control unit 132, an image generation unit 134, and an image reconstruction unit 136. The x-ray control unit 130 controls the operation of the x-ray source 104. The movement control unit 132 controls the movement of the manipulator unit 120. The image generation unit 134 generates x-ray projected image data for the specimen S based on an output signal from detector 108. The image reconstruction unit 136 performs image reconstruction processing that creates a reconstructed image based on the projector image data for specimen S from each different projection direction as is known in the art. [0052] The reconstructed image is an image illustrating the structure of the interior and exterior of the specimen S that is positioned in between the x-ray source 104 and the detector 108. In an embodiment, the reconstructed image is output as voxel data (also referred to as “CT voxel data”). The voxel data is an absorption coefficient distribution of the specimen S. According to one or more embodiments described herein, the CT system 100 can be a fan-type or a cone-type CT system. In an embodiment, back projection, filtered back projection, and iterative reconstruction may be used in image reconstruction processing.) 1. 
each voxel representing a sub-volume within the volume of space and assigned a voxel value relating to properties of the material in the sub-volume; (Steibner: [0053] Metrology is the science of measurement. In order to use industrial x-ray CT for performing metrology tasks, such as dimensional measurements, a surface model needs to be extracted from 3D voxel data. For example, in order to perform metrology analyses on voxel data created by CT scanning (such as using the CT system 100), a surface needs to be determined from the voxel data. That is, surface points need to be identified from 3D voxel data captured by the CT system 100 where information is stored as 16-bit gray values for each voxel. The gray value represents the absorption capabilities for x-ray radiation of a volume in a particular position. For single-material data sets, there are typically two specific gray values, namely one for the material and one for the background surrounding the material. Due to the finite resolution of the CT scans, voxels at the interface between background and material can have a gray value between the gray value for the background and the gray value for the material. The gray value of the voxels at the interface between background and material can vary depending on the percentage of the volume of the voxel that belongs to material. The interface between background and material is located at the position of highest gray value change. Figure 5, [0068]-[0069]) 1. generating a component histogram giving the frequency of each voxel value within the volume of space; (Steibner: [0067]-[0069], Figure 5; [0068] At block 502, the processing system 300 generates a gray value histogram for the 3D voxel data. FIG. 6 depicts an example gray value histogram 600 according to one or more embodiments described herein. It should be appreciated that the gray value histogram 600 is one example of a gray value histogram and other examples are also possible. 
The gray value histogram 600 plots gray values (x-axis) against counts of gray values (y-axis), as shown by the curve 601. That is, the curve 601 represents a count of gray values for a particular gray value. [0069] With continued reference to FIG. 5, at block 504, the processing system 300 identifies peaks in the gray value histogram. For a 3D voxel data set created from a single-material object, the gray value histogram 600 exhibits two distinct peaks: a background gray value peak and material gray value peak. For example, as shown in FIG. 6, the curve 601 of FIG. 6 includes two peaks: a first (background) gray value peak 602 and a second (material) gray value peak 603. The width of each of the peaks 602, 603 is due to noise of the 3D data (which can be seen in the 2D slice 200 of FIG. 2)) 1. and comparing the component histogram to a template histogram, and identifying differences between the component histogram and the template histogram, and based on the comparison, determining whether there is a defect in the component. (Examiner Note: the classification values used to define segments is analogous in scope to a template histogram Steibner: [0071] With continued reference to FIG. 5, once the thresholds 606, 607 are identified, at block 508, the processing system 300 tags (i.e., classifies) voxels according to their gray values. That is, voxels are tagged (i.e., classified) based on a gray value associated with each of the voxels and assigned a classification value. 
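As an editorial illustration (not part of the cited reference), the two-peak gray value histogram Steibner describes can be sketched in Python; the synthetic volume and the half-range argmax peak search below are assumptions:

```python
import numpy as np

def gray_value_histogram(voxels, bins=256):
    """Histogram of voxel gray values (cf. Steibner's Fig. 6)."""
    counts, edges = np.histogram(voxels.ravel(), bins=bins)
    return counts, edges

def two_peaks(counts):
    """Return bin indices of the background and material peaks.

    Assumes a bimodal histogram: one peak in the lower half of the
    gray value range (background) and one in the upper half (material).
    """
    mid = len(counts) // 2
    background_peak = int(np.argmax(counts[:mid]))
    material_peak = mid + int(np.argmax(counts[mid:]))
    return background_peak, material_peak

# Synthetic single-material volume: dark background, bright material block.
rng = np.random.default_rng(0)
vol = rng.normal(50.0, 5.0, (32, 32, 32))
vol[8:24, 8:24, 8:24] = rng.normal(200.0, 5.0, (16, 16, 16))
counts, edges = gray_value_histogram(vol)
bg_peak, mat_peak = two_peaks(counts)
```

The peak widths come from scan noise, as the reference notes; a real implementation would smooth the histogram before searching for peaks.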
For example, voxels having a gray value less than (<) the lower threshold (i.e., the threshold 606) are tagged as being background, denoted by a classification value “0.” Voxels having a gray value greater than (>) the upper threshold (i.e., the threshold 607) are tagged as being material, denoted by a classification value “2.” Voxels having a gray value between the lower threshold and the upper threshold are candidates for the surface (i.e., material boundary) and are denoted with the classification value “1.” [0072] Returning to the discussion of FIG. 4, at block 406, the processing system 300 defines segments based on the classification value from block 508 of FIG. 5. Segments are sequences of voxels tagged with a classification value of “1” in any dimension of the 3D array (x, y, z) with at least one voxel tagged with a classification value of “1.” An example is shown in FIG. 7, which depicts an example portion 700 of a slice of a CT volume according to one or more embodiments described herein.) Even if Steibner does not teach: 1. and comparing the component histogram to a template histogram, and identifying differences between the component histogram and the template histogram, and based on the comparison, determining whether there is a defect in the component. Ram teaches: 1. A method of identifying non-conformance of a manufactured component, the method comprising: (Ram: abstract, A system and a method for detecting a pile of material by an autonomous machine. 
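The 0/1/2 tagging scheme quoted above maps directly onto array operations. A minimal sketch (the threshold values are illustrative placeholders, not taken from the reference):

```python
import numpy as np

def tag_voxels(voxels, lower, upper):
    """Classify voxels per Steibner [0071]: "0" = background (below the
    lower threshold), "2" = material (above the upper threshold), and
    "1" = surface candidate (between the two)."""
    tags = np.ones(voxels.shape, dtype=np.uint8)  # default: candidate "1"
    tags[voxels < lower] = 0                      # background
    tags[voxels > upper] = 2                      # material
    return tags

# Illustrative 2x2 slice with placeholder thresholds.
slice_ = np.array([[10.0, 90.0], [160.0, 250.0]])
tags = tag_voxels(slice_, lower=60.0, upper=200.0)
# tags is [[0, 1], [1, 2]]
```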
A 3D point cloud method includes obtaining a 3D point cloud indicative of an environment having a material pile, performing a ground surface estimation on the point cloud to identify non-ground points, grouping the non-ground points into clusters based on a proximity metric, creating a normalized height histogram for each of the clusters, comparing the normalized height histogram of each cluster to a generalized pile histogram, and identifying a cluster as a pile based on the similarity between the normalized height histogram and the generalized pile histogram. A 2D image method includes obtaining a 2D image from an imaging device, calibrating the imaging device with respect to a coordinate frame of the machine, and autonomously detecting an image of a material pile in the two-dimensional image using a trained deep-learning neural network.) 1. performing a volumetric scan to generate a 3-dimensional (3D) image of a volume of space containing the component, the 3D image comprising a plurality of voxels, (Ram: [0022] Referring to FIG. 2, the machine 10 includes a system 40 for autonomously detecting material piles 34. The system 40 includes one or more sensors 42 configured to provide a two-dimensional (2D) image of an environment having a pile of material 34 and/or a three-dimensional (3D) map space representation indicative of an environment having a pile of material 34. For example, as shown in FIG. 2, the machine 10 may include a 2D imaging device 44, such as a mono camera, thermal camera, video camera, stereo camera, or some other imaging device (e.g., an image sensor). The machine 10 may also include a 3D sensing device 46, such as for example, a LIDAR (light detection and ranging) device, a RADAR (radio detection and ranging) device, a SONAR (sound navigation and ranging) device, a stereo camera, or any other device capable of providing a 3D map space representation (i.e. a 3D point cloud) indicative of an environment having a material pile 34. 
The one or more sensors 42 may be any suitable device known in the art for creating a 2D image and/or a 3D map space representation, or other suitable output for use in detecting a pile. [0023] The one or more sensors 42 may generate the image and/or a 3D map space representation of the material pile 34 and communicate the image and/or a 3D map space representation to an on-board processor 48 for subsequent conditioning. The one or more sensors 42 are communicatively coupled to the processor 48 in any suitable manner. The processor 48 may be configured in a variety of ways. The processor 48 may embody a single microprocessor or multiple microprocessors. The processor 48 may be dedicated to the function of pile detection or may provide additional functionality to the machine 10, such as an engine control module (ECM).) 1. generating a component histogram giving the frequency of each voxel value within the volume of space; (Ram: [0039] In step 210, for each cluster, the processor 48 calculates a histogram of the normalized height of the points from the ground surface. For example, the lowest point in the cluster is assigned a value of zero and the highest point in the cluster is assigned a value of 1. All the points with heights between the highest and lowest point are assigned numeric values between zero and 1 in proportion to their height relative to the highest and lowest points. A histogram of the normalized height data is then created.[0040] In step 212, the processor 48 compares the histogram to a pile descriptor template that has been previously created and is accessible by the processor 48. Regarding the pile descriptor template, material piles generally have a similar shape regardless of the type of material. Therefore, the normalized height histogram of one material pile is generally similar to the normalized height histogram of other material piles. 
As such, a generalized normalized histogram of a material pile is created from previous data collected on other piles and made available to the processor 48. The generalized pile histogram is used as a representation of most piles (i.e., a pile descriptor template). In some embodiments, multiple generalized pile histograms can be created and each can be compared to the histogram of the normalized height of the points from the ground surface for each cluster.) 1. and comparing the component histogram to a template histogram, and identifying differences between the component histogram and the template histogram, and based on the comparison, determining whether there is a defect in the component. (Ram: [0040] In step 212, the processor 48 compares the histogram to a pile descriptor template that has been previously created and is accessible by the processor 48. Regarding the pile descriptor template, material piles generally have a similar shape regardless of the type of material. Therefore, the normalized height histogram of one material pile is generally similar to the normalized height histogram of other material piles. As such, a generalized normalized histogram of a material pile is created from previous data collected on other piles and made available to the processor 48. The generalized pile histogram is used as a representation of most piles (i.e., a pile descriptor template). In some embodiments, multiple generalized pile histograms can be created and each can be compared to the histogram of the normalized height of the points from the ground surface for each cluster. [0041] As the output of the comparison of the histogram of the normalized height of the points from the ground surface for each cluster to the pile descriptor template, the processor 48 calculates a similarity score. A similarity score is a way to represent how confident the algorithm is that the normalized height histogram matches the descriptor template. 
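Ram's steps 210-212 (normalize cluster heights, histogram them, compare to a template) can be sketched as follows; the reference does not name a similarity metric, so histogram intersection is used here purely as an assumed example:

```python
import numpy as np

def normalized_height_histogram(heights, bins=10):
    """Ram step 210: scale cluster heights to [0, 1], then histogram."""
    h = np.asarray(heights, dtype=float)
    span = h.max() - h.min()
    norm = (h - h.min()) / span if span > 0 else np.zeros_like(h)
    counts, _ = np.histogram(norm, bins=bins, range=(0.0, 1.0))
    return counts / counts.sum()  # relative frequencies

def similarity_score(hist, template):
    """Ram step 212: a 0-100 score; histogram intersection is an assumed
    choice, since the reference leaves the metric open."""
    return 100.0 * float(np.minimum(hist, template).sum())

# Identical histograms should score at (or numerically near) 100.
cluster_heights = [0.0, 0.4, 0.9, 1.3, 2.0]
hist = normalized_height_histogram(cluster_heights)
score = similarity_score(hist, hist)
```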
For example, a similarity score may be a numerical value indicating how well the normalized histogram matches the pile descriptor template. In one example, the similarity score can be a value between 0 to 100, where 0 represents not matching at all, while 100 represents a perfect match. Calculating a similarity score between data sets is known in the art. Any suitable methodology for calculating a similarity score between the histogram of the normalized height of the points from the ground surface for each cluster and the pile descriptor template may be used. [0042]-[0045]) It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to apply the teachings of Steibner for leveraging 3D volumetric data for surface determination within Ram’s field of endeavor for autonomous detection and determination of piles. The determination of obviousness is predicated upon the following findings: Steibner’s teachings would be a known improvement for the overall analysis of 3D volumetric data and one skilled in the art would have been motivated to modify Steibner in order to apply the improvement to the field of pile or workpiece defect detection as proposed by Ram. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and/or programming techniques, without changing a “fundamental” operating principle of Steibner, while the teaching of Ram continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of leveraging volumetric image analysis for defect detection in piles. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question. Consider Claim 2. 
The combination of Steibner and Ram teaches: 2. A method according to claim 1, wherein the comparison comprises comparing a characteristic of the component histogram and the template histogram, wherein the characteristic comprises any of: peak frequencies of voxel value; integrated total area; standard deviation of voxel value; height of peak frequencies; mean voxel value; width of a frequency bell curve; gradient of a frequency bell curve; median voxel value; and/or onset of a curve up to a peak. (Ram: [0033] Once a material pile 34 is detected in the image frame, the centroid (center) of the material pile 34 is calculated by averaging the position of all the pixels that belong to the material pile 34. The position can be transformed using the camera projection matrix and the extrinsic calibration, into the machine frame, so that the autonomy system would be able to know the position and location of the material pile 34, with respect to the machine 10. [0034] FIG. 6 illustrates another exemplary method 200 used by the machine 10 for detecting the material pile 34. The method 100 uses a system including the 3D sensing device 46 and the processor 48 on the machine 10 with support to perform accelerated computing. The method 200 uses a geometric technique for detecting a pile based on a 3D point cloud. A 3D point cloud is a set of data points (XYZ coordinates) in space. Steibner: [0056]-[0057], [0070]-[0072], [0056] Conventional approaches to addressing these problems with surface determination include global thresholding and local thresholding. Global thresholding creates a histogram of gray values to identify gray values for background and material from peaks. To do this, the value is calculated using an ISO50 threshold approach, which calculates an ISO50 value by averaging the gray value for background and material accordingly. Then, 3D linear interpolation is used between voxel centers to find all points with a gray value equal to the ISO50 value.
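The conventional ISO50 approach Steibner criticizes is simple enough to state in one line: the surface threshold is the midpoint of the background and material gray values. A sketch, with illustrative values:

```python
def iso50_threshold(background_gray, material_gray):
    """Conventional ISO50 surface threshold (Steibner [0056]): the
    average of the background and material gray values."""
    return 0.5 * (background_gray + material_gray)

# E.g. background peak at gray value 50, material peak at 200:
threshold = iso50_threshold(50.0, 200.0)  # -> 125.0
```

As the reference points out, this global average is susceptible to outlier bias and ignores local variation, which motivates the knee-point thresholds used instead.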
This approach is imprecise because it only uses the average value, which is susceptible to outlier bias and does not consider local variations.) Consider Claim 3. The combination of Steibner and Ram teaches: 3. A method according to claim 1, wherein determining that there is a defect is based on the comparison of the characteristic of the component histogram to the template histogram showing a difference beyond a threshold. (Ram: [0042] In step 214, the processor 48 identifies the clusters with the highest similarity scores and treats those clusters as being material piles. In some embodiments, a similarity score threshold may be set and the processor compares the similarity score of each cluster with the similarity score threshold. Any cluster having a similarity score at or greater than threshold is considered by the processor 48 to be a material pile. [0043] In step 216, for the clusters identified as being material piles, the location of the material pile 34 is determined. The location of the piles may be determined in any suitable manner. For example, the centroid (center) of the pile can be calculated and then transformed, if needed, into the machine or world frame. For example, for 3D cloud point data, each of the points has 3D cartesian coordinates (x,y,z) in space that allow the location of these points to be readily identified. Depending on which coordinate frame the 3D point data is in, a transformation to the world frame or machine frame may be needed. If the 3D point data is in the world frame or machine frame, no transformation would need to be done to allow the location of the piles to be determined. If, however, the 3D point data is in the sensing device frame, then an extrinsic calibration with respect to the coordinate frame of the machine 10 can be measured using a device such as a Universal Total Station. Steibner: Once the peaks 602, 603 are identified, thresholds can be identified. For example, at block 506 of FIG. 
5, the processing system 300 identifies a lower (first) threshold 606 and an upper (second) threshold 607, which can be used to filter out non-surface voxels. The thresholds 606, 607 are set based on knee points of the peaks 602, 603. The knee points of the peaks 602, 603 represent a point of maximum curvature, which is a mathematical measure of how much a function differs from a straight line. The knee points are shown in FIG. 6 as thresholds 606, 607. The knee points can be determined, for example, using the “kneedle algorithm.” A mathematical definition of curvature for continuous functions is defined using the following equation: for any continuous function f, there exists a standard closed-form Kf(x) that defines the curvature of the continuous function f, at any point as a function of its first and second derivative: [0071] With continued reference to FIG. 5, once the thresholds 606, 607 are identified, at block 508, the processing system 300 tags (i.e., classifies) voxels according to their gray values. That is, voxels are tagged (i.e., classified) based on a gray value associated with each of the voxels and assigned a classification value. For example, voxels having a gray value less than (<) the lower threshold (i.e., the threshold 606) are tagged as being background, denoted by a classification value “0.” Voxels having a gray value greater than (>) the upper threshold (i.e., the threshold 607) are tagged as being material, denoted by a classification value “2.” Voxels having a gray value between the lower threshold and the upper threshold are candidates for the surface (i.e., material boundary) and are denoted with the class) Consider Claim 4. The combination of Steibner and Ram teaches: 4. A method according to claim 3, wherein the threshold is determined dependent on the geometry of the component and/or the materials within the component. (Ram: [0034] FIG. 
6 illustrates another exemplary method 200 used by the machine 10 for detecting the material pile 34. The method 100 uses a system including the 3D sensing device 46 and the processor 48 on the machine 10 with support to perform accelerated computing. The method 200 uses a geometric technique for detecting a pile based on a 3D point cloud. A 3D point cloud is a set of data points (XYZ coordinates) in space. [0036] FIG. 7 illustrates an example of a 3D point cloud 300 of same scene as shown in annotated image 120. In particular, the 3D point cloud depicts the base ground surface 121, the first material pile 122, the second material pile 124, the person standing 126, and the boulder 128. Once a 3D point cloud is obtained, a ground surface estimation (GSE) can be performed on the 3D point cloud, at step 204. GSE is known in the art. Any suitable GSE technique or algorithm may be used. Generally, a GSE algorithm identifies and extracts the major ground surface from a 3D point cloud via a series of mathematical and geometric calculations such as normal calculation, unevenness estimation, plane fitting, and random sampling consensus (RANSAC). [0037] In step 206, the output of the GSE assigns a label to the points which are considered ground and a different label to the points which are considered non-ground (i.e., everything above or below the ground surface). FIG. 8 is an example of the output of the GSE 302 for the 3D point cloud of FIG. 7. In FIG. 8, the ground points 304 are shown in a lighter shade or color, while the non-ground points 306 are shown in a darker shade or color. [0044], [0046], [0044] In some embodiments, the machine 10 may utilize a combination or fusion of both the method 100 which involves inputting 2D images into the deep learning neural network trained to perform the task of pile detection and the method 200 which uses a geometric technique for detecting a pile based on a 3D point cloud. 
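The knee-point definition quoted in the claim 3 discussion (maximum curvature, computed from the first and second derivatives) can be approximated with finite differences. This is a simplified stand-in for the kneedle algorithm, not a reproduction of it:

```python
import numpy as np

def knee_point(y):
    """Locate a knee as the index of maximum curvature, using the
    standard curvature form |f''| / (1 + f'^2)^(3/2) evaluated with
    finite differences (a simplification of the kneedle idea)."""
    y = np.asarray(y, dtype=float)
    d1 = np.gradient(y)
    d2 = np.gradient(d1)
    curvature = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
    return int(np.argmax(curvature))

# A curve that rises linearly and then flattens at x = 5:
y = np.minimum(np.arange(10.0), 5.0)
k = knee_point(y)  # -> 5
```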
For example, the one or more sensors 42 on the machine 10 may include both a camera 44 or some other 2D imaging device and a 3D sensing device 46, such as LIDAR or a stereo camera. If the 3D sensing device 46 is a stereo camera, it automatically provides the 3D point cloud and the 2D image as one message and can be used for both methods.) Consider Claim 6. The combination of Steibner and Ram teaches: 6. A method according to claim 1, wherein the template histogram is derived by performing a volumetric scan on one or more components known to have few or no defects. (Ram: [0040] In step 212, the processor 48 compares the histogram to a pile descriptor template that has been previously created and is accessible by the processor 48. Regarding the pile descriptor template, material piles generally have a similar shape regardless of the type of material. Therefore, the normalized height histogram of one material pile is generally similar to the normalized height histogram of other material piles. As such, a generalized normalized histogram of a material pile is created from previous data collected on other piles and made available to the processor 48. The generalized pile histogram is used as a representation of most piles (i.e., a pile descriptor template). In some embodiments, multiple generalized pile histograms can be created and each can be compared to the histogram of the normalized height of the points from the ground surface for each cluster.) Consider Claim 7. The combination of Steibner and Ram teaches: 7. A method according to claim 1, further comprising: performing thresholding to separate the component from a background within the volume of space, generating the component histogram based on the voxels from the component only, by removing the voxels from the background. (Steibner: [0079] At block 808, the processing system identifies a threshold 904.
The threshold 904 is used to filter out lower gradient magnitudes (i.e., gradient magnitudes below the threshold 904). According to an example, the threshold 904 is identified using a right knee approach (as described herein) with respect to the peak 902. According to another example, the threshold 904 is identified using a minimum between the peaks 902 and 903. Once the threshold 904 is identified, the gradient magnitudes for each tagged voxel are compared to the threshold. Any tagged voxels having a gradient magnitude less than the threshold are untagged because these voxels are not considered to include the surface (e.g., such voxels are noise, are near a surface voxel, etc.). In an example, any segment containing only untagged voxels after the gradient thresholding are dropped while any segments with at least one voxel that is still tagged are kept for evaluation at block 410. Ram: [0037] In step 206, the output of the GSE assigns a label to the points which are considered ground and a different label to the points which are considered non-ground (i.e., everything above or below the ground surface). FIG. 8 is an example of the output of the GSE 302 for the 3D point cloud of FIG. 7. In FIG. 8, the ground points 304 are shown in a lighter shade or color, while the non-ground points 306 are shown in a darker shade or color. [0038] In step 208, with each point classified as ground and non-ground, only the non-ground points are selected for further processing. The non-ground points are then grouped into clusters based on a point-to-point proximity metric (i.e., non-ground points in close proximity to each other are grouped as part of the same cluster). Clustering algorithms based on a proximity metric are known in the art. Any suitable clustering algorithm may be used to group the non-ground points into clusters. [0039] In step 210, for each cluster, the processor 48 calculates a histogram of the normalized height of the points from the ground surface. 
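Steibner's gradient-magnitude filtering at block 808 (untagging candidate voxels whose gradient magnitude falls below the threshold) might be sketched as follows; the toy volume and threshold value are assumptions:

```python
import numpy as np

def untag_low_gradient(tags, voxels, threshold):
    """Steibner [0079] sketch: compute a per-voxel gradient magnitude and
    untag candidate voxels (1 -> 0) whose magnitude is below the
    threshold, since such voxels are unlikely to contain the surface."""
    gx, gy, gz = np.gradient(voxels.astype(float))
    magnitude = np.sqrt(gx**2 + gy**2 + gz**2)
    out = tags.copy()
    out[(tags == 1) & (magnitude < threshold)] = 0
    return out

# Toy volume with a sharp step between planes 1 and 2 along x.
vol = np.zeros((4, 4, 4))
vol[2:] = 100.0
tags = np.ones(vol.shape, dtype=np.uint8)
kept = untag_low_gradient(tags, vol, threshold=10.0)
# Planes adjacent to the step keep tag 1; flat planes are untagged.
```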
For example, the lowest point in the cluster is assigned a value of zero and the highest point in the cluster is assigned a value of 1. All the points with heights between the highest and lowest point are assigned numeric values between zero and 1 in proportion to their height relative to the highest and lowest points. A histogram of the normalized height data is then created.) Consider Claim 8. The combination of Steibner and Ram teaches: 8. A method according to claim 1, comprising:splitting the volume of space into separate smaller secondary-volumes; and generating separate component histograms for each secondary-volume and comparing against a respective histogram template. (Steibner: [0079] At block 808, the processing system identifies a threshold 904. The threshold 904 is used to filter out lower gradient magnitudes (i.e., gradient magnitudes below the threshold 904). According to an example, the threshold 904 is identified using a right knee approach (as described herein) with respect to the peak 902. According to another example, the threshold 904 is identified using a minimum between the peaks 902 and 903. Once the threshold 904 is identified, the gradient magnitudes for each tagged voxel are compared to the threshold. Any tagged voxels having a gradient magnitude less than the threshold are untagged because these voxels are not considered to include the surface (e.g., such voxels are noise, are near a surface voxel, etc.). In an example, any segment containing only untagged voxels after the gradient thresholding are dropped while any segments with at least one voxel that is still tagged are kept for evaluation at block 410. Ram: [0037] In step 206, the output of the GSE assigns a label to the points which are considered ground and a different label to the points which are considered non-ground (i.e., everything above or below the ground surface). FIG. 8 is an example of the output of the GSE 302 for the 3D point cloud of FIG. 7. In FIG. 
8, the ground points 304 are shown in a lighter shade or color, while the non-ground points 306 are shown in a darker shade or color. [0038] In step 208, with each point classified as ground and non-ground, only the non-ground points are selected for further processing. The non-ground points are then grouped into clusters based on a point-to-point proximity metric (i.e., non-ground points in close proximity to each other are grouped as part of the same cluster). Clustering algorithms based on a proximity metric are known in the art. Any suitable clustering algorithm may be used to group the non-ground points into clusters. [0039] In step 210, for each cluster, the processor 48 calculates a histogram of the normalized height of the points from the ground surface. For example, the lowest point in the cluster is assigned a value of zero and the highest point in the cluster is assigned a value of 1. All the points with heights between the highest and lowest point are assigned numeric values between zero and 1 in proportion to their height relative to the highest and lowest points. A histogram of the normalized height data is then created.) Consider Claim 9. The combination of Steibner and Ram teaches: 9. A method according to claim 1, comprising performing x-ray computational tomography.(Steibner: [0007] In addition to one or more of the features described above or below, or as an alternative, further embodiments may include that obtaining the 3D voxel data includes: creating, by a computed tomography (CT) system, two-dimensional (2D) x-ray projection images; and constructing, by the CT system, the 3D voxel data based at least in part on the 2D x-ray projection images. [0045] Referring now to FIG. 1, an embodiment of a CT system 100 for inspecting objects (or “specimen”), such as the specimen S of FIG. 1. 
It should be appreciated that while embodiments herein may illustrate or describe a particular type of CT system, this is for example purposes and the claims should not be so limited. In other embodiments, other types of CT systems having another trajectory, detector shape, or beam geometry, such as a fan-type or a cone-type CT system, for example, may also be used. The CT system 100 includes an inspection processing device 102, an x-ray source 104, a placement unit 106, a detector 108, a control device 110, a display 112, a memory 113 for storing computer-readable instructions and/or data, and an input operation unit 114. In an embodiment, the x-ray source 104 emits x-rays in a cone shape 105 in the Z direction in the coordinate frame of reference 116 along an optical axis from an emission point in accordance with control by the control device 110. The emission point corresponds to the focal point of the x-ray source 104. That is, the optical axis connects the emission point, which is the focal point of the x-ray source 104, with the center of the imaging capture region of the detector 108. It should be appreciated that the x-ray source 104, instead of one emitting x-rays in a cone shape, can also be one emitting x-rays in a fan-shape for example.) Consider Claim 10. The combination of Steibner and Ram teaches: 10. A non-transitory computer readable medium comprising computer-readable instructions that, when read by a computer, causes the performance of a method in accordance with claim 1. (Steibner: [0051] The inspection processing device 102 receives an input from the input operation unit 114, which is configured by an input device (e.g. keyboard, various buttons, a mouse) and is used by the operator to control the operation of the CT system 100. The inspection processing device 102 causes the control device 110 to implement actions indicated by the input received by the input operation unit 114. 
The control device 110 is a microprocessor-based system that controls different modules of the CT system 100. The control device 110 includes an x-ray control unit 130, a movement control unit 132, an image generation unit 134, and an image reconstruction unit 136. The x-ray control unit 130 controls the operation of the x-ray source 104. The movement control unit 132 controls the movement of the manipulator unit 120. The image generation unit 134 generates x-ray projected image data for the specimen S based on an output signal from detector 108. The image reconstruction unit 136 performs image reconstruction processing that creates a reconstructed image based on the projector image data for specimen S from each different projection direction as is known in the art. Ram: [0024] The system 40 also includes memory 50. The memory 50 may be integral to the processor 48 or remote but accessible by the processor 48. The memory 50 may be a read only memory (ROM) for storing a program(s), a neural network, or other information, a random access memory (RAM) which serves as a working memory area for use in executing the program(s) stored in the memory 50, or a combination thereof. The processor 48 may be configured to refer to information stored in the memory 50 and the memory 50 may be configured to store various information determined by the processor 48.) Consider Claim 11. The combination of Steibner and Ram teaches: 11. A computer program that, when read by a computer causes the performance of a method in accordance with claim 1. (Ram: [0024] The system 40 also includes memory 50. The memory 50 may be integral to the processor 48 or remote but accessible by the processor 48. The memory 50 may be a read only memory (ROM) for storing a program(s), a neural network, or other information, a random access memory (RAM) which serves as a working memory area for use in executing the program(s) stored in the memory 50, or a combination thereof. 
The processor 48 may be configured to refer to information stored in the memory 50 and the memory 50 may be configured to store various information determined by the processor 48. Steibner: [0051] The inspection processing device 102 receives an input from the input operation unit 114, which is configured by an input device (e.g. keyboard, various buttons, a mouse) and is used by the operator to control the operation of the CT system 100. The inspection processing device 102 causes the control device 110 to implement actions indicated by the input received by the input operation unit 114. The control device 110 is a microprocessor-based system that controls different modules of the CT system 100. The control device 110 includes an x-ray control unit 130, a movement control unit 132, an image generation unit 134, and an image reconstruction unit 136. The x-ray control unit 130 controls the operation of the x-ray source 104. The movement control unit 132 controls the movement of the manipulator unit 120. The image generation unit 134 generates x-ray projected image data for the specimen S based on an output signal from detector 108. The image reconstruction unit 136 performs image reconstruction processing that creates a reconstructed image based on the projector image data for specimen S from each different projection direction as is known in the art. [0089] Terms such as processor, processing device, controller, computer, digital signal processor (DSP), field-programmable gate array (FPGA), etc. are understood in this document to mean a computing device that may be located within an instrument, distributed in multiple elements throughout an instrument, or placed external to an instrument.) Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Steibner et al. (US PGPub 2022/0358709 filed April 19, 2022 with provisional priority dated to May 5, 2021, hereby referred to as “Steibner”), in view of Ram et al.
(US PGPub 2021/0350114 A1, filed May 11, 2021), hereby referred to as “Ram”, further in view of Sreenivisam (US PGPub 2009/0214761 A1, hereby referred to as “Sreenivisam”). Consider Claim 5. The combination of Steibner and Ram teaches: 5. A method according to claim 1, wherein the defect is a gross defect which requires scrapping or remaking of the component. (Steibner: [0079] At block 808, the processing system identifies a threshold 904. The threshold 904 is used to filter out lower gradient magnitudes (i.e., gradient magnitudes below the threshold 904). According to an example, the threshold 904 is identified using a right knee approach (as described herein) with respect to the peak 902. According to another example, the threshold 904 is identified using a minimum between the peaks 902 and 903. Once the threshold 904 is identified, the gradient magnitudes for each tagged voxel are compared to the threshold. Any tagged voxels having a gradient magnitude less than the threshold are untagged because these voxels are not considered to include the surface (e.g., such voxels are noise, are near a surface voxel, etc.). In an example, any segment containing only untagged voxels after the gradient thresholding are dropped while any segments with at least one voxel that is still tagged are kept for evaluation at block 410. Ram: [0004] The controller standardizes and normalizes each signal from the plurality of signals in order to create values for each of the one or more parameters that all fall within a common range, wherein the common range is representative of a range from minimum to maximum values for each of the one or more parameters. [0037] In step 206, the output of the GSE assigns a label to the points which are considered ground and a different label to the points which are considered non-ground (i.e., everything above or below the ground surface). FIG. 8 is an example of the output of the GSE 302 for the 3D point cloud of FIG. 7. In FIG.
8, the ground points 304 are shown in a lighter shade or color, while the non-ground points 306 are shown in a darker shade or color. [0038] In step 208, with each point classified as ground and non-ground, only the non-ground points are selected for further processing. The non-ground points are then grouped into clusters based on a point-to-point proximity metric (i.e., non-ground points in close proximity to each other are grouped as part of the same cluster). Clustering algorithms based on a proximity metric are known in the art. Any suitable clustering algorithm may be used to group the non-ground points into clusters. [0039] In step 210, for each cluster, the processor 48 calculates a histogram of the normalized height of the points from the ground surface. For example, the lowest point in the cluster is assigned a value of zero and the highest point in the cluster is assigned a value of 1. All the points with heights between the highest and lowest point are assigned numeric values between zero and 1 in proportion to their height relative to the highest and lowest points. A histogram of the normalized height data is then created.) Even if the combination of Steibner and Ram does not teach: “is a gross defect which requires scrapping or remaking of the component” Sreenivisam teaches: A method and system for defect detection which is a gross defect which requires scrapping or remaking of the component in volumetric imaging (Sreenivisam: [0033] FIG. 7 illustrates a flow chart of a method 80 for in situ detecting of defects and/or particle 60. In a step 82, a first template 18 and substrate 12 may be positioned to define a desired volume therebetween capable of being filled by polymerizable material 34. In a step 84, polymerizable material 34 may be dispensed on substrate 12. 
In a step 86, source 38 may produce energy 40, e.g., ultraviolet radiation, causing polymerizable material 34 to solidify and/or cross-link conforming to a shape of surface 44 of substrate 12 and patterning surface 22, defining patterned layer 46 on substrate 12. In a step 88, exclusion zone 62 and/or transition zone 64 may be identified in patterned layer 46. For example, positioning of exclusion zone 62 and/or transition zone 64 may be identified using imaging system 70. In a step 90, region of interest 66 on the first template 18 may be determined using the position of exclusion zone 62 and/or transition zone 64. In a step 92, the first template 18 may be unloaded from system 10. In a step 94, a second template 18 may be loaded into system 10. In a step 96, the first template 18 may be inspected for defects and/or particle in region of interest 66. Identification of defect may lead to the determination that such particles and/or defects are untreatable and unacceptable for further use, untreatable but acceptable for further use, or treatable. Such a determination may provide for whether to refrain from removing particles 60 and/or defects (i.e. step 98a) or remove particles 60 and/or defects (i.e., step 98b). It should also be noted that some particles 60 and/or defects may be removed while some remain. Particles 60 and/or defects determined to be untreatable and unacceptable for further use may be discarded, as shown in step 102. For particles and/or defects determined to be untreatable but acceptable for further use, second template 18 may be unloaded from system 10 and first template 18 reloaded. Additionally, for treatable particles 60 and/or defects, after removal of the particle 60 and/or defect, second template 18 may be unloaded from system 10 and first template 18 reloaded.) 
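The voxel-filtering step quoted repeatedly from Steibner [0079] (identify a threshold at the minimum between two histogram peaks, then untag any voxel whose gradient magnitude falls below it) can be sketched in a few lines of Python. The binning, the simple two-tallest-bins peak search, and all names here are illustrative assumptions, not Steibner's actual implementation:

```python
# Sketch of the gradient-magnitude thresholding quoted from Steibner [0079]:
# find a threshold at the minimum between two histogram peaks, then untag
# voxels whose gradient magnitude falls below it. Names, binning, and the
# two-peak search are illustrative assumptions, not Steibner's implementation.

def histogram(values, n_bins=10):
    """Bin values into n_bins equal-width bins; return (counts, edges)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against all-equal values
    counts = [0] * n_bins
    for v in values:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    edges = [lo + k * width for k in range(n_bins + 1)]
    return counts, edges

def threshold_between_peaks(counts, edges):
    """Take the two tallest bins as the peaks (a simplification) and return
    the edge of the lowest-count bin between them."""
    p1, p2 = sorted(sorted(range(len(counts)),
                           key=lambda i: counts[i], reverse=True)[:2])
    valley = min(range(p1, p2 + 1), key=lambda i: counts[i])
    return edges[valley]

def filter_tagged(tagged, threshold):
    """Untag voxels whose gradient magnitude is below the threshold."""
    return {vox: mag for vox, mag in tagged.items() if mag >= threshold}
```

With a bimodal set of gradient magnitudes, the threshold lands in the valley between the low (noise) peak and the high (surface) peak, so only the high-gradient voxels stay tagged.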
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the combination of Steibner and Ram, which performs volumetric image analysis of piles for detection and quality determination, to further use the technology for the discard or removal of components with gross defects. The determination of obviousness is predicated upon the following findings: the combination of Steibner and Ram leverages volumetric defect detection and workpiece analysis, while Sreenivisam’s teachings are directed towards an overall manufacturing process that uses the quality determination to decide whether pieces should be removed or discarded; this would improve the overall industrial applicability for an end-user and would be a known improvement. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and/or programming techniques, without changing a “fundamental” operating principle of the combination of Steibner and Ram, while the teaching of Ram continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of leveraging volumetric image analysis for defect detection in piles. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAHMINA ANSARI whose telephone number is 571-270-3379. The examiner can normally be reached on IFP Flex - Monday through Friday, 9 to 5. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O’NEAL MISTRY, can be reached on 313-446-4912.
The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications. TC 2600’s customer service number is 571-272-2600. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is 571-272-2600.

/TAHMINA N ANSARI/
Primary Examiner, Art Unit 2674
October 18, 2025
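Claim 8's limitation (splitting the inspected volume into smaller secondary-volumes and comparing a histogram generated for each against a respective template) can be illustrated with a short sketch. The cubic split, the squared-difference score, and the tolerance are assumptions made for illustration; neither the claim as quoted nor the cited art prescribes them:

```python
# Illustrative sketch of claim 8: split a voxel volume into smaller
# secondary-volumes, build a histogram for each, and compare it against a
# respective template. The cubic split, the squared-difference score, and
# the tolerance are assumptions; the claim does not prescribe them.

def split_volume(volume, block):
    """Split a cubic nested-list volume into sub-volumes of edge `block`
    (dimensions assumed divisible by block); returns flat value lists."""
    n = len(volume)
    subs = []
    for x in range(0, n, block):
        for y in range(0, n, block):
            for z in range(0, n, block):
                subs.append([volume[i][j][k]
                             for i in range(x, x + block)
                             for j in range(y, y + block)
                             for k in range(z, z + block)])
    return subs

def hist(values, n_bins, lo, hi):
    """Equal-width histogram of values over [lo, hi]."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for v in values:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    return counts

def conforms(sub_values, template, n_bins=4, lo=0.0, hi=1.0, tol=2):
    """Treat a small summed squared difference from the template histogram
    as conforming (assumed comparison metric)."""
    h = hist(sub_values, n_bins, lo, hi)
    return sum((a - b) ** 2 for a, b in zip(h, template)) <= tol
```

A per-secondary-volume comparison like this localizes a non-conformance to the sub-volume whose histogram deviates from its template, rather than flagging the component as a whole.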

Prosecution Timeline

Nov 30, 2023
Application Filed
Oct 19, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586249
PROCESSING APPARATUS, PROCESSING METHOD, AND STORAGE MEDIUM FOR CALIBRATING AN IMAGE CAPTURE APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12586354
TRAINING METHOD, APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR A MACHINE LEARNING MODEL
2y 5m to grant Granted Mar 24, 2026
Patent 12573083
COMPUTER-READABLE RECORDING MEDIUM STORING OBJECT DETECTION PROGRAM, DEVICE, AND MACHINE LEARNING MODEL GENERATION METHOD OF TRAINING OBJECT DETECTION MODEL TO DETECT CATEGORY AND POSITION OF OBJECT
2y 5m to grant Granted Mar 10, 2026
Patent 12548297
IMAGE PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT BASED ON FEATURE AND DISTRIBUTION CORRELATION
2y 5m to grant Granted Feb 10, 2026
Patent 12524504
METHOD AND DATA PROCESSING SYSTEM FOR PROVIDING EXPLANATORY RADIOMICS-RELATED INFORMATION
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+17.9%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 868 resolved cases by this examiner. Grant probability derived from career allow rate.
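The 99% "with interview" figure above is consistent with a simple additive model in which the examiner's +17.9-point interview lift is added to the 86% base grant probability and capped. This is an assumed reconstruction; the dashboard does not disclose its actual formula:

```python
# Sketch of one way the dashboard's "with interview" figure could be
# derived: add the examiner's interview lift (percentage points) to the
# base grant probability and cap the result. The additive-plus-cap model
# is an assumption; the dashboard does not disclose its actual formula.

def with_interview(base_pct, lift_pts, cap_pct=99.0):
    return min(base_pct + lift_pts, cap_pct)
```

Under this assumption, min(86.0 + 17.9, 99.0) reproduces the displayed 99%.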
