Prosecution Insights
Last updated: April 19, 2026
Application No. 18/254,752

METHOD FOR AUTOMATICALLY SEARCHING FOR AT LEAST ONE TEXTILE PATTERN IN A COMPOSITE MATERIAL REINFORCEMENT

Non-Final OA — §101, §102
Filed: May 26, 2023
Examiner: OSINSKI, MICHAEL S
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: SAFRAN
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 75% (466 granted / 619 resolved), +13.3% vs TC avg — above average
Interview Lift: +23.2% for resolved cases with an interview vs. without — a strong lift
Avg Prosecution: 2y 7m typical timeline; 12 applications currently pending
Total Applications: 631 across all art units (career history)

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 22.3% (-17.7% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 619 resolved cases
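
The headline figures in these two panels can be cross-checked from the raw counts they cite. A minimal Python sketch, assuming the "vs TC avg" figures are simple percentage-point differences (all variable names are illustrative, not from the dashboard):

```python
# Reproduce the dashboard's headline examiner statistics from raw counts.
# Assumption: "vs TC avg" deltas are percentage-point differences.

granted, resolved, total_apps = 466, 619, 631

allow_rate = 100 * granted / resolved   # 75.3 -> displayed as "75%"
pending = total_apps - resolved         # 12 currently pending
tc_avg_allow = allow_rate - 13.3        # ~62% implied TC average allow rate
with_interview = allow_rate + 23.2      # ~98.5; the panel shows "98%" (rounded base)

# Statute-specific rates with their reported deltas vs. the TC average.
statutes = {"§101": (9.5, -30.5), "§103": (42.5, +2.5),
            "§102": (22.3, -17.7), "§112": (17.7, -22.3)}
for name, (rate, delta) in statutes.items():
    implied_tc_avg = rate - delta       # back out the TC baseline for each row
    print(f"{name}: examiner {rate}% vs implied TC avg {implied_tc_avg:.1f}%")

print(f"allow rate {allow_rate:.1f}%, pending {pending}, "
      f"with interview ~{with_interview:.1f}%")
```

Every statute row backs out the same ~40% baseline, which suggests the four deltas were computed against a single Tech Center-wide average estimate rather than per-statute baselines.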

Office Action

Rejections: §101, §102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. The following Office action is in response to communications filed on 5/26/2023. Claims 1-8 and 10-14 are currently pending within this application.

Information Disclosure Statement

2. The information disclosure statement(s) (IDS) submitted on 5/26/2023 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) is/are being considered by the examiner.

Foreign Priority

3. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 for claiming foreign priority to application FR 2012365, filed on 11/30/2020.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

4. Claims 1-8 and 10-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Independent claim(s) 1 is/are directed towards a method, which is a recognized statutory category of invention.

Step 2A, Prong One: The above-mentioned independent claim(s) recite(s) the abstract idea of mental processes, which are concepts performed in the human mind (including an observation, evaluation, judgment, and opinion). For example, "automatically searching for at least one given textile pattern in a reinforcement of composite material including a plurality of textile patterns, each textile pattern comprising a plurality of reinforcing yarns arranged according to a textile topology" and "searching…for the given textile pattern in the three-dimensional image acquired" encompass observing an image/data and performing an evaluation/making a determination regarding the contents of the image in comparison with other image/data, which may be practically performed in the human mind using observation, evaluation, judgment, and opinion, and which therefore falls within the "mental process" grouping of abstract ideas.

Step 2A, Prong Two: The abstract ideas, as claimed, are not integrated into a practical application and thus do not provide an inventive concept. The above-mentioned independent claim(s) recite the additional element of "an artificial neural network," which is recited at a high level of generality and amounts to no more than a component that applies/executes the abstract ideas without limiting how it functions; it can thus be performed by any generic computer capable of applying the abstract ideas and is at best the equivalent of merely adding the words "apply it" to the judicial exception. Additionally, "acquiring a three-dimensional image of the reinforcement of composite material" and "trained on a training database to detect the given textile pattern in a three-dimensional image of the reinforcement of composite material" are mere data gathering and input/output activities recited at a high level of generality, and thus are insignificant extra-solution activities.
Step 2B: As explained in Step 2A, Prong Two above, the independent claims recite the additional element of "an artificial neural network" at a high level of generality, such that it amounts to no more than a generic component implementing the abstract idea on a conventional computer, while "acquiring a three-dimensional image of the reinforcement of composite material" and "trained on a training database to detect the given textile pattern in a three-dimensional image of the reinforcement of composite material" are insignificant extra-solution data gathering and input/output activities that are well-understood, routine, and conventional. Even when considered in combination, the additional elements represent mere instructions to apply the judicial exceptions together with insignificant extra-solution activity, which cannot provide an inventive concept. The claim does not point to a specific improvement in computers in their communication role or provide a specific improvement in the way computers operate (see MPEP 2106.05(g), MPEP 2106.05(d), and the Berkheimer Memo). Therefore, based on the above analysis in conjunction with the 2019 Revised Patent Subject Matter Eligibility Guidance, it is determined that the independent claim(s) are directed towards ineligible subject matter: an abstract idea without significantly more.

Dependent claims 2-8 and 10-14 are also rejected as being directed towards the abstract idea(s) of mental processes, as well as insignificant pre-solution data gathering and post-solution data input/output activities, without adding significantly more than the judicial exceptions present within the independent claims.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

5. Claims 1-8 and 10-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Muhammad ("Deep Learning Based Semantic Segmentation of µCT Images for Creating Digital Material Twins of Fibrous Reinforcements") [hereafter Muhammad].
6. As to claim 1, Muhammad discloses a method for automatically searching for at least one given textile pattern (segmentation of patterns of materials within weave glass and carbon fabrics as shown in Figure 1) in a reinforcement of composite material (weave glass and carbon fabrics) including a plurality of textile patterns, each textile pattern comprising a plurality of reinforcing yarns (fiber tows) arranged according to a textile topology, the method comprising: acquiring a three-dimensional image of the reinforcement of composite material (µCT 3D images), and searching, using an artificial neural network (CNN model as shown in Figures 2 and 3) trained on a training database (collection of real and virtual raw and labeled specimen images used for training) to detect the given textile pattern in a three-dimensional image of the reinforcement of composite material, for the given textile pattern in the three-dimensional image acquired (generating a digital material twin image incorporating the corresponding textile pattern of the fiber tows in the µCT 3D image) (Abstract: 'A deep convolutional neural network (DCNN) was implemented and used to segment μCT images of two different types of reinforcement (2D glass and 3D carbon). The DCNN successfully segmented the images based on multi-scale features extracted using data-driven convolutional filters. The network was trained using scanned μCT images, along with images extracted from computer-generated virtual models of the reinforcements. One of the convolutional layers of the trained network was utilized to extract features to be used in creating a machine learning-based model. The extracted features and the raw gray-scale data were used to train a supervised k-nearest neighbor (k-NN) model for pixel-wise classification. The performance of both approaches was evaluated by comparing the results with manually segmented images. The trained deep neural network was able to provide faster and superior predictions of different features of the reinforcements as compared to the conventional machine learning approach', Section 1: 'A digital twin is a broad concept in Industry 4.0 and smart manufacturing that refers to a high-fidelity virtual representation of physical manufacturing elements, which is continuously updated as the physical counterpart changes in a synchronous manner [5], [6]. In the context of reinforcement characterization, digital material twins are meso-scale, voxel-based realistic computational models, in which each voxel is assigned a material label with associated properties hence, preserving the spatial variability of the geometric and material properties... In generating digital material twins from μCT images for composites manufacturing, the segmentation step is very important in correctly identifying different features and phases within the reinforcement, which have a direct impact on the accuracy of the numerical predictions of the reinforcement properties. For the case of reinforcements, such as prepregs and dry fabrics (2D and 3D fabrics), not only fiber, matrix and voids need to be distinguished from each other, but also fiber orientations (weft, warp, etc.) need to be identified… This paper outlines a new procedure for using DCNN for the segmentation of μCT images of fiber reinforcements to generate digital material twins for virtual characterization of advanced composite materials. A DCNN, DeepLab v3+ network with ResNet18 as the backbone, was implemented in MATLAB® Deep Learning Toolbox and was used in this study. For training and evaluation of the network, the images have been extracted from both real and computer-generated virtual specimens. The virtual specimens are the computer-generated geometrical representations, whereas the real specimens are raw 3D volumes reconstructed from µCT images.' Section 2.1: 'A 2D plain woven glass fabric and a 3D orthogonal woven carbon fabric were used in this work. The images from both real and virtual specimens of both fabrics are shown in Fig. 1. The virtual specimens are the ideal representation of real specimens, and can be generated using any commercial textile software such as, TexGen, WiseTex or WeaveGeo [19]. The intensity images for the virtual specimens were extracted from their respective 3D geometrical models. The original images extracted from the models were already segmented as the tows were predefined within the 3D model… The real µCT images were obtained from real fabric samples using the GE Phoenix Nanotom laboratory-sized µCT machine [21]. During the scanning process, the applied voltage on the X-ray tube was 120 kV with a current of 200 μA. A total number of 3600 projections, with a size of 2400 × 1600 pixels, were captured at a resolution of 25 μm. The tomographic projections were reconstructed into a 3D volume from which a region of interest, equivalent to 4 representative volume elements, was extracted. The regions of interest had a volume of [1024 × 1024 × 35] and [400 × 400 × 200] for the 2D and 3D fabrics respectively.' Section 2.2: 'To create digital material twins, the μCT images were segmented using deep learning based semantic segmentation technique. In this approach, a deep convolutional neural network or DCNN was trained with raw and labeled images and used for semantic image segmentation. A labeled image is an image where every pixel has been assigned a categorical label. This technique is capable of not only isolating the different phases present within the raw volume, but also identifying fibers and fiber tows orientated in different directions. Fig. 2(a) shows the overview and highlights the utility of this methodology for creating digital material twins. As highlighted in Fig. 2(a), the images extracted from computer-generated virtual specimens were also used along with µCT images of real specimens… a DCNN consists of several convolutional layers, non-linear activations, batch normalization, and pooling layers with the convolutional layer being the core building block. The convolutional layer performs the convolutional operation on input images to extract textural and morphological information in the form of a feature map. The non-linear activations such as, rectified linear unit (ReLU) function incorporates non-linearity by mapping values below a predefined threshold (generally zero) to zero… By using multiple filters in the same layer, a number of features can be simultaneously extracted. Fig. 2(c) shows an example of feature maps extracted from the input image (highlighted in the red box) by a single layer. These feature maps are successively processed by each layer in the network producing the segmentation maps in the final layer. Although, the size and total number of filters in a convolutional layer is predefined, the elements of the filters, known as weights, are determined or "learned" during the training process. The training process of a DCNN is mathematically an optimization problem in which its weights are determined to minimize a loss function by using specially developed algorithms.' Section 2.3: 'Deep convolutional neural network, such as DeepLab v3+ network in MATLAB® Deep Learning Toolbox can be used and it can be any DCNN for image classification task in computer vision [26]. In this study, a deep neural network, having an encoder-decoder architecture, was built on the top of a backbone, and we choose ResNet18 as the backbone, which is a DCNN of 18 layers with identity shortcut connection to enable residual learning [27], as shown in Fig. 3. The feature map output from this backbone is then diverted and passed to two modules; one is taken as the input of the Atrous Spatial Pyramid Pooling (ASPP) module to probe and fuse multi-scale contextual information under different dilation rates; the other is passed directly to the decoder and concatenated with the up-sampled features obtained from the ASPP module… The network was trained by using data from both the virtual and real specimens. Labeled image datasets were generated manually using MATLAB® Image Labeler Tool. These manually-segmented images serve as the "ground truth" to evaluate the images segmented by the trained network. Four separate networks were trained by using each data set. The sizes of a single image were 1024 × 1024 and 400 × 400 for the 2D and 3D fabrics respectively… Training was performed on a single NVIDIA Tesla K20c GPU, which has a computing capability of 3.5 GHz with 5 GB memory.' Section 3.1: 'The trained models were used to predict the phases of each pixel of the test images, which belonged to the same data set used for training the networks, and the predictions were compared against manually segmented images (ground truth). Overall, the feature predictions were found to be very good, only pixels along the edges were misclassified, as can be seen in Fig. 4(a)–(b). The identification of tow contours and boundaries is very important but challenging in fiber reinforcements. However, using the current procedure, the tow boundaries can be extracted with significant accuracy. Examples of the tow boundaries extracted from the segmented images are shown below in Fig. 4(c)–(d).' Section 4: 'In this work, a novel deep convolutional neural network and a conventional machine learning algorithm were used for the semantic segmentation of μCT images of fiber reinforcements. The performances of both the approaches were evaluated by comparing their ability of feature detection and comparing with the ground-truth images extracted from both real and virtual specimens. The predicted results by the trained deep learning network showed a higher level of accuracy as compared to the machine learning algorithm.').

7. As to claim 2, Muhammad discloses the artificial neural network is a multilayer perceptron or a convolutional artificial neural network (Abstract: 'A deep convolutional neural network (DCNN) was implemented and used to segment μCT images of two different types of reinforcement (2D glass and 3D carbon). The DCNN successfully segmented the images based on multi-scale features extracted using data-driven convolutional filters. The network was trained using scanned μCT images, along with images extracted from computer-generated virtual models of the reinforcements. One of the convolutional layers of the trained network was utilized to extract features to be used in creating a machine learning-based model. The extracted features and the raw gray-scale data were used to train a supervised k-nearest neighbor (k-NN) model for pixel-wise classification. The performance of both approaches was evaluated by comparing the results with manually segmented images. The trained deep neural network was able to provide faster and superior predictions of different features of the reinforcements as compared to the conventional machine learning approach.' Section 2.2: 'To create digital material twins, the μCT images were segmented using deep learning based semantic segmentation technique. In this approach, a deep convolutional neural network or DCNN was trained with raw and labeled images and used for semantic image segmentation. A labeled image is an image where every pixel has been assigned a categorical label. This technique is capable of not only isolating the different phases present within the raw volume, but also identifying fibers and fiber tows orientated in different directions. Fig. 2(a) shows the overview and highlights the utility of this methodology for creating digital material twins. As highlighted in Fig. 2(a), the images extracted from computer-generated virtual specimens were also used along with µCT images of real specimens… Usually, a DCNN consists of several convolutional layers, non-linear activations, batch normalization, and pooling layers with the convolutional layer being the core building block. The convolutional layer performs the convolutional operation on input images to extract textural and morphological information in the form of a feature map. The non-linear activations such as, rectified linear unit (ReLU) function incorporates non-linearity by mapping values below a predefined threshold (generally zero) to zero. The pooling layer simplifies the input by replacing them with a summary statistic of its neighborhood.' Section 2.3: 'Deep convolutional neural network, such as DeepLab v3+ network in MATLAB® Deep Learning Toolbox can be used and it can be any DCNN for image classification task in computer vision [26]. In this study, a deep neural network, having an encoder-decoder architecture, was built on the top of a backbone, and we choose ResNet18 as the backbone, which is a DCNN of 18 layers with identity shortcut connection to enable residual learning [27], as shown in Fig. 3.').

8. As to claim 3, Muhammad discloses the artificial neural network is trained in a supervised manner (using ground truth and labeled training data) and the training database includes, for each training composite material of a plurality of training composite materials, a three-dimensional image of the reinforcement of the training composite material (as shown in Figure 2a), and for each textile pattern to be detected, the textile topology of the textile pattern and the location of the textile pattern in the three-dimensional image (Abstract: 'The network was trained using scanned μCT images, along with images extracted from computer-generated virtual models of the reinforcements. One of the convolutional layers of the trained network was utilized to extract features to be used in creating a machine learning-based model.' Section 2.2: 'To create digital material twins, the μCT images were segmented using deep learning based semantic segmentation technique. In this approach, a deep convolutional neural network or DCNN was trained with raw and labeled images and used for semantic image segmentation. A labeled image is an image where every pixel has been assigned a categorical label. This technique is capable of not only isolating the different phases present within the raw volume, but also identifying fibers and fiber tows orientated in different directions. Fig. 2(a) shows the overview and highlights the utility of this methodology for creating digital material twins. As highlighted in Fig. 2(a), the images extracted from computer-generated virtual specimens were also used along with µCT images of real specimens.' Section 2.3: 'The network was trained by using data from both the virtual and real specimens. Labeled image datasets were generated manually using MATLAB® Image Labeler Tool. These manually-segmented images serve as the "ground truth" to evaluate the images segmented by the trained network. Four separate networks were trained by using each data set. The sizes of a single image were 1024 × 1024 and 400 × 400 for the 2D and 3D fabrics respectively. The size of each slice for the 2D fabric was larger than the 3D fabric, since the samples used had wider and thicker tows. The total number of images were 35 and 200 for the 2D and 3D fabrics respectively. The images were randomly divided into the training, validation and test sets. The training set was comprised of 60% of the raw and labeled images, and the remaining images were equally divided into validation and test sets. No data augmentation was performed on the training images. The stochastics gradient descent with momentum (SGDM) algorithm was used for optimization with a mini batch size of 5 and initial learning rate of 0.001. The maximum number of epochs was set to be 25 for all cases. Training was performed on a single NVIDIA Tesla K20c GPU, which has a computing capability of 3.5 GHz with 5 GB memory. The time required for training the networks was approximately 30 min and 2 h for the virtual and real specimens respectively. More time was required for the real specimens, due to the presence of noise and artifacts in the real images. During training, the network converged exponentially with very similar training and validation curves due to highly correlated training and validation set.').

9. As to claim 4, Muhammad discloses wherein the textile topology of each textile pattern to be detected is manually obtained, using mathematical morphology algorithms, an artificial neural network or dedicated software (Section 1: 'The novelty of this work lies in the fact that a mixture of a real and virtual image dataset was used, and the method is also applicable to μCT images with low resolution and poor contrast. For purposes of comparison, an alternative route using a conventional supervised machine learning approach has also been utilized by leveraging the feature extraction capabilities of the trained deep neural network. The performance of both the approaches has been evaluated by comparing the images segmented by these trained machine and deep learning models with the manually segmented images, referred here as the "ground-truth".' Section 2.1: 'The use of virtual specimen has two noticeable advantages; (a) manual labeling of the images can be reduced significantly, and (b) it can be used to generate a diversified training data set.' Section 2.3: 'The network was trained by using data from both the virtual and real specimens. Labeled image datasets were generated manually using MATLAB® Image Labeler Tool. These manually-segmented images serve as the "ground truth" to evaluate the images segmented by the trained network. Four separate networks were trained by using each data set. The sizes of a single image were 1024 × 1024 and 400 × 400 for the 2D and 3D fabrics respectively. The size of each slice for the 2D fabric was larger than the 3D fabric, since the samples used had wider and thicker tows. The total number of images were 35 and 200 for the 2D and 3D fabrics respectively. The images were randomly divided into the training, validation and test sets. The training set was comprised of 60% of the raw and labeled images, and the remaining images were equally divided into validation and test sets.' Section 3.1: 'The trained models were used to predict the phases of each pixel of the test images, which belonged to the same data set used for training the networks, and the predictions were compared against manually segmented images (ground truth). Overall, the feature predictions were found to be very good, only pixels along the edges were misclassified, as can be seen in Fig. 4(a)–(b).').

10. As to claim 5, Muhammad discloses the three-dimensional image is acquired by X-ray tomography or by transmission electron microscope (Section 2.1: 'The real µCT images were obtained from real fabric samples using the GE Phoenix Nanotom laboratory-sized µCT machine [21]. During the scanning process, the applied voltage on the X-ray tube was 120 kV with a current of 200 μA. A total number of 3600 projections, with a size of 2400 × 1600 pixels, were captured at a resolution of 25 μm. The tomographic projections were reconstructed into a 3D volume from which a region of interest, equivalent to 4 representative volume elements, was extracted. The regions of interest had a volume of [1024 × 1024 × 35] and [400 × 400 × 200] for the 2D and 3D fabrics respectively.').

11. As to claim 6, Muhammad discloses a method for automatically reconstructing (segmenting) a textile geometry of a reinforcement of composite material including a plurality of textile patterns, the method comprising performing the method according to claim 1 (as described above with respect to the comments/citations of claim 1) for each textile pattern of the reinforcement of composite material, to obtain a number of detections of each textile pattern and a location of each detection in the reinforcement of composite material (as shown in Figures 2-3 with the output of the CNN being a segmented and labeled digital material twin) (Abstract: 'In this study, a novel approach of processing μCT images to create digital material twins is presented. A deep convolutional neural network (DCNN) was implemented and used to segment μCT images of two different types of reinforcement (2D glass and 3D carbon). The DCNN successfully segmented the images based on multi-scale features extracted using data-driven convolutional filters. The network was trained using scanned μCT images, along with images extracted from computer-generated virtual models of the reinforcements. One of the convolutional layers of the trained network was utilized to extract features to be used in creating a machine learning-based model.' Section 1: 'In the context of reinforcement characterization, digital material twins are meso-scale, voxel-based realistic computational models, in which each voxel is assigned a material label with associated properties hence, preserving the spatial variability of the geometric and material properties. These models are better suited for Industry 4.0, as they are data-driven, realistic, representatives of geometric and material variability, and allow to perform numerical simulations to determine material properties such as, predictions of resin flow or composite mechanical properties [7]. The μCT images provide valuable information about the state of the material in 3D, such as several morphological features, the evolution of fiber deformation/orientation during manufacturing, nesting of fiber tows and meso-scopic deformations during compaction… This paper outlines a new procedure for using DCNN for the segmentation of μCT images of fiber reinforcements to generate digital material twins for virtual characterization of advanced composite materials. A DCNN, DeepLab v3+ network with ResNet18 as the backbone, was implemented in MATLAB® Deep Learning Toolbox and was used in this study. For training and evaluation of the network, the images have been extracted from both real and computer-generated virtual specimens. The virtual specimens are the computer-generated geometrical representations, whereas the real specimens are raw 3D volumes reconstructed from µCT images. The novelty of this work lies in the fact that a mixture of a real and virtual image dataset was used, and the method is also applicable to μCT images with low resolution and poor contrast. For purposes of comparison, an alternative route using a conventional supervised machine learning approach has also been utilized by leveraging the feature extraction capabilities of the trained deep neural network. The performance of both the approaches has been evaluated by comparing the images segmented by these trained machine and deep learning models with the manually segmented images, referred here as the "ground-truth".' Section 2.2: 'To create digital material twins, the μCT images were segmented using deep learning based semantic segmentation technique. In this approach, a deep convolutional neural network or DCNN was trained with raw and labeled images and used for semantic image segmentation. A labeled image is an image where every pixel has been assigned a categorical label. This technique is capable of not only isolating the different phases present within the raw volume, but also identifying fibers and fiber tows orientated in different directions. Fig. 2(a) shows the overview and highlights the utility of this methodology for creating digital material twins. As highlighted in Fig. 2(a), the images extracted from computer-generated virtual specimens were also used along with µCT images of real specimens.' Section 2.3: 'After the soft-max activation on the concatenated feature maps and one more up-sampling layer to adjust the image size, the final segmentation results are achieved in terms of the probabilities of each pixel belonging to either matrix, warp or weft tows.').

12. As to claim 7, Muhammad discloses a method for automatically inspecting a textile geometry of a reinforcement of composite material, the method comprising performing the reconstruction method according to claim 6 (as described above with respect to the comments/citations of claim 1) to obtain a reconstruction (segmentation within a digital material twin) of the textile geometry of the reinforcement of composite material, and comparing the reconstruction of the textile geometry of the reinforcement of composite material with a theoretical textile geometry (ground truth) (Section 1: 'This paper outlines a new procedure for using DCNN for the segmentation of μCT images of fiber reinforcements to generate digital material twins for virtual characterization of advanced composite materials. A DCNN, DeepLab v3+ network with ResNet18 as the backbone, was implemented in MATLAB® Deep Learning Toolbox and was used in this study. For training and evaluation of the network, the images have been extracted from both real and computer-generated virtual specimens. The virtual specimens are the computer-generated geometrical representations, whereas the real specimens are raw 3D volumes reconstructed from µCT images. The novelty of this work lies in the fact that a mixture of a real and virtual image dataset was used, and the method is also applicable to μCT images with low resolution and poor contrast. For purposes of comparison, an alternative route using a conventional supervised machine learning approach has also been utilized by leveraging the feature extraction capabilities of the trained deep neural network. The performance of both the approaches has been evaluated by comparing the images segmented by these trained machine and deep learning models with the manually segmented images, referred here as the "ground-truth".' Section 2.3: 'The network was trained by using data from both the virtual and real specimens. Labeled image datasets were generated manually using MATLAB® Image Labeler Tool. These manually-segmented images serve as the "ground truth" to evaluate the images segmented by the trained network. Four separate networks were trained by using each data set. The sizes of a single image were 1024 × 1024 and 400 × 400 for the 2D and 3D fabrics respectively. The size of each slice for the 2D fabric was larger than the 3D fabric, since the samples used had wider and thicker tows. The total number of images were 35 and 200 for the 2D and 3D fabrics respectively. The images were randomly divided into the training, validation and test sets. The training set was comprised of 60% of the raw and labeled images, and the remaining images were equally divided into validation and test sets. No data augmentation was performed on the training images. The stochastics gradient descent with momentum (SGDM) algorithm was used for optimization with a mini batch size of 5 and initial learning rate of 0.001. The maximum number of epochs was set to be 25 for all cases. Training was performed on a single NVIDIA Tesla K20c GPU, which has a computing capability of 3.5 GHz with 5 GB memory. The time required for training the networks was approximately 30 min and 2 h for the virtual and real specimens respectively. More time was required for the real specimens, due to the presence of noise and artifacts in the real images. During training, the network converged exponentially with very similar training and validation curves due to highly correlated training and validation set.').

13. As to claim 8, Muhammad discloses a calculator configured to implement the method according to claim 1 (Section 2.3: 'Training was performed on a single NVIDIA Tesla K20c GPU, which has a computing capability of 3.5 GHz with 5 GB memory.').

14. As to claim 10, Muhammad discloses a non-transitory computer-readable recording medium comprising instructions which, when executed by a computer, cause the same to implement the method according to claim 1 (Section 2.3: 'Training was performed on a single NVIDIA Tesla K20c GPU, which has a computing capability of 3.5 GHz with 5 GB memory.').

15. As to claim 11, Muhammad discloses a calculator configured to implement the reconstruction method according to claim 6 (Section 2.3: 'Training was performed on a single NVIDIA Tesla K20c GPU, which has a computing capability of 3.5 GHz with 5 GB memory.').

16. As to claim 12, Muhammad discloses a calculator configured to implement the inspection method according to claim 7 (Section 2.3: 'Training was performed on a single NVIDIA Tesla K20c GPU, which has a computing capability of 3.5 GHz with 5 GB memory.').

17. As to claim 13, Muhammad discloses a non-transitory computer-readable recording medium comprising instructions which, when executed by a computer, cause the same to implement the reconstruction method according to claim 6 (Section 2.3: 'Training was performed on a single NVIDIA Tesla K20c GPU, which has a computing capability of 3.5 GHz with 5 GB memory.').

18. As to claim 14, Muhammad discloses a non-transitory computer-readable recording medium comprising instructions which, when executed by a computer, cause the same to implement the inspection method according to claim 7 (Section 2.3: 'Training was performed on a single NVIDIA Tesla K20c GPU, which has a computing capability of 3.5 GHz with 5 GB memory.').

Conclusion

19. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL S OSINSKI, whose telephone number is (571) 270-3949. The examiner can normally be reached Monday - Friday, 10:00am - 6:00pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MO
/MICHAEL S OSINSKI/
Primary Examiner, Art Unit 2674
2/10/2026
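
For readers mapping the examiner's citations to practice, the training recipe quoted from Muhammad (DeepLab v3+ over a ResNet18 backbone in MATLAB, SGDM, mini-batch size 5, initial learning rate 0.001, up to 25 epochs, and a 60/20/20 data split) corresponds to a conventional semantic-segmentation training loop. Below is a minimal PyTorch sketch under those reported hyperparameters; it is not the paper's code. The ResNet50 backbone (torchvision ships no ResNet18 DeepLab variant), the momentum coefficient, and the random tensors standing in for labeled µCT slices are all assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split
from torchvision.models.segmentation import deeplabv3_resnet50

# Hyperparameters as quoted from Muhammad Section 2.3: SGDM, mini-batch 5,
# initial lr 0.001, max 25 epochs. ResNet50 is a substitution (assumption):
# torchvision has no ResNet18-backed DeepLab like the paper's MATLAB build.
NUM_CLASSES = 3  # per-pixel classes: matrix, warp tows, weft tows (Sec. 2.3)
model = deeplabv3_resnet50(num_classes=NUM_CLASSES)

# Placeholder data standing in for the 35 grayscale slices of the 2D fabric,
# replicated to 3 channels and downsized for the sketch (assumption).
images = torch.rand(35, 3, 256, 256)
labels = torch.randint(0, NUM_CLASSES, (35, 256, 256))
dataset = TensorDataset(images, labels)

# 60% training, remainder split equally into validation and test,
# mirroring the partitioning reported in the paper.
n_train = int(0.6 * len(dataset))
n_val = (len(dataset) - n_train) // 2
n_test = len(dataset) - n_train - n_val
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n_test])

loader = DataLoader(train_set, batch_size=5, shuffle=True)
# The paper specifies SGDM but not the coefficient; 0.9 is an assumption.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()  # pixel-wise classification loss

model.train()
for epoch in range(25):
    for x, y in loader:
        optimizer.zero_grad()
        out = model(x)["out"]      # (N, NUM_CLASSES, H, W) logits
        loss = criterion(out, y)
        loss.backward()
        optimizer.step()
```

A real pipeline would replace the random tensors with manually labeled µCT slices and evaluate the held-out test set against the ground-truth segmentations, as the paper does in Section 3.1.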

Prosecution Timeline

May 26, 2023 — Application Filed
Feb 10, 2026 — Non-Final Rejection, §101 and §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596951
MULTISCALE CONTIGUOUS BLOCK PIXEL ENTANGLER FOR IMAGE RECOGNITION ON HYBRID QUANTUM-CLASSICAL COMPUTING SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586351
STORAGE MEDIUM, SPECIFYING METHOD, AND INFORMATION PROCESSING DEVICE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579657
IMAGING DEVICE AND METHOD
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573028
NEURAL NETWORK FOR IMAGE REGISTRATION AND IMAGE SEGMENTATION TRAINED USING A REGISTRATION SIMULATOR
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12554796
OPTIMIZING PARAMETER ESTIMATION FOR TRAINING NEURAL NETWORKS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 98% (+23.2%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
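
The panel's derivation is simple to restate: the base figure is the career allow rate, and the interview figure adds the lift in percentage points. A one-function sketch; the clamp at 100% is an assumption about how the dashboard caps the displayed value:

```python
def project_grant_probability(base: float, interview_lift: float,
                              with_interview: bool) -> float:
    """Career allow rate in percent, optionally bumped by the examiner's
    interview lift in percentage points, clamped to 100%."""
    p = base + (interview_lift if with_interview else 0.0)
    return min(p, 100.0)

print(project_grant_probability(75.0, 23.2, False))  # 75.0 -> shown as "75%"
print(project_grant_probability(75.0, 23.2, True))   # 98.2 -> shown as "98%"
```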

Free tier: 3 strategy analyses per month