DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-12 are pending.
Priority
The present application is a continuation of, and claims benefit under 35 U.S.C. 120 to, International Application No. PCT/EP2022/057656, filed March 23, 2022, which claims benefit under 35 U.S.C. 119 of German Application No. 10 2021 110 054.2, filed April 21, 2021.
Information Disclosure Statement
The information disclosure statements (IDS) filed 10/16/2023, 12/12/2024, and 06/04/2025 have been considered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 7, and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Neumann et al. (“3D analysis of high-aspect ratio features in 3D-NAND,” hereinafter “Neumann”) in view of BRADA et al. (US 20200294239 A1, hereinafter “BRADA”), further in view of PENG (US 20210182622 A1, hereinafter “PENG”).
Rejection overview: Neumann teaches a method for detecting defects in high aspect ratio structures. Part of their method is to take a cross section of the structures and perform segmentation to detect the rings of the structure. Neumann is silent, however, on the details of the segmentation process. BRADA provides a suitable segmentation process that uses two models, where the first performs a coarse segmentation and the second performs a more refined segmentation using the original image and the coarse segmentation map. PENG provides methods for the coarse and refined segmentation being binary segmentation and multi-level segmentation, respectively. Thus, the combination of all three references would use BRADA’s two machine models (coarse and refined) for segmentation of Neumann’s high aspect ratio cross-section rings, where BRADA’s first model (coarse model) is trained using PENG’s binary annotations (coarse annotation method) and produces a binary segmented image (coarse segmentation method). The binary segmented image would then be annotated in a multi-level fashion because BRADA’s second model is trained to generate a more refined segmentation (which in this case is multi-level, as taught by PENG), so a multi-level annotated image of the rings would be needed to train it. This allows BRADA’s second model to generate the multi-level segmented images of Neumann’s rings as claimed.
Regarding claim 1, Neumann teaches a method, comprising:
(Page M-4, Fig. 3: Rings in the cross section of pillars of high aspect ratio structures have been identified);
(Page M-4, Fig. 3: Rings can be seen);
(Page M-4, Fig. 3: “Example of one layer within the reconstructed 3D volume. Channels have been automatically detected and segmented, and their contours have been extracted”. The cross-section images of the pillars in the high aspect ratio structures have been segmented; the “or” limitation requires only one of the two listed options be met for a finding of obviousness, though it is presumed any further cross section obtained would undergo the same process)(Emphasis added);
(Page M-4, Fig. 3: Rings can be seen); and
(Page M-4, Fig. 3: Rings can be seen).
Neumann does not expressly disclose a segmentation process that uses two models to perform different levels of segmentation.
However, BRADA teaches a segmentation process that uses two models to perform different levels of segmentation ([0022]: “According to various embodiments, a segmentation map (also referred to as a probability map, feature map, etc.) is created in which each pixel may be marked with a vector identifying a most-likely category of the pixel through a two-stage segmentation process. During a first stage, the system herein may input the initial image (e.g., a three-channel RGB image, a single-channel gray scale image, depending on the nature of the problem at hand, etc.) into a U-net architecture that creates a rough segmentation map for the pixels of the image. The U-net may be trained using real image data of the content being analyzed, or on a mixture of real data and synthetic data. Next, a second stage of the segmentation process may refine the initially generated segmentation map using a refinement network (also referred to herein as a cascaded network) that is trained based on synthetic (e.g., artificially created) images. An enhanced segmentation map may be output from the refinement network and used to create a segmented image for display”. The first stage uses the first machine model to generate a rough segmentation map and the second stage uses another model to generate a refined segmentation map).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify Neumann’s segmentation process to include BRADA’s 2 model segmentation method because such a modification is taught, suggested, or motivated by the art. More specifically, the motivation to modify Neumann to include BRADA is expressly provided by BRADA, stating that use of two models in this way generates a segmentation with improved accuracy ([0053]: “executing of the refinement predictive model may significantly improve the accuracy of the initial segmentation map”). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Neumann’s segmentation process to include BRADA’s 2 model segmentation method with the motivation of improving segmentation accuracy. The person of ordinary skill in the art would have recognized the benefit of improved segmentation accuracy.
The combination of Neumann and BRADA does not expressly disclose the first model
being trained with binary segmentation annotation data and performing binary segmentation and the second model being trained with multi-level annotation data and performing multi-level segmentation.
However, PENG teaches a coarse annotation and segmentation being binary and a more refined annotation and segmentation being multi-level ([0101]: “For example, if the model is to be trained for segmenting bone from non-bone material, the annotation is required to differentiate and identify the pixels/voxels of bone from those of non-bone and thus comprise segmentation data; if the model goes further such that it is necessary to identify each piece of bone, the annotation should also differentiate and label the pixels/voxels of each bone and thus additionally include identification data”. PENG teaches the concept of coarse and more refined segmentation, where the simple segmentation is binary (bone vs non-bone) and the refined is multi-level (identifying each bone). BRADA additionally teaches the need to use two models to perform a coarse and more refined version of segmentation, so using PENG’s definition of coarse and refined segmentation, it would result in two machine models where the first model would be trained using binary data to perform binary segmentation and the second model would be trained using multi-level annotation data to perform multi-level segmentation. If these two models are used for Neumann’s rings of cross sections of high aspect ratio structures, it would result in the claimed language. Hence, the combination of Neumann, BRADA, and PENG teaches the claimed limitations not taught by the combination of Neumann and BRADA as described previously).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of Neumann and BRADA’s two-model segmentation system to include PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, the combination of Neumann and BRADA’s two-model segmentation system, as modified by PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation, yields the predictable result of two models for segmentation, where the first model performs binary segmentation and the second performs multi-level segmentation for overall improved segmentation. The combination of Neumann and BRADA already teaches a two-model segmentation process using a coarse model and a more refined model; PENG simply defines methods that can be used for the coarse segmentation (binary segmentation) and the more refined segmentation (multi-level segmentation). Thus, a person of ordinary skill would have appreciated including in the combination of Neumann and BRADA’s two-model segmentation system the ability to perform PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 3, the combination of Neumann, BRADA, and PENG teaches the method of claim 1, in addition, BRADA further teaches wherein the second machine learning logic comprises a neural network ([0005]: “The refinement network may include a neural network that is trained”).
The rationale for this combination is similar to the rationale for the claim 1 combination with respect to BRADA, as the method of combination and the benefits are similar (for BRADA’s two-model system implemented in claim 1, the second model would be implemented as a neural network, since BRADA describes the second machine model as a neural network).
Regarding claim 7, the combination of Neumann, BRADA, and PENG teaches the method of claim 1, in addition, Neumann further teaches
While PENG further teaches wherein binary annotating is performed ([0101]: “For example, if the model is to be trained for segmenting bone from non-bone material, the annotation is required to differentiate and identify the pixels/voxels of bone from those of non-bone and thus comprise segmentation data; if the model goes further such that it is necessary to identify each piece of bone, the annotation should also differentiate and label the pixels/voxels of each bone and thus additionally include identification data”. If the binary segmentation operation was used for Neumann’s rings, it would result in the binary annotation being used for a portion of each of the rings. Hence the combination of Neumann, BRADA, and PENG teaches the claimed language).
The rationale for this combination is similar to the rationale for the claim 1 combination with respect to PENG, as the method of combination and the benefits are similar (for PENG’s binary annotation implemented in claim 1, the binary annotation is performed on Neumann’s rings, so there would be binary processing for a portion of each of the rings).
Regarding claim 11, the combination of Neumann, BRADA, and PENG teaches the method of claim 1, in addition, Neumann further teaches further comprising determining parameters of the rings based on the segmented rings (Page M-4, Section 4: “For each channel in each layer, we determine radius and ellipticity from the extracted contours”. The radius and ellipticity act as the parameters. These extracted contours are obtained from the segmented rings “Example of one layer within the reconstructed 3D volume. Channels have been automatically detected and segmented, and their contours have been extracted” (Fig. 3)).
Regarding claim 12, the combination of Neumann, BRADA, and PENG teaches the method of claim 11, in addition, Neumann further teaches further comprising identifying contours of the rings based on the segmented rings, wherein determining the parameters is based on the identified contours (Page M-4, Section 4: “For each channel in each layer, we determine radius and ellipticity from the extracted contours”. The radius and ellipticity act as the parameters. These extracted contours are obtained from the segmented rings “Example of one layer within the reconstructed 3D volume. Channels have been automatically detected and segmented, and their contours have been extracted” (Fig. 3)).
Claims 2 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Neumann et al. (“3D analysis of high-aspect ratio features in 3D-NAND,” hereinafter “Neumann”) in view of BRADA et al. (US 20200294239 A1, hereinafter “BRADA”), further in view of PENG (US 20210182622 A1, hereinafter “PENG”), and further in view of CONG et al. (CN 202010079085, hereinafter “CONG”).
Regarding claim 2, the combination of Neumann, BRADA, and PENG teaches the method of claim 1, in addition BRADA further teaches wherein the first machine learning logic ([0022]: “During a first stage, the system herein may input the initial image (e.g., a three-channel RGB image, a single-channel gray scale image, depending on the nature of the problem at hand, etc.) into a U-net architecture that creates a rough segmentation map for the pixels of the image”).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify Neumann’s segmentation process to include BRADA’s 2 model segmentation method because such a modification is taught, suggested, or motivated by the art. More specifically, the motivation to modify Neumann to include BRADA is expressly provided by BRADA, stating that use of two models in this way generates a segmentation with improved accuracy ([0053]: “executing of the refinement predictive model may significantly improve the accuracy of the initial segmentation map”). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Neumann’s segmentation process to include BRADA’s 2 model segmentation method with the motivation of improving segmentation accuracy. The person of ordinary skill in the art would have recognized the benefit of improved segmentation accuracy.
The combination of Neumann, BRADA, and PENG does not expressly disclose the first machine learning logic comprises a random forest model.
However, CONG teaches a random forest model for binary segmentation (Page 7, step 3.1: “classifying a new pair of ultrasonic images by using the random forest model trained in the step 2 to realize binary segmentation”).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to substitute the combination of Neumann, BRADA, and PENG’s first model that performs binary segmentation with CONG’s random forest model that performs binary segmentation because such a modification is the result of the simple substitution of one known element for another to produce a predictable result. More specifically, CONG teaches that random forest models can be used for binary segmentation of images, and one of ordinary skill in the art would expect similar effects if such a model were substituted for the combination of Neumann, BRADA, and PENG’s first model that performs binary segmentation.
Regarding claim 8, the combination of Neumann, BRADA, and PENG teaches the method of claim 1, in addition BRADA further teaches wherein the first machine learning logic ([0022]: “During a first stage, the system herein may input the initial image (e.g., a three-channel RGB image, a single-channel gray scale image, depending on the nature of the problem at hand, etc.) into a U-net architecture that creates a rough segmentation map for the pixels of the image”), and the second machine learning logic comprises a neural network ([0005]: “The refinement network may include a neural network that is trained”).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify Neumann’s segmentation process to include BRADA’s 2 model segmentation method because such a modification is taught, suggested, or motivated by the art. More specifically, the motivation to modify Neumann to include BRADA is expressly provided by BRADA, stating that use of two models in this way generates a segmentation with improved accuracy ([0053]: “executing of the refinement predictive model may significantly improve the accuracy of the initial segmentation map”). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Neumann’s segmentation process to include BRADA’s 2 model segmentation method with the motivation of improving segmentation accuracy. The person of ordinary skill in the art would have recognized the benefit of improved segmentation accuracy.
The combination of Neumann, BRADA, and PENG does not expressly disclose the first machine learning logic comprises a random forest model.
However, CONG teaches a random forest model for binary segmentation (Page 7, step 3.1: “classifying a new pair of ultrasonic images by using the random forest model trained in the step 2 to realize binary segmentation”).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to substitute the combination of Neumann, BRADA, and PENG’s first model that performs binary segmentation with CONG’s random forest model that performs binary segmentation because such a modification is the result of the simple substitution of one known element for another to produce a predictable result. More specifically, CONG teaches that random forest models can be used for binary segmentation of images, and one of ordinary skill in the art would expect similar effects if such a model were substituted for the combination of Neumann, BRADA, and PENG’s first model that performs binary segmentation.
Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Neumann et al. (“3D analysis of high-aspect ratio features in 3D-NAND,” hereinafter “Neumann”) in view of BRADA et al. (US 20200294239 A1, hereinafter “BRADA”), further in view of PENG (US 20210182622 A1, hereinafter “PENG”), and further in view of Liba et al. (US 20220230323 A1, hereinafter “Liba”).
Regarding claim 4, the combination of Neumann, BRADA, and PENG teaches the method of claim 1, further comprising
While PENG does disclose the idea of using corrected data for training a model ([0115]: “if the annotator is satisfied with only a part (but not all) of preliminary result 110, the annotator moves the satisfactory part only of preliminary result 110 into annotated image window 104, then uses annotation tools 106 of annotated image window 104 to correct or complete segmentation and identification”. The corrected segmentation image acts as a ground truth image, “When the annotator has either completed the annotation of a partially satisfactory preliminary result 110 or annotated the image him- or herself, the set of annotated original images 114 are now in annotated image window 104, and are stored as ground truth images 66” [0117]. These ground truth images are used to train a model “After the one or more ground truth images 66 are collected in this manner, a segmentation and identification model is trained using the annotated original images, that is, ground truth images 66”), the combination of Neumann, BRADA, and PENG does not expressly disclose re-training the first model with it.
However, Liba teaches re-training a model with corrected segmentation data ([0056]: “The inference results that are not accurately segmented by the machine-learned model 200 can be manually or computationally segmented again and the machine-learned model 200 can be re-trained with the corrected images”).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of Neumann, BRADA, and PENG’s first model to include Liba’s re-training of a model using corrected image data from the model because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, Liba’s re-training of a model using corrected image data from the model permits an improved model using improved training data: “The active-learning pipeline can be executed again, using additional training data, to improve the machine-learned model 200” ([0056]). This known benefit in Liba is applicable to the combination of Neumann, BRADA, and PENG’s first model as they both share characteristics and capabilities, namely, they are directed to models trained to segment images. Therefore, it would have been recognized that modifying the combination of Neumann, BRADA, and PENG’s first model to include Liba’s re-training of a model using corrected image data from the model would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate Liba’s re-training of a model using corrected image data from the model in models trained to segment images and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.
Regarding claim 5, the combination of Neumann, BRADA, PENG, and Liba teaches the
method of claim 4, in addition, BRADA further teaches wherein (Fig. 3, the data output from the first model that will be annotated (230) is generated after the first model is trained. This is an aspect of their 2 model process).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify Neumann’s segmentation process to include BRADA’s 2 model segmentation method because such a modification is taught, suggested, or motivated by the art. More specifically, the motivation to modify Neumann to include BRADA is expressly provided by BRADA, stating that use of two models in this way generates a segmentation with improved accuracy ([0053]: “executing of the refinement predictive model may significantly improve the accuracy of the initial segmentation map”). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Neumann’s segmentation process to include BRADA’s 2 model segmentation method with the motivation of improving segmentation accuracy. The person of ordinary skill in the art would have recognized the benefit of improved segmentation accuracy.
PENG further teaches that this annotation, which acts as training data, is multi-level ([0101]: “For example, if the model is to be trained for segmenting bone from non-bone material, the annotation is required to differentiate and identify the pixels/voxels of bone from those of non-bone and thus comprise segmentation data; if the model goes further such that it is necessary to identify each piece of bone, the annotation should also differentiate and label the pixels/voxels of each bone and thus additionally include identification data”. PENG teaches the concept of coarse and more refined segmentation, where the coarse segmentation is binary (bone vs non-bone) and the refined is multi-level (identifying each bone). BRADA additionally teaches the need to use two models to perform a coarse and more refined version of segmentation, so using PENG’s definition of coarse and refined segmentation, it would result in two machine models where the first model would be trained using binary data to perform binary segmentation and the second model would be trained using multi-level annotation data to perform multi-level segmentation. If these two models are used for Neumann’s rings of cross sections of high aspect ratio structures, it would result in the claimed language. Hence, the combination of Neumann, BRADA, and PENG teaches the claimed limitations not taught by the combination of Neumann and BRADA as described previously).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of Neumann and BRADA’s two-model segmentation system to include PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, the combination of Neumann and BRADA’s two-model segmentation system, as modified by PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation, yields the predictable result of two models for segmentation, where the first model performs binary segmentation and the second performs multi-level segmentation for overall improved segmentation. The combination of Neumann and BRADA already teaches a two-model segmentation process using a coarse model and a more refined model; PENG simply defines methods that can be used for the coarse segmentation (binary segmentation) and the more refined segmentation (multi-level segmentation). Thus, a person of ordinary skill would have appreciated including in the combination of Neumann and BRADA’s two-model segmentation system the ability to perform PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Liba further teaches re-training a model with corrected segmentation data ([0056]: “The inference results that are not accurately segmented by the machine-learned model 200 can be manually or computationally segmented again and the machine-learned model 200 can be re-trained with the corrected images”. Under the current combination, the data that is sent to BRADA’s second model for multi-level annotation would only be output after the re-training has taken place in the first model; hence, the combination of Neumann, BRADA, PENG, and Liba teaches the idea of multi-level annotation only taking place after the re-training step).
The rationale for combination is similar to the one set forth in the claim 4 rejection, because the annotation is already based on the output of the first model (as taught in BRADA Fig. 3) and the image must be multi-level annotated for training the second model (as taught by PENG). Liba’s teaching that the first model is to be re-trained thus teaches that the annotation will only take place after the re-training has occurred. Hence, the methods of combination and the benefits are similar to those of the claim 4 combination with respect to Liba.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Neumann et al. (“3D analysis of high-aspect ratio features in 3D-NAND,” hereinafter “Neumann”) in view of BRADA et al. (US 20200294239 A1, hereinafter “BRADA”), further in view of PENG (US 20210182622 A1, hereinafter “PENG”), and further in view of SHI et al. (US 20210339743 A1, hereinafter “SHI”).
Regarding claim 6, the combination of Neumann, BRADA, and PENG teaches the method of claim 1, in addition, BRADA further teaches wherein:
training the second machine learning logic is based on a first part of the ([0022]: “Next, a second stage of the segmentation process may refine the initially generated segmentation map using a refinement network (also referred to herein as a cascaded network) that is trained based on synthetic (e.g., artificially created) images”);
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of Neumann, BRADA, and PENG’s 2 model process to include BRADA’s training of the second model because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, BRADA’s training of the second model permits improved model performance. This known benefit in BRADA is applicable to the combination of Neumann, BRADA, and PENG as they both share characteristics and capabilities, namely, they are directed to models for segmentation. Therefore, it would have been recognized that modifying the combination of Neumann, BRADA, and PENG’s 2 model process to include BRADA’s training of the second model would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate BRADA’s training of the second model in models for segmentation and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.
PENG further teaches use of a multi-level annotated image ([0101]: “For example, if the model is to be trained for segmenting bone from non-bone material, the annotation is required to differentiate and identify the pixels/voxels of bone from those of non-bone and thus comprise segmentation data; if the model goes further such that it is necessary to identify each piece of bone, the annotation should also differentiate and label the pixels/voxels of each bone and thus additionally include identification data”).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of Neumann and BRADA’s two-model segmentation system to include PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, the combination of Neumann and BRADA’s two-model segmentation system, as modified by PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation, yields the predictable result of two models for segmentation, where the first model performs binary segmentation and the second performs multi-level segmentation for overall improved segmentation. The combination of Neumann and BRADA already teaches a two-model segmentation process using a coarse model and a more refined model; PENG simply defines methods that can be used for the coarse segmentation (binary segmentation) and the more refined segmentation (multi-level segmentation). Thus, a person of ordinary skill would have appreciated including in the combination of Neumann and BRADA’s two-model segmentation system the ability to perform PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
The combination of Neumann, BRADA, and PENG does not expressly disclose using a training set and a test set for the second model’s multi-level annotated images.
However, SHI teaches using a test and a training set for training and testing operations of models ([0059]: “The obtained training data is divided into training set data, validation set data and test set data, which effectively prevents over-fitting of the model and further improves the reliability of the established DNN model of the vehicle”).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of Neumann, BRADA, and PENG’s annotation data to include SHI’s use of a training and testing dataset because such a modification is taught, suggested, or motivated by the art. More specifically, the motivation to modify the combination of Neumann, BRADA, and PENG to include SHI is expressly provided by SHI, stating that splitting data into training and testing sets results in a more reliable model ([0059]: “The obtained training data is divided into training set data, validation set data and test set data, which effectively prevents over-fitting of the model and further improves the reliability of the established DNN model of the vehicle”). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was effectively filed to modify the combination of Neumann, BRADA, and PENG’s annotation data to include SHI’s use of a training and testing dataset with the motivation of improving model reliability. The person of ordinary skill in the art would have recognized the benefit of improved model reliability.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Neumann et al. (“3D analysis of high-aspect ratio features in 3D-NAND” Hereinafter “Neumann”) in view of BRADA et al. (US 20200294239 A1 Hereinafter “BRADA”) in further view of PENG (US 20210182622 A1 Hereinafter “PENG”) in further view of CONG et al. (CN 202010079085 Hereinafter “CONG”) in further view of Liba et al. (US 20220230323 A1 Hereinafter “Liba”).
Regarding claim 9, the combination of Neumann, BRADA, PENG, and CONG teaches the method of claim 8.
While PENG does disclose the idea of using corrected data for training a model ([0115]: “if the annotator is satisfied with only a part (but not all) of preliminary result 110, the annotator moves the satisfactory part only of preliminary result 110 into annotated image window 104, then uses annotation tools 106 of annotated image window 104 to correct or complete segmentation and identification”; the corrected segmentation image acts as a ground truth image, “When the annotator has either completed the annotation of a partially satisfactory preliminary result 110 or annotated the image him- or herself, the set of annotated original images 114 are now in annotated image window 104, and are stored as ground truth images 66” [0117]; these ground truth images are used to train a model, “After the one or more ground truth images 66 are collected in this manner, a segmentation and identification model is trained using the annotated original images, that is, ground truth images 66”), the combination of Neumann, BRADA, PENG, and CONG does not expressly disclose re-training the first model with such corrected data.
However, Liba teaches re-training a model with corrected segmentation data ([0056]: “The inference results that are not accurately segmented by the machine-learned model 200 can be manually or computationally segmented again and the machine-learned model 200 can be re-trained with the corrected images”).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of Neumann, BRADA, PENG, and CONG’s first model to include Liba’s re-training of a model using corrected image data from the model because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, Liba’s re-training of a model using corrected image data from the model permits an improved model using additional training data: “The active-learning pipeline can be executed again, using additional training data, to improve the machine-learned model 200” ([0056]). This known benefit in Liba is applicable to the combination of Neumann, BRADA, PENG, and CONG’s first model as they both share characteristics and capabilities, namely, they are directed to models trained to segment images. Therefore, it would have been recognized that modifying the combination of Neumann, BRADA, PENG, and CONG’s first model to include Liba’s re-training of a model using corrected image data from the model would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate Liba’s re-training of a model using corrected image data from the model in models trained to segment images and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Neumann et al. (“3D analysis of high-aspect ratio features in 3D-NAND” Hereinafter “Neumann”) in view of BRADA et al. (US 20200294239 A1 Hereinafter “BRADA”) in further view of PENG (US 20210182622 A1 Hereinafter “PENG”) in further view of CONG et al. (CN 202010079085 Hereinafter “CONG”) in further view of Liba et al. (US 20220230323 A1 Hereinafter “Liba”) in further view of SHI et al. (US 20210339743 A1 Hereinafter “SHI”).
Regarding claim 10, the combination of Neumann, BRADA, PENG, CONG, and Liba teaches the method of claim 9; in addition, BRADA further teaches wherein:
training the second machine learning logic is based on a first part of the ([0022]: “Next, a second stage of the segmentation process may refine the initially generated segmentation map using a refinement network (also referred to herein as a cascaded network) that is trained based on synthetic (e.g., artificially created) images”);
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of Neumann, BRADA, and PENG’s two-model process to include BRADA’s training of the second model because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, BRADA’s training of the second model permits improved model performance. This known benefit in BRADA is applicable to the combination of Neumann, BRADA, and PENG as they both share characteristics and capabilities, namely, they are directed to models for segmentation. Therefore, it would have been recognized that modifying the combination of Neumann, BRADA, and PENG’s two-model process to include BRADA’s training of the second model would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate BRADA’s training of the second model in models for segmentation and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.
PENG further teaches the use of multi-level annotated images ([0101]: “For example, if the model is to be trained for segmenting bone from non-bone material, the annotation is required to differentiate and identify the pixels/voxels of bone from those of non-bone and thus comprise segmentation data; if the model goes further such that it is necessary to identify each piece of bone, the annotation should also differentiate and label the pixels/voxels of each bone and thus additionally include identification data”).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of Neumann and BRADA’s two-model segmentation system to include PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, the combination of Neumann and BRADA’s two-model segmentation system as modified by PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation can yield a predictable result of two models for segmentation, where the first model performs binary segmentation and the second performs multi-level segmentation for overall improved segmentation, since the combination of Neumann and BRADA already teaches a two-model segmentation process using a coarse and a more refined model; PENG simply defines methods that can be used for the coarse segmentation (binary segmentation) and the more refined segmentation (multi-level segmentation). Thus, a person of ordinary skill would have appreciated including in the combination of Neumann and BRADA’s two-model segmentation system the ability to perform PENG’s binary annotation for training and binary segmentation alongside multi-level annotation for training and multi-level segmentation, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
The combination of Neumann, BRADA, PENG, CONG, and Liba does not expressly disclose using a training set and a test set for the second model’s multi-level annotated images.
However, SHI teaches using a test and a training set for training and testing operations of models ([0059]: “The obtained training data is divided into training set data, validation set data and test set data, which effectively prevents over-fitting of the model and further improves the reliability of the established DNN model of the vehicle”).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of Neumann, BRADA, PENG, CONG, and Liba’s annotation data to include SHI’s use of a training and testing dataset because such a modification is taught, suggested, or motivated by the art. More specifically, the motivation to modify the combination of Neumann, BRADA, PENG, CONG, and Liba to include SHI is expressly provided by SHI, stating that splitting data into training and testing sets results in a more reliable model ([0059]: “The obtained training data is divided into training set data, validation set data and test set data, which effectively prevents over-fitting of the model and further improves the reliability of the established DNN model of the vehicle”). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was effectively filed to modify the combination of Neumann, BRADA, PENG, CONG, and Liba’s annotation data to include SHI’s use of a training and testing dataset with the motivation of improving model reliability. The person of ordinary skill in the art would have recognized the benefit of improved model reliability.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Subrahmanyan (US 20200173772 A1) teaches radial cross sections of HAR features.
STEIMAN (US 20220036538 A1) teaches training a model to generate a segmentation map, and inputting the map and training data to find second features (image features).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEFANO A DARDANO whose telephone number is (703) 756-4543. The examiner can normally be reached Monday - Friday 11:00 - 7:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Greg Morse can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEFANO ANTHONY DARDANO/ Examiner, Art Unit 2663
/GREGORY A MORSE/ Supervisory Patent Examiner, Art Unit 2698