DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the Amendment filed on March 25, 2026. Claims 1-20 are pending in the case. Claims 1, 15, 17, and 19 are amended. Claims 1, 15, and 19 are the independent claims.
This action is non-final.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 25, 2026 has been entered.
Applicant’s Response
In the Amendment filed on March 25, 2026, Applicant amended the claims and provided arguments in response to the rejection of the claims under 35 USC 103 in the previous Office action.
Response to Argument/Amendment
Applicant’s amendments to the claims in response to the rejection of the claims under 35 USC 103 in the previous Office action are acknowledged, and Applicant’s associated arguments have been fully considered. Applicant appears to note that Moloney and the instant Application have different motivations, and that the claims have been amended to recite additional limitations which are not taught by the previously cited references, including at least “the sparse dataset characterized as not sufficient for classification” and “selecting features which are a subset of the sparse dataset by random feature selection.” Applicant additionally appears to argue that Ozcan’s teachings are not analogous to “creating one or more batches of augmented data by adding the synthetic data to the sparse dataset.” However, Applicant’s arguments are moot in view of the new grounds of rejection provided below.
Claim Rejections – 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).
Claims 1-3, 5-8, 10-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Moloney et al. (US 20210201526 A1) in view of Ozcan et al. (US 20190333199 A1), further in view of Bannerjee et al. (US 20210201003 A1), further in view of Nakazawa et al. (US 20200380302 A1).
With respect to claim 1, Moloney teaches an edge device (e.g. paragraph 0158, Figs. 47-52 show exemplary computer architectures to be used in accordance with described embodiments; paragraph 0172, Fig. 48, mesh network of IoT devices operating as fog device at the edge of cloud computing network; paragraph 0174, IoT devices include gateways 4804, which may be edge devices that provide communication and may also provide the backend process function for data obtained from sensors; data aggregators collect data from sensors and perform back end processing function for the analysis; sensors may be full IoT devices, capable of both collecting data and processing the data; paragraph 0176, fog provided from IoT devices may be presented as a single device located at the edge of the cloud, and may perform tasks including machine learning) that is configured to execute machine learning procedures with a sparse dataset (e.g. paragraphs 0075-0076, sparse volumetric representations; octree representation embodying sparse representation storing only voxels for which there is actual geometry in the real world scene; sparse voxel octree; paragraph 0084, performing classifications using CNN on octree representations; same octree voxel structure may contain same information at multiple levels of detail; single training dataset based on volumetric data models that cover all levels of detail; i.e. a dataset used for training a machine learning model which comprises sparse representations, and is therefore a sparse dataset), the edge device comprising:
one or more sensor interfaces (e.g. paragraph 0174, sensors 4828);
one or more microcontrollers (MCUs) (e.g. paragraph 0174, gateways, data aggregators, and sensors, each capable of providing processing functions; paragraph 0178, IoT device, instructions executed to cause the electronic processing system to perform described methods; any machine capable of executing instructions; processor based systems; control by processor to execute instructions to execute described methodologies; paragraph 0201, processor is type of hardware device used in connection with described implementations);
one or more memories in communication with the one or more microcontrollers, wherein the one or more memories contain one or more executable instructions that cause the one or more microcontrollers to perform operations (e.g. paragraph 0178, IoT device, instructions executed to cause the electronic processing system to perform described methods; any machine capable of executing instructions; processor based systems; control by processor to execute instructions to execute described methodologies; paragraphs 0199-0205, storage/memory including instructions to implement described techniques, executed by processor) that include at least:
receiving one or more batches of real-time sensor data via the one or more sensor interfaces, the one or more batches defining the sparse dataset (e.g. paragraph 0059, use of volumetric data-based systems having high data processing rates, such as up to 130 fps/7 msecs; paragraph 0060, sparse volumetric data of object captured using optical sensor; paragraph 0062, volumetric data structure allowing for updating/use in real time; paragraph 0063, volumetric data structure allowing for warnings of impending collisions; paragraph 0064, use of volumetric data in applications such as robotics, head mounted displays for AR/VR, phones/tablets, etc.; paragraph 0069, measured geometry voxels constructed using SLAM pipeline which uses active and passive sensors; paragraph 0072, processing volumetric scene data from output of SLAM pipeline; paragraph 0075, real world object/scene embodied as sparse voxel representation/volumetric model; paragraph 0084, single training dataset based on volumetric models; paragraph 0107, real training samples (i.e. training samples/data based on real world information));
creating one or more batches of augmented data by adding synthetic data to the sparse dataset (e.g. paragraph 0098, synthetic training sets developed; synthetic training data combined with other training data to form a training data set at least partially composed of synthetic training data; paragraph 0107, upon generating training data samples, the samples may be added to or included with other real or synthetically-generated training samples to build training data set for deep learning model); and
training a machine learning procedure using the augmented data (e.g. paragraph 0098, synthetic training sets utilized to train neural network or other deep reinforcement learning models; training data set at least partially composed of synthetic training data utilized to train machine learning models; paragraph 0107, training model).
Moloney does not explicitly disclose where the augmented data is at least five times greater than the sparse data. However, Ozcan teaches where the augmented data is at least five times greater than the sparse data (e.g. paragraph 0080, augmenting training dataset to effectively increase the training data size by six-fold).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney and Ozcan in front of him to have modified the teachings of Moloney (directed to a deep learning system), to incorporate the teachings of Ozcan (directed to deep learning microscopy using a training dataset of images) to utilize the capability to augment the sparse data to generate the augmented data, such that the augmented data is six times greater (i.e. a six-fold increase) than the sparse data (as taught by Ozcan). One of ordinary skill would have been motivated to perform such a modification in order to provide an increased amount of training data, allowing rapid training of the deep neural network while at the same time containing distinct sample features in each patch, further allowing determination of the best network model, and helping to avoid overfitting to the training image data as described in Ozcan (paragraphs 0080-0081).
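For illustration only, the augmentation relied upon above (generating synthetic copies so that the augmented set is at least five times, e.g. six-fold, the size of the sparse data) might be sketched as follows. This sketch is not drawn from any cited reference; the use of NumPy, the array shapes, the noise scale, and the function name are all assumptions made solely for illustration.

```python
import numpy as np

def augment_sixfold(sparse_data: np.ndarray, noise_scale: float = 0.01,
                    copies: int = 5, seed: int = 0) -> np.ndarray:
    """Illustrative sketch: append five perturbed copies of the sparse
    data to the original, yielding an augmented set six times its size."""
    rng = np.random.default_rng(seed)
    # Each synthetic copy is the original data plus zero-mean noise.
    synthetic = [sparse_data + rng.normal(0.0, noise_scale, sparse_data.shape)
                 for _ in range(copies)]
    # Augmented data = original sparse data + synthetic copies (6x total).
    return np.concatenate([sparse_data] + synthetic, axis=0)

batch = np.ones((10, 4))          # hypothetical sparse batch
augmented = augment_sixfold(batch)
assert augmented.shape[0] == 6 * batch.shape[0]
```

Under these assumptions, the augmented output satisfies the "at least five times greater" size relationship attributed to Ozcan's six-fold increase.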
Moloney and Ozcan do not explicitly disclose
the sparse dataset characterized as not sufficient for classification;
selecting features which are a subset of the sparse dataset;
adding white noise to the selected features from the sparse dataset to form synthetic data; and
that the augmented data is generated by adding this synthetic data to the sparse dataset.
However, Bannerjee teaches
the sparse dataset characterized as not sufficient for classification (e.g. paragraphs 0039-0040, imbalance/sparsity of training dataset; generating synthetic data to provide or fill in otherwise sparse training data such that a classifier trained using the training data can be better balanced; training dataset is imbalanced, and may provide sufficient examples for some types of data while being sparse or insufficient for other types of data, such as a training data set of images of facial expressions which is sufficient with respect to alert samples, but insufficient with respect to drowsy samples);
selecting features which are a subset of the sparse dataset (e.g. paragraph 0049, feature selector selecting which of the plurality of values associated with the features can be used for training of a component such as a GAN which can be used to generate synthetic vectors for further training, for other training datasets, etc.);
adding white noise to the selected features from the sparse dataset to form synthetic data (e.g. paragraph 0040, generating synthetic data from noise vector which provides a random number or seed; paragraph 0045, generating sets of synthetic vectors based on noise values/seeds); and
that the augmented data is generated by adding this synthetic data to the sparse dataset (e.g. paragraph 0045, adding generated vectors to original dataset to augment the sparse class and balance distribution of the training).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, and Bannerjee in front of him to have modified the teachings of Moloney (directed to a deep learning system) and Ozcan (directed to deep learning microscopy using a training dataset of images), to incorporate the teachings of Bannerjee (directed to synthetic data for neural network training using vectors) to utilize the capability to characterize the sparse dataset as not sufficient for classification, and to select features which are a subset of the sparse dataset. One of ordinary skill would have been motivated to perform such a modification in order to provide or fill in otherwise sparse training data such that a classifier trained using the training data can be better balanced, and better trained to perform classification tasks as described in Bannerjee (paragraph 0039).
Moloney, Ozcan, and Bannerjee do not explicitly disclose selecting features by random feature selection. However, Nakazawa teaches selecting features by random feature selection (e.g. paragraphs 0026-0030, input data is data to be input to machine learning model; feature portion is a portion that is a feature of the input data; in the input data a plurality of feature portions may exist; paragraphs 0114-0118, randomly selecting plurality of portions to be processed from among the feature portion; Fig. 9, illustrating how portions to be processed are randomly selected; randomly selecting portions to be processed from among the feature portion allows selection of plurality of portions to be processed that are different from each other based on relatively simple processing, reducing processing load, and allowing efficient data augmentation to be implemented; acquiring plurality of processed images based on the identified feature portions, and performing data augmentation based on the identified images).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, Bannerjee, and Nakazawa in front of him to have modified the teachings of Moloney (directed to a deep learning system), Ozcan (directed to deep learning microscopy using a training dataset of images), and Bannerjee (directed to synthetic data for neural network training using vectors), to incorporate the teachings of Nakazawa (directed to data augmentation) to utilize the capability to select the features (as taught by Bannerjee) by random feature selection (as taught by Nakazawa). One of ordinary skill would have been motivated to perform such a modification in order to allow selection of features to be processed that are different from each other based on relatively simple processing, reducing processing load, and allowing efficient data augmentation to be implemented as described in Nakazawa (paragraph 0117).
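For illustration only, the combined augmentation pipeline addressed by the claim 1 rejection above (random selection of a subset of features, addition of white noise to the selected features to form synthetic data, and addition of the synthetic data to the sparse dataset) might be sketched as follows. This sketch is not drawn from any cited reference; the use of NumPy, the array shapes, the noise scale, and the function name are assumptions made solely for illustration.

```python
import numpy as np

def make_augmented(sparse: np.ndarray, n_features: int,
                   noise_scale: float = 0.05, seed: int = 0) -> np.ndarray:
    """Illustrative sketch: randomly select a subset of feature columns,
    perturb only those columns with zero-mean white (Gaussian) noise to
    form synthetic samples, then append the synthetic samples to the
    original sparse dataset to form the augmented data."""
    rng = np.random.default_rng(seed)
    # Random feature selection: a subset of columns, without replacement.
    cols = rng.choice(sparse.shape[1], size=n_features, replace=False)
    synthetic = sparse.copy()
    # Add white noise only to the randomly selected features.
    synthetic[:, cols] += rng.normal(0.0, noise_scale,
                                     (sparse.shape[0], n_features))
    # Augmented data = sparse dataset + synthetic data.
    return np.concatenate([sparse, synthetic], axis=0)

data = np.zeros((8, 6))           # hypothetical sparse dataset
aug = make_augmented(data, n_features=2)
assert aug.shape == (16, 6)
```

Under these assumptions, each synthetic sample differs from its source only in the randomly selected feature columns, and the original sparse samples are retained unmodified in the augmented set.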
With respect to claim 15, Moloney teaches a mobile handheld computing device (e.g. paragraph 0158, Figs. 47-52 show exemplary computer architectures to be used in accordance with described embodiments; paragraph 0172, Fig. 48, mesh network of IoT devices operating as fog device at the edge of cloud computing network; paragraph 0174, IoT devices include gateways 4804, which may be edge devices that provide communication and may also provide the backend process function for data obtained from sensors; data aggregators collect data from sensors and perform back end processing function for the analysis; sensors may be full IoT devices, capable of both collecting data and processing the data; paragraph 0176, fog provided from IoT devices may be presented as a single device located at the edge of the cloud, and may perform tasks including machine learning; paragraph 0178, IoT device/gateway embodied by aspects of tablet PC, PDA, mobile telephone or smartphone, etc.) that is configured to execute machine learning procedures with a sparse dataset from at least one sensor (e.g. paragraph 0060, data from optical sensor; paragraphs 0075-0076, sparse volumetric representations; octree representation embodying sparse representation storing only voxels for which there is actual geometry in the real world scene; sparse voxel octree; paragraph 0084, performing classifications using CNN on octree representations; same octree voxel structure may contain same information at multiple levels of detail; single training dataset based on volumetric data models that cover all levels of detail; i.e. a dataset used for training a machine learning model which comprises sparse representations, and is therefore a sparse dataset), the mobile handheld computing device comprising:
at least a receiver (e.g. paragraph 0174, IoT devices include functionality/components for communications, obtaining/collecting data from sensors, etc.; i.e. the devices include hardware for communication with other devices, including receiving sensor data, analogous to a receiver; paragraph 0188, IoT device having a transceiver);
one or more processing devices (e.g. paragraph 0174, gateways, data aggregators, and sensors, each capable of providing processing functions; paragraph 0178, IoT device, instructions executed to cause the electronic processing system to perform described methods; any machine capable of executing instructions; processor based systems; control by processor to execute instructions to execute described methodologies; paragraph 0201, processor is type of hardware device used in connection with described implementations);
one or more memories in communication with the one or more processing devices, wherein the one or more memories contain one or more executable instructions that cause the one or more processing devices to perform operations (e.g. paragraph 0178, IoT device, instructions executed to cause the electronic processing system to perform described methods; any machine capable of executing instructions; processor based systems; control by processor to execute instructions to execute described methodologies; paragraphs 0199-0205, storage/memory including instructions to implement described techniques, executed by processor) that include at least:
receiving the sparse data via the receiver from one or more mobile devices (e.g. paragraph 0059, use of volumetric data-based systems having high data processing rates, such as up to 130 fps/7 msecs; paragraph 0060, sparse volumetric data of object captured using optical sensor; paragraph 0062, volumetric data structure allowing for updating/use in real time; paragraph 0063, volumetric data structure allowing for warnings of impending collisions; paragraph 0064, use of volumetric data in applications such as robotics, head mounted displays for AR/VR, phones/tablets, etc.; paragraph 0069, measured geometry voxels constructed using SLAM pipeline which uses active and passive sensors; paragraph 0072, processing volumetric scene data from output of SLAM pipeline; paragraph 0075, real world object/scene embodied as sparse voxel representation/volumetric model; paragraph 0084, single training dataset based on volumetric models; paragraph 0107, real training samples (i.e. training samples/data based on real world information));
creating unique synthetic data sets from the sparse data and creating augmented data by adding the unique synthetic datasets to generate the augmented data (e.g. paragraph 0098, synthetic training sets developed; synthetic training data combined with other training data to form a training data set at least partially composed of synthetic training data; paragraph 0107, upon generating training data samples, the samples may be added to or included with other real or synthetically-generated training samples to build training data set for deep learning model); and
training one or more machine learning models with the augmented data, wherein the augmented data has a greater variety of features compared with the sparse data (e.g. paragraph 0098, synthetic training sets utilized to train neural network or other deep reinforcement learning models; training data set at least partially composed of synthetic training data utilized to train machine learning models; paragraph 0099, using 3D model to generate a variety of different views of a given subject or even a collection of different subjects (such as varying combinations of products positioned next to each other) to generate the synthetic set of training data; paragraph 0101, using 3D model to capture a number and variety of images to satisfy a complete and diverse collection of images to capture the subject; paragraph 0104, capturing a variety of views of the 3D model including views in varied lighting, environments, and conditions; processing with sensor filter to degrade images to generate true to life images; paragraph 0107, training model).
Moloney does not explicitly disclose where the augmented data is at least five times greater than the sparse data. However, Ozcan teaches where the augmented data is at least five times greater than the sparse data (e.g. paragraph 0080, augmenting training dataset to effectively increase the training data size by six-fold).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney and Ozcan in front of him to have modified the teachings of Moloney (directed to a deep learning system), to incorporate the teachings of Ozcan (directed to deep learning microscopy using a training dataset of images) to utilize the capability to augment the sparse data to generate the augmented data, such that the augmented data is six times greater (i.e. a six-fold increase) than the sparse data (as taught by Ozcan). One of ordinary skill would have been motivated to perform such a modification in order to provide an increased amount of training data, allowing rapid training of the deep neural network while at the same time containing distinct sample features in each patch, further allowing determination of the best network model, and helping to avoid overfitting to the training image data as described in Ozcan (paragraphs 0080-0081).
Moloney and Ozcan do not explicitly disclose
the sparse dataset insufficient for classification;
the creating the unique synthetic data sets is by identifying feature embeddings in the sparse data;
adding white noise to the identified feature embeddings to create the unique synthetic data sets; and
that the augmented data is created by adding the unique synthetic data sets.
However, Bannerjee teaches
the sparse dataset insufficient for classification (e.g. paragraphs 0039-0040, imbalance/sparsity of training dataset; generating synthetic data to provide or fill in otherwise sparse training data such that a classifier trained using the training data can be better balanced; training dataset is imbalanced, and may provide sufficient examples for some types of data while being sparse or insufficient for other types of data, such as a training data set of images of facial expressions which is sufficient with respect to alert samples, but insufficient with respect to drowsy samples);
the creating the unique synthetic data sets is by identifying feature embeddings in the sparse data (e.g. paragraph 0049, feature selector selecting which of the plurality of values associated with the features can be used for training of a component such as a GAN which can be used to generate synthetic vectors for further training, for other training datasets, etc.);
adding white noise to the identified feature embeddings to create the unique synthetic data sets (e.g. paragraph 0040, generating synthetic data from noise vector which provides a random number or seed; paragraph 0045, generating sets of synthetic vectors based on noise values/seeds); and
that the augmented data is created by adding the unique synthetic data sets (e.g. paragraph 0045, adding generated vectors to original dataset to augment the sparse class and balance distribution of the training).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, and Bannerjee in front of him to have modified the teachings of Moloney (directed to a deep learning system) and Ozcan (directed to deep learning microscopy using a training dataset of images), to incorporate the teachings of Bannerjee (directed to synthetic data for neural network training using vectors) to utilize the capability to characterize the sparse dataset as insufficient for classification, and to identify feature embeddings in the sparse data. One of ordinary skill would have been motivated to perform such a modification in order to provide or fill in otherwise sparse training data such that a classifier trained using the training data can be better balanced, and better trained to perform classification tasks as described in Bannerjee (paragraph 0039).
Moloney, Ozcan, and Bannerjee do not explicitly disclose identifying feature embeddings by random feature selection for each unique synthetic data set. However, Nakazawa teaches identifying feature embeddings by random feature selection for each unique synthetic data set (e.g. paragraphs 0026-0030, input data is data to be input to machine learning model; feature portion is a portion that is a feature of the input data; in the input data a plurality of feature portions may exist; paragraphs 0114-0118, randomly selecting plurality of portions to be processed from among the feature portion; Fig. 9, illustrating how portions to be processed are randomly selected; randomly selecting portions to be processed from among the feature portion allows selection of plurality of portions to be processed that are different from each other based on relatively simple processing, reducing processing load, and allowing efficient data augmentation to be implemented; acquiring plurality of processed images based on the identified feature portions, and performing data augmentation based on the identified images).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, Bannerjee, and Nakazawa in front of him to have modified the teachings of Moloney (directed to a deep learning system), Ozcan (directed to deep learning microscopy using a training dataset of images), and Bannerjee (directed to synthetic data for neural network training using vectors), to incorporate the teachings of Nakazawa (directed to data augmentation) to utilize the capability to select the features (as taught by Bannerjee) by random feature selection (as taught by Nakazawa). One of ordinary skill would have been motivated to perform such a modification in order to allow selection of features to be processed that are different from each other based on relatively simple processing, reducing processing load, and allowing efficient data augmentation to be implemented as described in Nakazawa (paragraph 0117).
With respect to claim 19, Moloney teaches a resource-constrained edge device (e.g. paragraph 0133, resource constrained inference edge devices; using network compression techniques to lower compute and memory demands; paragraph 0158, Figs. 47-52 show exemplary computer architectures to be used in accordance with described embodiments; paragraph 0172, Fig. 48, mesh network of IoT devices operating as fog device at the edge of cloud computing network; paragraph 0174, IoT devices include gateways 4804, which may be edge devices that provide communication and may also provide the backend process function for data obtained from sensors; data aggregators collect data from sensors and perform back end processing function for the analysis; sensors may be full IoT devices, capable of both collecting data and processing the data; paragraph 0176, fog provided from IoT devices may be presented as a single device located at the edge of the cloud, and may perform tasks including machine learning) that is configured to execute machine learning procedures with a sparse dataset (e.g. paragraphs 0075-0076, sparse volumetric representations; octree representation embodying sparse representation storing only voxels for which there is actual geometry in the real world scene; sparse voxel octree; paragraph 0084, performing classifications using CNN on octree representations; same octree voxel structure may contain same information at multiple levels of detail; single training dataset based on volumetric data models that cover all levels of detail; i.e. a dataset used for training a machine learning model which comprises sparse representations, and is therefore a sparse dataset), the resource-constrained edge device comprising:
one or more sensor interfaces (e.g. paragraph 0174, sensors 4828);
one or more microcontrollers (MCUs) (e.g. paragraph 0174, gateways, data aggregators, and sensors, each capable of providing processing functions; paragraph 0178, IoT device, instructions executed to cause the electronic processing system to perform described methods; any machine capable of executing instructions; processor based systems; control by processor to execute instructions to execute described methodologies; paragraph 0201, processor is type of hardware device used in connection with described implementations);
one or more memories in communication with the one or more microcontrollers, wherein the one or more memories contain one or more executable instructions that cause the one or more microcontrollers to perform operations (e.g. paragraph 0178, IoT device, instructions executed to cause the electronic processing system to perform described methods; any machine capable of executing instructions; processor based systems; control by processor to execute instructions to execute described methodologies; paragraphs 0199-0205, storage/memory including instructions to implement described techniques, executed by processor) that include at least:
receiving one or more batches of real-time sensor data via the one or more sensor interfaces, the one or more batches defining the sparse dataset (e.g. paragraph 0059, use of volumetric data-based systems having high data processing rates, such as up to 130 fps/7 msecs; paragraph 0060, sparse volumetric data of object captured using optical sensor; paragraph 0062, volumetric data structure allowing for updating/use in real time; paragraph 0063, volumetric data structure allowing for warnings of impending collisions; paragraph 0064, use of volumetric data in applications such as robotics, head mounted displays for AR/VR, phones/tablets, etc.; paragraph 0069, measured geometry voxels constructed using SLAM pipeline which uses active and passive sensors; paragraph 0072, processing volumetric scene data from output of SLAM pipeline; paragraph 0075, real world object/scene embodied as sparse voxel representation/volumetric model; paragraph 0084, single training dataset based on volumetric models; paragraph 0107, real training samples (i.e. training samples/data based on real world information));
creating one or more batches of augmented data by adding synthetic data to the sparse dataset (e.g. paragraph 0098, synthetic training sets developed; synthetic training data combined with other training data to form a training data set at least partially composed of synthetic training data; paragraph 0107, upon generating training data samples, the samples may be added to or included with other real or synthetically-generated training samples to build training data set for deep learning model); and
training at least a discriminator at least in part with the one or more batches of augmented data (e.g. paragraph 0098, synthetic training sets utilized to train neural network or other deep reinforcement learning models; training data set at least partially composed of synthetic training data utilized to train machine learning models; paragraph 0107, training model; paragraph 0111, Siamese network utilized as the machine learning model trained using the synthetic training data; deep Siamese networks are a type of two-stream neural network models for discriminative embedding learning).
Moloney does not explicitly disclose each batch of augmented data having a size which is at least five times greater than the sparse dataset. However, Ozcan teaches each batch of augmented data having a size which is at least five times greater than the sparse dataset (e.g. paragraph 0080, augmenting training dataset to effectively increase the training data size by six-fold).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney and Ozcan in front of him to have modified the teachings of Moloney (directed to a deep learning system), to incorporate the teachings of Ozcan (directed to deep learning microscopy using a training dataset of images) to utilize the capability to augment the sparse data to generate the augmented data, such that the augmented data is six times greater (i.e. a six-fold increase) than the sparse data (as taught by Ozcan). One of ordinary skill would have been motivated to perform such a modification in order to provide an increased amount of training data, allowing rapid training of the deep neural network while at the same time containing distinct sample features in each patch, further allowing determination of the best network model, and helping to avoid overfitting to the training image data as described in Ozcan (paragraphs 0080-0081).
Moloney and Ozcan do not explicitly disclose
the sparse dataset characterized as not sufficient for classification;
identifying and extracting features from the sparse dataset;
adding white noise to form synthetic data; and
that the augmented data is created by adding this synthetic data to the sparse dataset.
However, Bannerjee teaches
the sparse dataset characterized as not sufficient for classification (e.g. paragraphs 0039-0040, imbalance/sparsity of training dataset; generating synthetic data to provide or fill in otherwise sparse training data such that a classifier trained using the training data can be better balanced; training dataset is imbalanced, and may provide sufficient examples for some types of data while being sparse or insufficient for other types of data, such as a training data set of images of facial expressions which is sufficient with respect to alert samples, but insufficient with respect to drowsy samples);
identifying and extracting features from the sparse dataset (e.g. paragraph 0049, feature selector selecting which of the plurality of values associated with the features can be used for training of a component such as a GAN which can be used to generate synthetic vectors for further training, for other training datasets, etc.);
adding white noise to form synthetic data (e.g. paragraph 0040, generating synthetic data from noise vector which provides a random number or seed; paragraph 0045, generating sets of synthetic vectors based on noise values/seeds); and
that the augmented data is created by adding this synthetic data to the sparse dataset (e.g. paragraph 0045, adding generated vectors to original dataset to augment the sparse class and balance distribution of the training).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, and Bannerjee in front of him to have modified the teachings of Moloney (directed to a deep learning system) and Ozcan (directed to deep learning microscopy using a training dataset of images), to incorporate the teachings of Bannerjee (directed to synthetic data for neural network training using vectors) to utilize the capability to characterize the sparse dataset as not sufficient for classification, and to select features which are a subset of the sparse dataset. One of ordinary skill would have been motivated to perform such a modification in order to provide or fill in otherwise sparse training data such that a classifier trained using the training data can be better balanced, and better trained to perform classification tasks as described in Bannerjee (paragraph 0039).
Moloney, Ozcan, and Bannerjee do not explicitly disclose identifying and extracting features by random feature selection. However, Nakazawa teaches identifying and extracting features by random feature selection (e.g. paragraphs 0026-0030, input data is data to be input to machine learning model; feature portion is a portion that is a feature of the input data; in the input data a plurality of feature portions may exist; paragraphs 0114-0118, randomly selecting plurality of portions to be processed from among the feature portion; Fig. 9, illustrating how portions to be processed are randomly selected; randomly selecting portions to be processed from among the feature portion allows selection of plurality of portions to be processed that are different from each other based on relatively simple processing, reducing processing load, and allowing efficient data augmentation to be implemented; acquiring plurality of processed images based on the identified feature portions, and performing data augmentation based on the identified images).
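As a purely illustrative sketch of random feature selection of the kind attributed to Nakazawa above (the identifiers and feature names below are hypothetical and appear in no cited reference):

```python
import random

def random_feature_selection(features, k):
    # Randomly select k of the available feature portions; the
    # remainder are simply not processed, keeping the selection
    # logic simple and the processing load low.
    return random.sample(features, k)

features = ["edge", "corner", "texture", "blob", "gradient"]
subset = random_feature_selection(features, 3)
# `subset` is a 3-element random subset of `features`.
```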
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, Bannerjee, and Nakazawa in front of him to have modified the teachings of Moloney (directed to a deep learning system), Ozcan (directed to deep learning microscopy using a training dataset of images), and Bannerjee (directed to synthetic data for neural network training using vectors), to incorporate the teachings of Nakazawa (directed to data augmentation) to utilize the capability to select the features (as taught by Bannerjee) by random feature selection (as taught by Nakazawa). One of ordinary skill would have been motivated to perform such a modification in order to allow selection of features to be processed that are different from each other based on relatively simple processing, reducing processing load, and allowing efficient data augmentation to be implemented as described in Nakazawa (paragraph 0117).
With respect to claim 18, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 15 as previously discussed, and Bannerjee teaches wherein the training one or more machine learning models with the augmented data comprises: training a discriminator of a generative adversarial network with the augmented data; and training a generator of the generative adversarial network at least in part with the trained discriminator (e.g. paragraph 0035, generative adversarial network based on generator network and discriminator network; discriminator trained using training dataset, while the generator is trained based on its ability to fool the discriminator into determining that a synthesized candidate or vector is actually real).
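The discriminator-then-generator training order described in this mapping can be illustrated with a toy, one-dimensional sketch; this is not an implementation of Bannerjee's networks, and all numerics and names are hypothetical:

```python
import random

def train_gan(augmented_data, steps=200, lr=0.05):
    # Toy stand-in for GAN training: in each iteration the
    # 'discriminator' parameter is first nudged using (augmented)
    # real data versus generator output, and the 'generator'
    # parameter is then nudged toward fooling the updated
    # discriminator.
    d_param, g_param = 0.0, 0.0
    for _ in range(steps):
        real = random.choice(augmented_data)
        fake = g_param + random.gauss(0.0, 0.05)
        d_param += lr * (real - fake)        # discriminator step
        g_param += lr * (d_param - g_param)  # generator step
    return d_param, g_param

d_param, g_param = train_gan([1.0, 1.1, 0.9])
```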
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, Nakazawa, and Bannerjee in front of him to have modified the teachings of Moloney (directed to a deep learning system), Nakazawa (directed to data augmentation), and Ozcan (directed to deep learning microscopy using a training dataset of images), to incorporate the teachings of Bannerjee (directed to synthetic data for neural network training) to train a discriminator network of a GAN using the synthetic/augmented data, and then to train a generator network of the GAN using the trained discriminator, such as by training the generator based on whether it can fool the trained discriminator (as taught by Bannerjee). One of ordinary skill would have been motivated to perform such a modification in order to improve efficacy of generation of synthetic vectors/data as described in Bannerjee (paragraphs 0005, 0032).
With respect to claim 20, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 19 as previously discussed, and Moloney further teaches wherein the resource-constrained edge device is an Internet of Things (IoT) device (e.g. paragraph 0133, resource constrained inference edge devices; paragraph 0158, Figs. 47-52 show exemplary computer architectures to be used in accordance with described embodiments; paragraph 0172, Fig. 48, mesh network of IoT devices operating as fog device at the edge of cloud computing network; paragraph 0174, IoT devices include gateways 4804, which may be edge devices that provide communication and may also provide the backend process function for data obtained from sensors; data aggregators collect data from sensors and perform back end processing function for the analysis; sensors may be full IoT devices, capable of both collecting data and processing the data; paragraph 0176, fog provided from IoT devices may be presented as a single device located at the edge of the cloud, and may perform tasks including machine learning).
With respect to claim 2, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 1 as previously discussed, and Moloney further teaches wherein the edge device is a resource-constrained edge device (e.g. paragraph 0133, resource constrained inference edge devices; using network compression techniques to lower compute and memory demands; paragraph 0158, Figs. 47-52 show exemplary computer architectures to be used in accordance with described embodiments; paragraph 0172, Fig. 48, mesh network of IoT devices operating as fog device at the edge of cloud computing network; paragraph 0174, IoT devices include gateways 4804, which may be edge devices that provide communication and may also provide the backend process function for data obtained from sensors; data aggregators collect data from sensors and perform back end processing function for the analysis; sensors may be full IoT devices, capable of both collecting data and processing the data; paragraph 0176, fog provided from IoT devices may be presented as a single device located at the edge of the cloud, and may perform tasks including machine learning).
With respect to claim 3, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 2 as previously discussed, and Moloney further teaches wherein the resource-constrained edge device is configured to perform both training and inference (e.g. paragraph 0133, resource constrained inference edge device; paragraph 0176, IoT/edge device(s) performing tasks including machine learning; i.e. the edge device may utilize the deployed models to perform both inference and training/learning).
With respect to claim 5, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 4 as previously discussed, and Moloney further teaches wherein the edge device is a resource-constrained edge device and is configured to store at least a trained inference model in the one or more memories (e.g. paragraphs 0133-0134, deploying DNN on resource constrained edge device, such as by using network compression techniques to lower compute and memory demands; dynamically/automatically reducing size of neural networks for use by particular machine learning hardware; reducing size of neural network to be stored and operated upon by given machine learning hardware; paragraph 0176, IoT/edge device(s) performing tasks including machine learning).
With respect to claim 6, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 1 as previously discussed, and Moloney further teaches wherein the one or more memories include at least a memory controller and wherein the one or more memories are in communication, via the memory controller, with an external memory that is external to the edge device (e.g. paragraph 0074, hardware and software elements sharing access to DRAM controller which allows data to be stored in shared DDR memory device; paragraph 0079, allowing lower and less frequently accessed parts of voxel octree in external memory; paragraph 0091, Fig. 12, hardware circuitry/logic for culling trivial operations in accordance with embodiments, including external DDR controller 1250 and DDR storage 1270; paragraph 0153, embedded devices equipped with various sensors capable of taking measurements, transmitted to central server, which performs aggregation and numerical solve of measurements to produce map which can then be redistributed back to the embedded devices; paragraph 0178, functionality local to devices collectively, and collection of devices may provide or consume results provided by other remote machines; paragraph 0181, IoT devices communicating with other devices, including requesting or providing information from/to the other devices; paragraph 0209, memory controller logic communicating with memory elements storing various data used by processors).
With respect to claim 7, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 1 as previously discussed, and Moloney further teaches wherein the one or more microcontrollers include at least one of: (a) at least one microcontroller configured to at least (1) boot an operating system and (2) activate at least one other microcontroller; (b) at least one microcontroller configured to receive sensor data via the one or more sensor interfaces; or (c) at least one microcontroller configured to perform at least machine learning mathematical operations (e.g. paragraph 0133, resource constrained inference devices; paragraph 0174, gateways, data aggregators, and sensors, each capable of providing processing functions, including obtaining/collecting data from sensors; paragraph 0176, IoT devices presented as single device providing computing and storage resources to perform processing tasks including machine learning; paragraph 0178, IoT device, instructions executed to cause the electronic processing system to perform described methods; any machine capable of executing instructions; processor based systems; control by processor to execute instructions to execute described methodologies; paragraph 0201, processor is type of hardware device used in connection with described implementations).
With respect to claim 8, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 1 as previously discussed, and Moloney further teaches wherein the one or more executable instructions further cause the one or more microcontrollers to additionally perform the following operations: training a machine learning model with the one or more batches of augmented data (e.g. paragraph 0098, synthetic training sets utilized to train neural network or other deep reinforcement learning models; training data set at least partially composed of synthetic training data utilized to train machine learning models; paragraph 0107, training model).
With respect to claim 10, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 1 as previously discussed, and Moloney further teaches wherein receiving one or more batches of real-time sensor data via the one or more sensor interfaces, the one or more batches defining the sparse dataset comprises: receiving as the one or more batches of real time sensor data one or more batches of at least one of audio data, image data, numerical data or text data (e.g. paragraph 0123, stream of image data; paragraph 0221, plurality of training samples includes digital images and the sensor device includes a camera sensor).
With respect to claim 16, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 15 as previously discussed, and Moloney further teaches wherein the received sparse data received from one or more mobile devices includes at least one of images, audio files, or text files (e.g. paragraph 0075, real world object described in terms of voxels in a sparse manner; paragraph 0123, stream of image data; paragraph 0221, plurality of training samples includes digital images and the sensor device includes a camera sensor).
With respect to claim 11, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 1 as previously discussed, and Moloney further teaches wherein the one or more executable instructions further cause the one or more microcontrollers to additionally perform the following operations: at least one of automatically or dynamically extracting one or more feature embeddings for at least one batch of received real-time sensor data (e.g. paragraphs 0116-0117, machine learning model with 3D voxel inputs; extracting feature vectors of respective inputs; paragraph 0124, extracting feature vector from image pair; compare with specification of instant application at 0055, indicating that feature embedding/extraction results in representing the feature embedding as one or more vectors).
With respect to claim 12, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 11 as previously discussed. Bannerjee further teaches wherein the one or more executable instructions further cause the one or more microcontrollers to additionally perform the following operations: attenuating the one or more feature embeddings to generate attenuated data, and providing the attenuated data to a generator for generation of synthetic images (e.g. paragraph 0040, generating synthetic data from noise vector which provides a random number or seed; GAN composed of generator network including upsampling generator model G which synthesizes artificial or synthetic samples of a particular domain; paragraph 0042, generator network taking as input noise vector sampled from Gaussian distribution; output of generator is synthetic vector; paragraph 0045, generating sets of synthetic vectors based on noise values/seeds; adding generated vectors to original dataset to augment the sparse class and balance distribution of the training; following each iteration of GAN training, set of synthetic vectors is generated based on noise values; paragraph 0049, converting synthetic vectors into image data including synthetic feature image data, to be applied to CNN; i.e. the feature embedding/vectors are attenuated, such as by concatenating/mixing/combining it with random noise, and this is provided to the generator for generation of synthetic vectors, which are then converted to synthetic images, such as images having the noise added to them, compare with paragraph 0072 of the specification of the instant application, indicating that data attenuation may be accomplished by insertion of additive white Gaussian noise).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, Nakazawa, and Bannerjee in front of him to have modified the teachings of Moloney (directed to a deep learning system), Nakazawa (directed to data augmentation), and Ozcan (directed to deep learning microscopy using a training dataset of images), to incorporate the teachings of Bannerjee (directed to synthetic data for neural network training using vectors) to utilize the capability to attenuate the feature embeddings/vectors using noise, producing synthetic vectors, and provide this to a generator for generating synthetic images. One of ordinary skill would have been motivated to perform such a modification in order to provide or fill in otherwise sparse training data such that a classifier trained using the training data can be better balanced, and better trained to perform classification tasks as described in Bannerjee (paragraph 0039).
With respect to claim 14, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 12 as previously discussed. Bannerjee further teaches wherein the one or more executable instructions further cause the one or more microcontrollers to additionally perform the following operations: injecting the feature embeddings with additive white Gaussian noise to create attenuated data; and providing the attenuated data to a generator of a generative adversarial network; generating, with the generator, at least some of the synthetic data (e.g. paragraph 0040, generating synthetic data from noise vector which provides a random number or seed; GAN composed of generator network including upsampling generator model G which synthesizes artificial or synthetic samples of a particular domain; paragraph 0042, generator network taking as input noise vector sampled from Gaussian distribution; output of generator is synthetic vector; paragraph 0045, generating sets of synthetic vectors based on noise values/seeds; adding generated vectors to original dataset to augment the sparse class and balance distribution of the training; following each iteration of GAN training, set of synthetic vectors is generated based on noise values; paragraph 0049, converting synthetic vectors into image data including synthetic feature image data, to be applied to CNN; i.e. the feature embedding/vectors are attenuated, such as by concatenating/mixing/combining it with random noise, and this is provided to the generator for generation of synthetic vectors, which are then converted to synthetic images, such as images having the noise added to them, compare with paragraph 0072 of the specification of the instant application, indicating that data attenuation may be accomplished by insertion of additive white Gaussian noise).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, Nakazawa, and Bannerjee in front of him to have modified the teachings of Moloney (directed to a deep learning system), Nakazawa (directed to data augmentation), and Ozcan (directed to deep learning microscopy using a training dataset of images), to incorporate the teachings of Bannerjee (directed to synthetic data for neural network training using vectors) to utilize the capability to attenuate the feature embeddings/vectors using noise, producing synthetic vectors, and provide this to a generator for generating synthetic images. One of ordinary skill would have been motivated to perform such a modification in order to provide or fill in otherwise sparse training data such that a classifier trained using the training data can be better balanced, and better trained to perform classification tasks as described in Bannerjee (paragraph 0039).
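The attenuation-by-noise operation discussed with respect to claim 14 above (cf. insertion of additive white Gaussian noise per paragraph 0072 of the instant specification) reduces, in sketch form, to something like the following snippet; the function name, sigma value, and sample embedding are hypothetical:

```python
import random

def attenuate(embedding, sigma=0.05):
    # Attenuate a feature embedding by injecting additive white
    # Gaussian noise into each component; the result would then be
    # supplied to a generator as a conditioning input.
    return [v + random.gauss(0.0, sigma) for v in embedding]

embedding = [0.5, -0.2, 0.8]   # toy feature embedding
attenuated = attenuate(embedding)
# `attenuated` has the same dimensionality as `embedding`.
```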
With respect to claim 17, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 15 as previously discussed, and Moloney further teaches wherein the creating augmented data comprises: with a pattern extractor, extracting one or more feature embeddings from the sparse data (e.g. paragraphs 0116-0117, machine learning model with 3D voxel inputs; extracting feature vectors of respective inputs; paragraph 0124, extracting feature vector from image pair; compare with specification of instant application at 0055, indicating that feature embedding/extraction results in representing the feature embedding as one or more vectors).
Bannerjee teaches with a data attenuator, attenuating the one or more feature embeddings to create attenuated data; providing the attenuated data as a condition to a generator of a generative adversarial network; and with the generator, generating the synthetic data based at least in part on the attenuated data (e.g. paragraph 0040, generating synthetic data from noise vector which provides a random number or seed; GAN composed of generator network including upsampling generator model G which synthesizes artificial or synthetic samples of a particular domain; paragraph 0042, generator network taking as input noise vector sampled from Gaussian distribution; output of generator is synthetic vector; paragraph 0045, generating sets of synthetic vectors based on noise values/seeds; adding generated vectors to original dataset to augment the sparse class and balance distribution of the training; following each iteration of GAN training, set of synthetic vectors is generated based on noise values; paragraph 0049, converting synthetic vectors into image data including synthetic feature image data, to be applied to CNN; i.e. the feature embedding/vectors are attenuated, such as by concatenating/mixing/combining it with random noise, and this is provided to the generator for generation of synthetic vectors, which are then converted to synthetic images, such as images having the noise added to them, compare with paragraph 0072 of the specification of the instant application, indicating that data attenuation may be accomplished by insertion of additive white Gaussian noise).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, Nakazawa, and Bannerjee in front of him to have modified the teachings of Moloney (directed to a deep learning system), Nakazawa (directed to data augmentation), and Ozcan (directed to deep learning microscopy using a training dataset of images), to incorporate the teachings of Bannerjee (directed to synthetic data for neural network training using vectors) to utilize the capability to attenuate the feature embeddings/vectors using noise, producing synthetic vectors, and provide this to a generator for generating synthetic images. One of ordinary skill would have been motivated to perform such a modification in order to provide or fill in otherwise sparse training data such that a classifier trained using the training data can be better balanced, and better trained to perform classification tasks as described in Bannerjee (paragraph 0039).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa, further in view of Jain et al. (US 20180330275 A1).
With respect to claim 4, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 1 as previously discussed. Moloney does not explicitly disclose wherein the one or more memories contain limited storage of less than 32 MB. However, Jain teaches wherein the one or more memories contain limited storage of less than 32 MB (e.g. paragraph 0030, prediction and training system may be implemented on a same computing device, including an IoT device; many IoT devices are resource constrained such as to include limited amounts of RAM (e.g. less than one megabyte); paragraph 0005, providing sparse matrix, vectors, labels on RAM of device, the RAM including a maximum of one megabyte storage).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, Bannerjee, Nakazawa, and Jain in front of him to have modified the teachings of Moloney (directed to a deep learning system), Ozcan (directed to deep learning microscopy using a training dataset of images), Bannerjee (directed to synthetic data for neural network training), and Nakazawa (directed to data augmentation), to incorporate the teachings of Jain (directed to resource efficient machine learning) to utilize, as the device, a device having memory (such as RAM) with limited storage of less than 32 MB (as taught by Jain). One of ordinary skill would have been motivated to perform such a modification in order to perform resource-efficient learning, including prediction techniques, which improve upon prior techniques by reducing model size, amount of time to make a prediction, amount of power consumed making the prediction, and/or increasing accuracy of the prediction as described in Jain (paragraphs 0014-0015).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa, further in view of Bazarsky et al. (US 20200401344 A1).
With respect to claim 9, Moloney in view of Ozcan, further in view of Bannerjee, further in view of Nakazawa teaches all of the limitations of claim 1 as previously discussed. Moloney does not explicitly disclose wherein the one or more executable instructions further cause the one or more microcontrollers to additionally perform the following operations: storing a first batch of augmented data in an external memory associated with the one or more memories; and storing a second batch of augmented data in the external memory, the storing of the second batch overwriting the first batch. However, Bazarsky teaches wherein the one or more executable instructions further cause the one or more microcontrollers to additionally perform the following operations: storing a first batch of augmented data in an external memory associated with the one or more memories; and storing a second batch of augmented data in the external memory, the storing of the second batch overwriting the first batch (e.g. paragraphs 0040-0041, Fig. 1, host 102 which may be a variety of different types of computing devices, and SSD 104 is coupled to the host via a host interface (such as a USB interface, indicating that the SSD may be external to the host device) and further comprises a controller and memory; paragraph 0051, Fig. 2, showing and describing NVM array components (i.e. part of the SSD device as shown in Fig. 1); augmented data stored in memory within the die, such as within data latches for immediate use by training components, then erased or overwritten; paragraph 0058, performing read operations repeatedly until training requirements satisfied; paragraph 0084, performing machine learning using augmented data, held in working memory for use by training components and then erased or overwritten; i.e. during a first part of training which is performed on a repeated basis, a first set of augmented data is stored in the memory and utilized until training criteria are satisfied; this stored augmented data may subsequently be overwritten when it is no longer needed, such as when a new round/set of training activities is conducted, such as using another/next set of augmented data).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, Bannerjee, Nakazawa, and Bazarsky in front of him to have modified the teachings of Moloney (directed to a deep learning system), Ozcan (directed to deep learning microscopy using a training dataset of images), Bannerjee (directed to synthetic data for neural network training), and Nakazawa (directed to data augmentation), to incorporate the teachings of Bazarsky (directed to storage controller having data augmentation components for use in machine learning) to utilize the capability to store a first batch of the augmented data in the external memory (such as storing a set of augmented data used for a first training activity in an external SSD) and to subsequently store a second batch of augmented data in the external memory, by overwriting the first batch (as taught by Bazarsky). One of ordinary skill would have been motivated to perform such a modification in order to perform resource-efficient learning, including prediction techniques, which improve upon prior techniques by reducing model size, amount of time to make a prediction, amount of power consumed making the prediction, and/or increasing accuracy of the prediction as described in Bazarsky (paragraphs 0014-0015).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Moloney in view of Ozcan, further in view of Banerjee, further in view of Nakazawa, further in view of Todorov et al. (US 20210089759 A1).
With respect to claim 13, Moloney in view of Ozcan, further in view of Banerjee, further in view of Nakazawa teaches all of the limitations of claim 12 as previously discussed. Moloney does not explicitly disclose wherein the one or more executable instructions further cause the one or more microcontrollers to additionally perform the following operations: randomly selecting a set of selected feature embeddings to create attenuated data and discarding the non-selected feature embeddings; providing the attenuated data to a generator of a generative adversarial network; and generating, with the generator, at least some of the synthetic data.
However, Todorov teaches wherein the one or more executable instructions further cause the one or more microcontrollers to additionally perform the following operations: randomly selecting a set of selected feature embeddings to create attenuated data and discarding the non-selected feature embeddings; providing the attenuated data to a generator of a generative adversarial network; and generating, with the generator, at least some of the synthetic data (e.g. paragraph 0026, encoding an image as a multi-dimensional vector comprising features, including mapping the image to a multi-dimensional vector of learned image features; paragraph 0037, user sending a batch of images with instructions to modify vectors; paragraph 0038, mapping traits in human behavior with features of image encodings; paragraph 0039, encoding features of images/dimensionality; paragraph 0040, since the dimensionality is large, employing a regularization strategy such as random removal of features that may not be relevant; paragraph 0045, modified multi-dimensional vector provided to a decoder/generator network that generates a realistic synthetic image based on the mapped image and adjusted values of features, using a neural network trained to generate realistic synthetic faces based on a multi-dimensional vector of learned image features; paragraph 0047, decoder/generator from a generative adversarial network; i.e. randomly selecting a set of features to remove is equivalent to randomly selecting a set of features to retain (i.e. those not selected for removal) and discarding the non-selected features (i.e. removing the randomly selected features)).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Moloney, Ozcan, Banerjee, Nakazawa, and Todorov in front of him to have modified the teachings of Moloney (directed to a deep learning system), Ozcan (directed to deep learning microscopy using a training dataset of images), Banerjee (directed to synthetic data for neural network training), and Nakazawa (directed to data augmentation) to incorporate the teachings of Todorov (directed to generating synthetic images using GAN components) to utilize the capability to attenuate the feature embedding/vector of the image by randomly selecting a set of features to remove (and therefore also randomly selecting the set of features to retain, while discarding the features to be removed). One of ordinary skill would have been motivated to perform such a modification in order to provide a fast and accurate paradigm shift in photo manipulation as described in Todorov (abstract).
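For illustration only, and not drawn from Todorov or the claims' actual implementation, the random-feature-selection and generator pipeline discussed above can be sketched as follows. The helper names and the trivial `toy_generator` transform are hypothetical stand-ins:

```python
import random

def attenuate(embedding, keep_ratio=0.5, rng=random):
    """Randomly select a subset of feature embeddings to retain
    (equivalently, randomly select the remainder for removal),
    discarding the non-selected features."""
    n_keep = int(len(embedding) * keep_ratio)
    kept_idx = sorted(rng.sample(range(len(embedding)), n_keep))
    return [embedding[i] for i in kept_idx], kept_idx

def toy_generator(attenuated):
    """Stand-in for a trained GAN generator: maps the attenuated
    feature vector to a synthetic sample (here, a trivial transform)."""
    return [2.0 * f for f in attenuated]

embedding = [0.1 * i for i in range(8)]          # an 8-dimensional feature embedding
attenuated, kept = attenuate(embedding, keep_ratio=0.5)
synthetic = toy_generator(attenuated)            # generator produces synthetic data
assert len(attenuated) == 4 and len(synthetic) == 4
```

The sketch shows only the data flow: retain a random subset of features, discard the rest, and hand the attenuated vector to a generator; an actual GAN generator would be a trained neural network rather than a fixed transform.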
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. “The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain.” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L STANLEY whose telephone number is (469)295-9105. The examiner can normally be reached on Monday-Friday from 9:00 AM to 5:00 PM CST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at telephone number (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/JEREMY L STANLEY/
Primary Examiner, Art Unit 2127