DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s amendment filed December 15, 2025 has been entered and made of record. Claims 1, 9, 12, and 13 are amended. Claims 1-20 are pending.
Applicant’s remarks in view of the newly presented amendments have been considered but are not found to be persuasive for at least the following reasons:
Applicant has amended the independent claims 1, 9 and 13 to include the newly added limitation of:
refine the machine learning model, via transfer learning, using a labeled set of captured images of real faces, the labeled set of captured images alone being insufficient to train a model to achieve a threshold accuracy.
Applicant argues that Nikolenko does not disclose transfer learning as claimed. Examiner disagrees. Nikolenko discloses transfer learning in the form of domain adaptation or domain transfer learning in paragraph [0051].
[0051] In accordance with the various aspects of the invention, synthetic datasets are implemented for domain adaptation and domain transfer techniques for training of machine learning models. Domain adaptation is the problem of leveraging labeled data in a source domain to learn and train an accurate model in a target domain, wherein the labels are scarce or unavailable. In regard to using synthetic datasets, domain adaptation is applied to a machine learning model trained on one data distribution, which is a source domain (in this case, the domain of synthetic data), so that the model solves similar problems on a dataset of different nature, which is a target domain (in this case, the domain of real data). In accordance with one aspect of the invention, unsupervised domain adaptation is used when labeled data is available in the source domain and not in the target domain; the target domain has only unlabeled data available. In accordance with one aspect of the invention, the system applies unsupervised domain adaptation in situations where the source domain is the domain of synthetic data, which has abundant and diverse range of labeled data, and the target domain is the domain of real data, which includes a large dataset that may be unlabeled.
The domain transfer disclosed by Nikolenko is directed to using what is learned in one domain (synthetic image data) to enhance machine learning in another domain (real image data). Nikolenko also discloses that the purpose of creating a hybrid data set is to improve overall accuracy of recognition. See paragraph [0020].
[0020] In accordance with the various aspects of the invention, improved training of machine learning models is achieved using supplementing of real data with the synthetic dataset. This improves training of machine learning models for computer vision tasks, including, but not limited to, image classification, object detection, image segmentation, and scene understanding.
In addition, Nikolenko discloses that real images with labels are used. See paragraph [0037].
[0037]…At step 252 images are collected or captured. The images include at least one object that is the subject or target of the training for the model. In accordance with some aspects of the invention, real images, which include known labels, are collected or used. The real images include objects. The individual objects in the real image are identified. In accordance with various aspects of the invention, synthetic images with known labels can be collected and segmented.
Both the synthetic images and the real images are used as input to the hybrid neural network 408 in Fig. 4 to create a refined neural network 414.
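For illustration only (this is not Nikolenko's disclosed implementation, and the data, dimensions, and linear model below are hypothetical stand-ins), the precondition-then-refine scheme discussed above, in which features learned from abundant unlabeled synthetic data are reused when fitting a model on a scarce labeled set of real images, can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: "synthetic" crops are abundant, while "real"
# labeled images are scarce (the scenario described in paragraph [0051]).
synthetic = rng.normal(size=(1000, 16))      # unlabeled synthetic crops
real_x = rng.normal(size=(20, 16))           # small labeled real set
real_y = (real_x[:, 0] > 0).astype(float)    # toy labels for illustration

# Step 1 (precondition): learn a feature projection from synthetic data
# alone, here via an SVD of the centered synthetic crops (a PCA-like step).
u, s, vt = np.linalg.svd(synthetic - synthetic.mean(0), full_matrices=False)
features = vt[:4]                            # frozen "pretrained" features

# Step 2 (refine via transfer): fit only a small linear head on the scarce
# labeled real images, reusing the frozen features learned from synthetic data.
z = real_x @ features.T
w, *_ = np.linalg.lstsq(z, real_y, rcond=None)
preds = (z @ w > 0.5).astype(float)
accuracy = (preds == real_y).mean()
```

The point of the sketch is the division of labor: the bulk of the representation is learned from the synthetic domain, and the labeled real images are used only to adapt a small final component.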
Applicant argues that Examiner does not address the claim limitation “the labeled set of captured images alone being insufficient to train a model to achieve a threshold accuracy.” This limitation is a negative limitation (“insufficient to train a model to achieve a threshold accuracy”) without any defined bounds. How is the threshold for accuracy defined? It is implied that the labeled set of images is not sufficient for training the model, hence the use of the other synthetic images in the claim steps. It is also not defined how a threshold of accuracy is achieved. Every neural network trained to perform image recognition seeks to achieve a level of accuracy by adjusting the weights of the network over time, but there is no recitation of how many images, or of what quality the images must be, that would define the images as insufficient to train a model to a level of accuracy. The set of images is therefore defined by what it is not, i.e., sufficient to train a model to achieve a threshold accuracy; the image set is not defined by what it is. Furthermore, there is no discernible definition of what makes the images either sufficient or insufficient, because there is no discernible definition of what would be required to train the model to a degree that would make them sufficient.
The rejection in view of Nikolenko is maintained and accordingly made FINAL.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 8-18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2020/0320346 to Nikolenko.
With regard to claim 1, Nikolenko discloses a system comprising:
a processor (paragraph [0053]);
memory, including instructions, which when executed by the processor (paragraph [0053]), cause the processor to:
obtain an unlabeled set of digitally generated facial images (paragraph [0024], A training dataset is created entirely of synthetic images in one embodiment of the invention);
generate, from a plurality of facial images of the unlabeled set, a plurality of sets of cropped images, each cropped image in the plurality of sets of cropped images including a portion of a face of an image representing a respective set (paragraph [0019] and [0026]-[0029], Cropped image portions are generated from the images in the form of superpixels which are groupings of pixels representing facial image regions);
precondition a machine learning model using the plurality of sets of cropped images (paragraphs [0029]-[0031], The generated synthetic image regions are used as a training dataset); and
refine the machine learning model, via transfer learning using a labeled set of captured images of real faces, the labeled set of captured images alone being insufficient to train a model to achieve a threshold accuracy (paragraphs [0019]-[0020] and [0031]-[0035], The generation of synthetic facial image data is used to create and augment training datasets. “As new training datasets are used to further train the model, the model is further improved and/or enhanced” (paragraph [0035])). Nikolenko discloses transfer learning in the form of domain adaptation or domain transfer learning in paragraph [0051].
[0051] In accordance with the various aspects of the invention, synthetic datasets are implemented for domain adaptation and domain transfer techniques for training of machine learning models. Domain adaptation is the problem of leveraging labeled data in a source domain to learn and train an accurate model in a target domain, wherein the labels are scarce or unavailable. In regard to using synthetic datasets, domain adaptation is applied to a machine learning model trained on one data distribution, which is a source domain (in this case, the domain of synthetic data), so that the model solves similar problems on a dataset of different nature, which is a target domain (in this case, the domain of real data). In accordance with one aspect of the invention, unsupervised domain adaptation is used when labeled data is available in the source domain and not in the target domain; the target domain has only unlabeled data available. In accordance with one aspect of the invention, the system applies unsupervised domain adaptation in situations where the source domain is the domain of synthetic data, which has abundant and diverse range of labeled data, and the target domain is the domain of real data, which includes a large dataset that may be unlabeled.
The domain transfer disclosed by Nikolenko is directed to using what is learned in one domain (synthetic image data) to enhance machine learning in another domain (real image data). Nikolenko also discloses that the purpose of creating a hybrid data set is to improve overall accuracy of recognition. See paragraph [0020].
[0020] In accordance with the various aspects of the invention, improved training of machine learning models is achieved using supplementing of real data with the synthetic dataset. This improves training of machine learning models for computer vision tasks, including, but not limited to, image classification, object detection, image segmentation, and scene understanding.
In addition, Nikolenko discloses that real images with labels are used. See paragraph [0037].
[0037]…At step 252 images are collected or captured. The images include at least one object that is the subject or target of the training for the model. In accordance with some aspects of the invention, real images, which include known labels, are collected or used. The real images include objects. The individual objects in the real image are identified. In accordance with various aspects of the invention, synthetic images with known labels can be collected and segmented.
Both the synthetic images and the real images are used as input to the hybrid neural network 408 in Fig. 4 to create a refined neural network 414.
With regard to claim 2, Nikolenko discloses the system of claim 1, wherein the portion of the face is less than an entirety of the face (paragraph [0019] and [0026]-[0029], Cropped image portions are generated from the images in the form of superpixels which are groupings of pixels representing facial image regions).
With regard to claim 3, Nikolenko discloses the system of claim 1, wherein the digitally generated facial images are synthetic and are not depictive of real faces (paragraph [0024], A training dataset is created entirely of synthetic images in one embodiment of the invention).
With regard to claim 4, Nikolenko discloses the system of claim 1, wherein the cropped images are labeled as belonging to respective digitally generated identities (paragraph [0028], Specific and key features of the image segments are identified and labeled).
With regard to claim 5, Nikolenko discloses the system of claim 1, wherein to generate the plurality of sets of cropped images, the instructions are further to cause the processor to randomly crop each of the unlabeled set a specified number of times (paragraphs [0028]-[0029], The synthetic image portions or superpixels are such that each pixel is precisely labeled and associated with the subject or feature identified. There are a finite number of facial features such as eyes, nose, mouth, etc.).
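For illustration only, cropping each image of a set a specified number of times at random positions, as recited in claim 5, can be sketched as follows (this is a generic random-cropping sketch, not Nikolenko's superpixel-based segmentation; the image and crop sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_crops(image, crop_h, crop_w, n_crops):
    """Return n_crops crops of an H x W image at random positions."""
    h, w = image.shape
    crops = []
    for _ in range(n_crops):
        top = rng.integers(0, h - crop_h + 1)    # random top-left corner
        left = rng.integers(0, w - crop_w + 1)
        crops.append(image[top:top + crop_h, left:left + crop_w])
    return crops

face = rng.random((64, 64))   # stand-in for one generated facial image
crops = random_crops(face, 32, 32, n_crops=5)
```

Because the crop positions are chosen independently, crops of the same image will frequently overlap, which also bears on the overlapping-cropping limitation of claim 6.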
With regard to claim 6, Nikolenko discloses the system of claim 1, wherein the plurality of sets of cropped images include images with overlapping cropping (paragraphs [0028]-[0030], The superpixels are altered in the act of creating new synthetic images and image portions. Alterations are also accordingly made to the segments represented by the superpixels. Given the nature of facial image data and the undefined borders between segments of faces, there would be a very high likelihood of overlapping cropping or segmenting).
With regard to claim 8, Nikolenko discloses the system of claim 1, wherein the machine learning model is trained to output an identifier from an input cropped image (paragraph [0028], Specific and key features of the image segments are identified and labeled. The learning model uses the image portions or superpixels to train and output facial image recognition).
With regard to claim 9, the discussion of claim 1 applies.
With regard to claims 10-12, the discussions of claims 2-4 apply respectively.
With regard to claim 13, the discussion of claim 1 applies.
With regard to claims 14-18 and 20, the discussions of claims 2-6 and 8 apply respectively.
Allowable Subject Matter
Claims 7 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
FINAL REJECTION
Applicant’s amendment necessitated the grounds of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY J TUCKER whose telephone number is (571) 272-7427. The examiner can normally be reached 9 AM-5 PM, Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN VILLECCO can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESLEY J TUCKER/Primary Examiner, Art Unit 2661