DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-10 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Yu et al. (US 2021/0406996, hereinafter Yu) in view of Abe et al. (US 2008/0065572, hereinafter Abe).
Regarding claims 1 and 9-10, Yu teaches an array-type facial beauty prediction method, comprising:
extracting a plurality of facial beauty features of different scales from a face image by means of a plurality of feature extractors;
(Yu, Fig. 3; “The network 300 comprises a convolutional neural network (CNN) 302 for processing a source image at an input layer 304. In an embodiment, CNN 302 is configured using a residual network based backbone having a plurality of residual blocks 306, to extract the shared features”, [0036]; “a deep learning supervised regression based model including methods and systems and/or computing devices for facial attribute prediction”, [0005]; “An image of a face can be analyzed to predict multiple attributes (generally denoted as facial attributes), such as lip size and shape, eye color, etc”, [0003]; Table 1, [0024]; extracting features using a CNN with residual blocks (feature extractors); ResNet architectures inherently extract features at "different scales" (low to high level) across the blocks; predicting beauty-related attributes like lip size, eye shape, and face shape)
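By way of illustration only (a hypothetical sketch, not of record and not asserted to be Yu's implementation), multi-scale feature extraction from a residual backbone might look as follows; the PyTorch ResNet-18 backbone, input size, and tap points are the editor's assumptions:

import torch
from torchvision.models import resnet18

# Hypothetical sketch: tap a residual backbone after each stage so that
# features of different scales (fine/low-level to coarse/high-level) are
# available, in the spirit of Yu's residual blocks 306.
backbone = resnet18(weights=None)

def extract_multiscale(x):
    x = backbone.conv1(x)
    x = backbone.bn1(x)
    x = backbone.relu(x)
    x = backbone.maxpool(x)
    f1 = backbone.layer1(x)   # 56x56 maps: fine scale, low-level features
    f2 = backbone.layer2(f1)  # 28x28
    f3 = backbone.layer3(f2)  # 14x14
    f4 = backbone.layer4(f3)  # 7x7 maps: coarse scale, high-level features
    return [f1, f2, f3, f4]

features = extract_multiscale(torch.randn(1, 3, 224, 224))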
performing array-type fusion on the plurality of facial beauty features of different scales to obtain a plurality of fused features;
(Yu, Fig. 3; “A flattened feature vector 308 (for example, using average pooling) is obtained from the backbone net 302”, [0037]; “CNN 302 is configured using a residual network based backbone having a plurality of residual blocks 306, to extract the shared features. By way of example, the residual network based backbone is configured using ResNet”, [0036]; “a convolutional neural network (CNN) model comprising residual blocks performing deep learning to produce a feature vector of shared features for classification by respective classifiers to predict the facial attributes”, [0057]; combining the features from the CNN backbone into a "feature vector" (array) of "shared features" (fused features))
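By way of illustration only (a hypothetical sketch, not of record), pooling and flattening per-scale features into a single vector of shared features might look as follows; concatenation across scales is an assumption, as Yu's quoted text only describes flattening the backbone output:

import torch
import torch.nn.functional as F

# Hypothetical sketch: average-pool each scale's feature map and
# concatenate the results into one flattened vector of shared features.
def fuse(features):
    pooled = [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in features]
    return torch.cat(pooled, dim=1)

# Placeholder per-scale maps standing in for a backbone's outputs.
features = [torch.randn(1, 64, 56, 56), torch.randn(1, 128, 28, 28),
            torch.randn(1, 256, 14, 14), torch.randn(1, 512, 7, 7)]
shared = fuse(features)   # shape (1, 64 + 128 + 256 + 512) = (1, 960)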
performing binary classification processing on the plurality of fused features multiple times by means of a facial beauty classification network to obtain a plurality of classification results,
(Yu, Fig. 3; “The feature vector 308 is duplicated for processing (e.g. in parallel) by a plurality (K) of classifiers 310 for each of the K facial attributes”, [0038]; “Dark Circles No, Yes”, [Table 1]; “The plurality of respective classifiers, in an embodiment, perform in parallel to provide the facial attributes”, [0057]; using a network with K classifiers (multiple times) to produce K results; at least some attributes are binary (e.g., Dark Circles: No/Yes); Abe further supports the use of binary algorithms for multi-class problems: "solve multi-class cost-sensitive learning problems using a binary classification algorithm", [Abstract]; "calling the component classification algorithm on a modified binary classification problem", [0010])
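By way of illustration only (a hypothetical sketch, not of record), K parallel binary heads operating on a duplicated shared feature vector might look as follows; the attribute count K, feature width D, and linear heads are assumptions:

import torch
import torch.nn as nn

# Hypothetical sketch: one independent binary (No/Yes) classifier per
# facial attribute, each consuming a copy of the shared feature vector.
K, D = 5, 960   # assumed attribute count and feature width
heads = nn.ModuleList([nn.Linear(D, 2) for _ in range(K)])

shared = torch.randn(1, D)                  # placeholder shared features
logits = [head(shared) for head in heads]   # K binary classification results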
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Abe into the system or method of Yu, in order to simplify complex multi-attribute tasks (e.g., "black hair" vs. "not black hair") and to improve accuracy by focusing each model on a single attribute through parallel binary classification of face attributes.
The combination of Yu and Abe further teaches:
wherein the facial beauty classification network is obtained by means of supervised training using a cost-sensitive loss function, and the cost-sensitive loss function is a loss function that is set according to cost-sensitive training labels; and
(Abe, Fig. 2; minimizing cost of the classifier, eq. (1), [0020]; “A popular formulation of the cost-sensitive learning problem is via the use of a cost matrix. A cost matrix, C(y1,y2), specifies how much cost is incurred when misclassifying an example labeled y2 as y1, and the goal of a cost-sensitive learning method is to minimize the expected cost”, [0019]; Yu, “the supervised learning”, [0041])
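By way of illustration only (a hypothetical sketch, not of record and not Abe's algorithm), an expected-cost objective can be built from a cost matrix C(y1, y2) as Abe formulates it, i.e., the cost incurred when an example labeled y2 is classified as y1; the concrete cost values and batch sizes below are assumptions:

import torch

# Hypothetical cost matrix per Abe [0019]: C[y1, y2] is the cost of
# predicting y1 when the true label is y2 (zero cost on the diagonal).
# Here a false "No" on a true "Yes" costs twice a false "Yes".
C = torch.tensor([[0.0, 2.0],
                  [1.0, 0.0]])

def expected_cost_loss(logits, target, C):
    # Expected misclassification cost under the model's class probabilities;
    # minimizing it pushes probability mass away from high-cost errors.
    probs = torch.softmax(logits, dim=1)   # p(y1 | x), shape (batch, 2)
    costs = C[:, target].t()               # C(y1, y_true), shape (batch, 2)
    return (probs * costs).sum(dim=1).mean()

loss = expected_cost_loss(torch.randn(8, 2), torch.randint(0, 2, (8,)), C)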
making a decision on the basis of the plurality of classification results to obtain a facial beauty prediction result.
(Yu, Fig. 15; “Operations at 1502 determine a plurality of facial attributes from a source image of a face, processing the source image using a facial attribute classifying network model”, [0124]; “a recommendation component to recommend make-up products responsive to the facial attributes”, [0005]; determining the attributes/recommendations; Abe, “outputs a classifier hypothesis which is the average of all the hypotheses output in the respective iterations”, [0009]; making an ensemble decision based on multiple results)
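By way of illustration only (a hypothetical sketch, not of record), a simple decision step over the plurality of classification results might look as follows; the 0.5 threshold is an assumption, and Abe's cited variant instead averages the hypotheses from its iterations before deciding:

import torch

# Hypothetical sketch: convert each attribute's logits to a "Yes"
# probability and threshold to reach the final per-attribute decisions.
def decide(logits_list, threshold=0.5):
    p_yes = [torch.softmax(l, dim=1)[:, 1] for l in logits_list]
    return torch.stack(p_yes, dim=1) > threshold   # (batch, K) booleans

logits = [torch.randn(1, 2) for _ in range(5)]   # placeholder results
decisions = decide(logits)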
Allowable Subject Matter
Claims 2-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claims 2-3 and 5 recite limitations directed to: extracting multi-scale features using three specific models (a CNN, a width learning system, and a transformer); fusing the extracted features by arranging them in an array and combining every pair (see the illustrative sketch below); and training using multi-dimensional beauty labels, binary classification task decomposition, and cost-sensitive loss functions. Neither the prior art cited in this Office action nor the prior art located in the search explicitly teaches these limitations.
Claim 4 depends from claim 3, and claims 6-8 depend from claim 5.
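By way of illustration only (a hypothetical sketch, not of record, and not asserted to be the claimed fusion), one way to arrange per-extractor feature vectors in an array and combine every pair is pairwise concatenation; the function name pairwise_fuse and the vector sizes are assumptions:

import torch

# Hypothetical sketch: combine every pair of per-extractor feature vectors.
def pairwise_fuse(vectors):
    return [torch.cat([a, b], dim=1)
            for i, a in enumerate(vectors)
            for b in vectors[i + 1:]]

vectors = [torch.randn(1, 64), torch.randn(1, 128), torch.randn(1, 256)]
fused = pairwise_fuse(vectors)   # 3 inputs -> C(3,2) = 3 fused vectors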
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANXUN YANG, whose telephone number is (571) 272-9874. The examiner can normally be reached Monday-Friday, 8 AM-5 PM Pacific Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIANXUN YANG/
Primary Examiner, Art Unit 2662
1/20/2026