DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This Office action follows the Office action "Reopen Prosecution after Quick Path Information Disclosure Statement (QPIDS) Request after Notice of Allowance" mailed on 01/20/2026. Accordingly, claims 1, 3-4, 6-8, 11-12, and 14-16, received on 04/29/2025, are pending and being examined. Claim 1 is in independent form.
Claim Rejections - 35 USC § 102
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
5. Claims 1, 3-4, 6-8, 11, and 14-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Liu et al. (CN110414428, which was cited by the applicant in the IDS submitted on 12/31/2025, hereinafter "Liu"). A machine-translated English version (i.e., document CN110414428-Eng) of document CN110414428 is provided by the examiner with this Office action.
Regarding claim 1, Liu discloses an image attribute classification method (the method for generating a face attribute information identification model including the segmentation and classification networks; see figs.1-5 and pg.2, lines 20-22 in the document CN110414428-Eng), including:
acquiring a first rectangular position area corresponding to a target attribute of an image before inputting the image to a convolutional neural network (see the rectangle mask image shown by fig.3, and the corresponding mask image generated based on the original input face image disclosed in the left column of fig.4; see S210 of fig.2 and pg.2, lines 24-28. It should be noted that the input face image disclosed in the left top of fig.4 will be input into the pre-trained CNN; see pg.2, lines 28-29);
inputting the image to the convolutional neural network to obtain a first feature map by a convolution layer of the convolutional neural network (wherein the 1st layer cov1 of the CNN outputs a 1st facial feature map of size 32x64x64 based on the input face image of size 3x128x128; see Table 2 of the document CN110414428, rows 2-3; see the document CN110414428-Eng, pg.11, lines 17-27) and N times down-sampling the first feature map by a cellularization layer of the convolutional neural network to obtain a second feature map (wherein the 4th layer cov_4 of the CNN outputs a facial feature map of size 128x16x16 based on the 1st facial feature map of size 32x64x64 after down-sampling by a factor of N=4 (from 64x64 down to 16x16); see Table 2 of the document CN110414428, rows 1-9; see the document CN110414428-Eng, pg.11, lines 17-27), wherein the second feature map includes a plurality of attributes and the target attribute of the image occupies a second rectangular position area in the second feature map (wherein the facial feature map extracted by the feature extraction layer cov_4 of the CNN may include the facial attributes, such as eyelid, glasses, and hair regions of a person’s face in the image; see the input face image shown by fig.4; see the document CN110414428-Eng, pg.8, lines 1-5 and lines 28-30; see pg.11, lines 17-21);
calculating a mask function of the target attribute of the second feature map based on the second rectangular position area (wherein the 5th layer cov_5 of the CNN outputs k attribute mask images (i.e., kx16x16), each of which represents the semantic segmentation map of one attribute (i.e., one class), such as the eyelid, glasses, and hair regions of the person’s face in the image; see Table 2 of the document CN110414428, the second last row; see the document CN110414428-Eng, pg.11, lines 17-27);
obtaining a feature corresponding to the target attribute by dot multiplying the second feature map with the mask function (wherein the 6th layer cov_6 of the CNN outputs k feature region maps (i.e., k x 128x16x16) by dot-product processing each of the k attribute mask images with the feature map of size 128x16x16 extracted from the input target face image; see Table 2 of the document CN110414428, the last row; see the document CN110414428-Eng, pg.11, lines 17-27); and
inputting the feature corresponding to the target attribute to a corresponding attribute classifier for attribute classification to avoid interference from other attributes (inputting each of the k feature (or attribute) region maps into the corresponding identification (or classification) network for processing to obtain the information of each facial attribute in the input face image; see “the human face attribute information identification method” shown by fig.5; see S530, pg.12, line 42—pg.13, line 9; wherein the attribute regions include eyelid area, glasses area, beard area, hair area, and the like, see pg.13, lines 11-15),
wherein left upper corner coordinates and lower right corner coordinates of the second rectangular position area are 1/N of the left upper corner coordinates and the lower right corner coordinates of the first rectangular position area, respectively (wherein the size of the feature image outputted from the 4th layer cov_4 of the CNN is 16x16, which is reduced by a factor of N=4 relative to the size (64x64) of the feature image outputted from the 1st layer cov1 of the CNN. Therefore, the left upper corner coordinates and lower right corner coordinates of the second rectangular position area are 1/4 of the left upper corner coordinates and the lower right corner coordinates of the first rectangular position area, respectively).
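The coordinate mapping relied upon in the rejection of claim 1 can be sketched numerically. The following is a hypothetical illustration only, not part of Liu's disclosure; all rectangle coordinates and the random feature map are invented values, and only the down-sampling factor N=4 and the 16x16/128-channel sizes come from the mapping above:

```python
import numpy as np

# Down-sampling factor between the cov1 output (64x64) and the cov_4 output (16x16)
N = 4
first_rect = (8, 12, 40, 52)                      # (x1, y1, x2, y2) on the 64x64 map (illustrative)
second_rect = tuple(c // N for c in first_rect)   # corners scaled by 1/N on the 16x16 map

# Binary mask over the 16x16 map: 1 inside the second rectangle, 0 elsewhere
mask = np.zeros((16, 16))
x1, y1, x2, y2 = second_rect
mask[y1:y2, x1:x2] = 1.0

# Element-wise (dot) multiplication keeps only the target-attribute features
feature_map = np.random.rand(128, 16, 16)         # e.g., a 128x16x16 feature map
attribute_feature = feature_map * mask            # mask broadcast over the 128 channels

print(second_rect)               # (2, 3, 10, 13)
print(attribute_feature.shape)   # (128, 16, 16)
```

Features outside the scaled rectangle are zeroed, which is one way to read the "avoid interference from other attributes" limitation.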
Regarding claim 3, Liu discloses the image attribute classification method of claim 1, wherein the acquiring a first rectangular position area of the target attribute of the image comprises: acquiring the position coordinates of key points of the target attribute of the image; and acquiring the first rectangular position area of the target attribute using the position coordinates of several key points of the most boundary of the at least one attribute (“feature point detection model”, see pg.9, lines 28-44: “firstly, based on feature point detection model to obtain the position and the characteristic point coordinate of the human face. Then, based on the face position and the characteristic point coordinate for face correction, by cutting to obtain the human face image with the same size. may be used, for example, the coordinates of the two eyes as the standard of straightening, straightening after cutting the original image to obtain face region 1, obtaining included angle of straight line formed by two eyes, rotating according to the reverse direction of the included angle for the image. after the rotating key of the first step according to a rotation formula, obtaining the coordinate of characteristic point after rotating; 2, after straightening, eyes, nose, mouth longitudinal distance, estimating the height to be cut face; 3, determining the height after the set aspect ratio, etc. determining the width of the face to be cut off, 4. The transverse distance than the nose and two eyes, calculating the specific coordinates of the left and right sides of the rotating angle to the positive face, finally the zoom scale is 128 * 128 to the face area. method for cutting the face area is merely exemplary, it also can use image processing software for cutting processing, the solution is not limited. ”).
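The claim 3 limitation of deriving a rectangle from the outermost key points can be sketched as taking the min/max of the landmark coordinates. This is a hypothetical illustration, not Liu's implementation; the landmark coordinates are invented:

```python
import numpy as np

# Illustrative (x, y) key points of one facial attribute
key_points = np.array([(30, 44), (52, 40), (41, 58), (36, 50)])

# Bounding rectangle from the outermost key points: min/max per axis
x1, y1 = key_points.min(axis=0)
x2, y2 = key_points.max(axis=0)
first_rect = (int(x1), int(y1), int(x2), int(y2))

print(first_rect)  # (30, 40, 52, 58)
```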
Regarding claim 4, Liu discloses the image attribute classification method of claim 3, wherein a value of the mask function is 1 in the second rectangular position area, and the value outside the second rectangular position area is 0 (see “the mask image is a binary image”, pg.12, lines 17-20).
Regarding claim 6, Liu discloses the image attribute classification method of claim 1, wherein the image is a face image, and wherein the target attribute is from eyes, eyebrows, nose, mouth, face type, hairstyle, the beard and a jewelry wearing situation (see pg.13, lines 11-15: “a face attribute may include the eyelid, glasses, beard, hat, hair. information of human face attribute may include information in one or more of a single eye, double and the eyelid, and/or lens size, color, existence of the frame, shape, size, thickness, and/or beard length, type, concentration, density, and/or cap of the style, color, and/or hair length, hair color,”).
Regarding claim 7, Liu discloses the image attribute classification method of claim 6, wherein the corresponding attribute classifier comprises an eye classifier, an eyebrow classifier, a nose classifier, a mouth classifier, a face type classifier, a hairstyle classifier, a beard classifier and jewelry wearing condition classifier (see the class mask image shown by fig.3 which includes an eye classifier, an eyebrow classifier, a nose classifier, a mouth classifier, a face type classifier, a hairstyle classifier, a beard classifier and jewelry wearing condition classifier; wherein the eyelid class is further classified into single eyelid and double eyelid, the hairstyle class is further classified into long hair and short hair, see Table 1, pg.8, lines 25-46).
Regarding claim 8, Liu discloses the image attribute classification method of claim 6, wherein N is 4 or 8 (wherein the image of size 64x64 is down-sized into the image of size 16x16, so N=4; see Table 2).
Regarding claim 11, Liu discloses the image attribute classification method of claim 1, wherein the corresponding attribute classifier is implemented by a convolution layer and a full connecting layer of a second convolutional neural network (see the CNN shown by fig.4 and Table 2, wherein the second layer (the cov_2) is fully connected to the output (the Block1) of the first layer (the cov1)).
Regarding claims 14-16, each of them is an inherent variation of claim 1; thus, each is interpreted and rejected for the reasons set forth above in the rejection of claim 1.
Claim Rejections - 35 USC § 103
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Du (US 11,487,995, hereinafter “Du”).
Regarding claim 12, Liu does not disclose wherein a double linear interpolation size of the image is [224, 224] as recited in the claim. However, this feature is well known and widely used in the field of face recognition in images by convolutional neural networks (CNNs). As evidence, in the same field of endeavor, Du teaches a convolutional neural network which may include convolutional layers for down-sampling the input image and deconvolutional (i.e., interpolation) layers for up-sampling the input image. See col.12, line 63—col.13, line 3: “The convolutional neural network here may include, for example, five convolutional layers and five deconvolutional layers. The convolutional layer may be used for downsampling inputted information with a preset window sliding step. The deconvolutional layer may be used for upsampling the inputted information with a preset amplification factor. The window sliding step may be 2, and the amplification factor may be 2.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Du into the teachings of Liu and add deconvolutional layers into the CNNs for upsampling the inputted information with a preset amplification factor to reach size [224, 224] as taught by Du. Suggestion or motivation for doing so would have been “to obtain probabilities of each pixel comprised in the face image belonging to a category indicated by each category identifier in a preset category identifier set” as taught by Du; see Abstract.
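For reference, double linear (bilinear) interpolation to a [224, 224] size can be sketched as follows. This is a minimal, hypothetical numpy illustration of the general technique only; it is not the implementation of Liu or Du, and the 128x128 input is an invented example:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D array with bilinear (double linear) interpolation."""
    in_h, in_w = img.shape
    # Target sample positions mapped back onto the source grid
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Interpolate horizontally on the two neighboring rows, then vertically
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
resized = bilinear_resize(img, 224, 224)
print(resized.shape)  # (224, 224)
```

Each output pixel is a weighted average of its four nearest source pixels, linear in each of the two axes, hence "double linear."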
Response to Arguments
8. Applicant's arguments submitted on 04/29/2025 have been considered but are moot in view of the new ground(s) of rejection.
Conclusion
9. Applicant’s submission of an information disclosure statement under 37 CFR 1.97(c) with the fee set forth in 37 CFR 1.17(p) on 12/31/2025 prompted the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 609.04(b). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUIPING LI whose telephone number is (571)270-3376. The examiner can normally be reached 8:30am-5:30pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HENOK SHIFERAW can be reached on (571)272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. See https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RUIPING LI/Primary Examiner, Ph.D., Art Unit 2676