Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/13/2026 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/26/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
The amendments and remarks filed on 3/13/2026 were received and considered.
Claims 1 and 8-12 have been amended.
Claim 7 has been cancelled.
Claims 1, 3-6, and 8-12 are pending.
Upon entry of the claim amendments, the rejection of claims 1, 3-6, and 8-12 under 35 U.S.C. 101 has been withdrawn.
Response to Arguments
Applicant’s arguments, see Remarks filed 3/13/2026, with respect to the rejection of claims 1, 3-6, and 8-12 under 35 U.S.C. 103 have been fully considered. However, upon further consideration, a new ground of rejection is made in view of Chenlei et al. (“3D Nose shape net for human gender and ethnicity classification”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 4, and 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over Yin et al. (“NOSE SHAPE ESTIMATION AND TRACKING FOR MODEL-BASED CODING”), hereinafter Yin, in view of Kilic et al. (US 20240252104 A1), hereinafter Kilic, and further in view of Chenlei et al. (“3D Nose shape net for human gender and ethnicity classification”), hereinafter Chenlei.
Regarding claim 1, Yin teaches an estimating method (“facial feature regions and shapes on various organs are estimated,” Yin, p. 1480) comprising: acquiring an image including a nose of a user; (“Figure 5 shows some sample frames with the detected nostrils and nose sides.” Yin, p. 1480, and Figure 2)
extracting a nose region from the acquired image; (“the feature shape of a nose can be extracted in the limited feature regions using pre-defined deformable templates.” Yin, p. 1477, 1. Introduction)
calculating, for the extracted nose region, one or more nose features based on pixel values of the extracted nose region; (“From the detection of the nose regions in the previous section using color-based region growing, the initial width of the nostril can be determined by calculating the distance between the left-most pixel and the right-most pixel in the nostril region.” Yin, p. 1479)
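For illustration only (this sketch is not part of Yin’s disclosure; the function name and binary-mask representation are assumptions), the nostril-width measurement Yin describes, i.e., the distance between the left-most and right-most pixels of the detected nostril region, could be sketched as:

```python
import numpy as np

def nostril_width(mask: np.ndarray) -> int:
    """Return the pixel width of a nostril region given as a binary mask."""
    cols = np.where(mask.any(axis=0))[0]  # columns containing any region pixel
    if cols.size == 0:
        return 0
    # Width = distance from left-most to right-most region pixel, inclusive.
    return int(cols[-1] - cols[0] + 1)

mask = np.zeros((5, 10), dtype=bool)
mask[2, 3:8] = True  # hypothetical nostril pixels spanning columns 3..7
print(nostril_width(mask))  # → 5
```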
determining a nose characteristic of the user based on the one or more nose features; and (“The nostril on the right side and the left side of a nose can be represented by the up-right curve and the up-left curve, respectively. Parameter α is the width of a nostril. Parameter s controls the shape of the curve, which can take a real value in the range [1,10]. The smaller the s value, the thicker the leaf-shape appears.
[Image omitted: media_image1.png, greyscale]” Yin, p. 1479)
estimating a shape relating to a facial skeleton of the user based on the determined nose characteristic, (“The feature area is represented by a high value while the background area is represented by value zero, as shown in Figure 2 (col 2). The size of the detected feature organs is determined by the pixels of the leftmost, right-most, top and bottom of the feature region. By using local region growing, feature areas with clear boundaries can be obtained, as shown in Figure 2 (col 3).” Yin, p. 1478) and figure 2.
wherein the one or more nose features are selected from: (i) an average pixel value,
(ii) a count of pixels lower than or equal to a predetermined value or higher than or equal to the predetermined value, (“The image is partitioned based on the checker-board distance in region growing, the distance D between seed pixel (ys, us,, vs) and the growing pixel (yi, ui, vi) is defined in Formula 1. Larger variations are allowed in the luminance component of a region than in the chrominance component, in order to ignore small changes in luminance caused by shading. Therefore weights (1/4,1,1) are used for YUV. Regions smaller than a certain number of pixels are not taken into consideration, and are removed.
[Image omitted: media_image2.png (Formula 1), greyscale]” Yin, p. 1477-1478)
(iii) cumulative pixel values, (iv) a pixel-value change quantity, and (v) a pattern of average pixel values.
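As an illustrative sketch only (the function name, threshold value, and exact formulas are assumptions, not language from the claims or the references), the five candidate nose features, each computed from the pixel values of an extracted grayscale nose region, might look like:

```python
import numpy as np

def nose_features(region: np.ndarray, threshold: int = 128) -> dict:
    """Compute illustrative pixel-value features for a grayscale nose region."""
    return {
        "average_pixel_value": float(region.mean()),                 # (i)
        "count_below_threshold": int((region <= threshold).sum()),   # (ii)
        "cumulative_pixel_values": int(region.sum()),                # (iii)
        # (iv) mean absolute change between horizontally adjacent pixels
        "pixel_value_change": float(
            np.abs(np.diff(region.astype(int), axis=1)).mean()
        ),
        # (v) per-row averages as a simple "pattern" of average pixel values
        "row_average_pattern": region.mean(axis=1).tolist(),
    }

region = np.array([[100, 120], [140, 160]], dtype=np.uint8)
features = nose_features(region)
print(features["average_pixel_value"])  # → 130.0
```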
However, Yin does not teach a skeleton estimating model.
Kilic teaches a skeleton estimating model, (“classify the skeleton syndromes of the person in accordance with the deviation of the surface area of jaw area and maxillary area from the predetermined threshold values.” Kilic, para. [0024])
Yin and Kilic are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yin in light of Kilic’s skeleton estimating model. One would have been motivated to do so because it can enable diagnosing skeletal Class III malocclusions with increased accuracy. (Kilic, para. [0011])
However, the combination of Yin and Kilic does not teach estimating a shape, using (a) either a trained model generated by machine learning using training data in which input data includes the nose characteristic and output data includes the shape relating to the facial skeleton, or (b) a database associating the nose characteristic with the shape relating to the facial skeleton,
Chenlei teaches estimating a shape, using (b) a database associating the nose characteristic with the shape relating to the facial skeleton, (“Using the nose similarity measurement method, we can construct a nose similarity matrix from a facial database. Based on the nose similarity matrix, we propose the 3D nose shape net (3DNSN). The 3DNSN represents an organized framework of 3D noses. Different noses in 3DNSN are divided into different classes by the nose similarity measure matrix. The 3DNSN includes nose similarity information with different shapes. To build a gender and ethnicity classifier, we propose an estimation function based on the 3DNSN.” Chenlei, p. 54, 5. 3D nose shape net construction)
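As a hypothetical sketch (the table contents and names below are invented for illustration and do not come from Chenlei or the claims), a database under option (b) that associates a nose characteristic with a facial-skeleton shape can be as simple as a lookup table:

```python
# Hypothetical association table; the keys and values are illustrative only.
NOSE_TO_SKELETON = {
    "narrow_high_bridge": "long-face skeleton",
    "wide_low_bridge": "broad-face skeleton",
}

def estimate_skeleton_shape(nose_characteristic: str) -> str:
    """Look up the skeleton shape associated with a nose characteristic."""
    return NOSE_TO_SKELETON.get(nose_characteristic, "unknown")

print(estimate_skeleton_shape("wide_low_bridge"))  # → broad-face skeleton
```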
Yin, Kilic, and Chenlei are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yin and Kilic in light of Chenlei’s database associating the nose characteristic with the shape relating to the facial skeleton. One would have been motivated to do so because it can result in accurate geometric analysis models. (Chenlei, p. 56, 6.3 Comparison and summary)
Regarding claim 3, the combination of Yin and Kilic does not teach wherein the estimating includes sorting the shape relating to the facial skeleton of the user.
However, Chenlei teaches wherein the estimating includes sorting the shape relating to the facial skeleton of the user. (“In Tables 3 and 4, we show comparisons of classification results from different groups in FRGC2.0.” Chenlei, p. 56, 6.2. Data sensitive estimation and Table 3)
Yin, Kilic, and Chenlei are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yin and Kilic in light of Chenlei’s sorting of the shape. One would have been motivated to do so because it can result in accurate geometric analysis models. (Chenlei, p. 56, 6.3 Comparison and summary)
Regarding claim 4, the combination of Yin and Kilic does not teach wherein the estimating includes determining, based on the estimated shape relating to the facial skeleton, a face type corresponding to a face of the user among a plurality of predefined face types.
However, Chenlei teaches wherein the estimating includes determining, based on the estimated shape relating to the facial skeleton, a face type corresponding to a face of the user among a plurality of predefined face types. (“In summary, the gender and ethnicity classification process based on nose shape analysis is feasible. The method can achieve similar classification rates to other methods which are based on global facial data. Our method classification results will vary depending on the nose similarity measurements available in a dataset.” Chenlei, p. 56, 6.2. Data sensitive estimation and Table 4)
Yin, Kilic, and Chenlei are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yin and Kilic in light of Chenlei’s determining of a face type. One would have been motivated to do so because it can result in accurate geometric analysis models. (Chenlei, p. 56, 6.3 Comparison and summary)
Regarding claim 8, refer to the explanation of claim 1.
Regarding claim 9, Kilic teaches A non-transitory computer-readable recording medium storing instructions that, when executed by a processor coupled to a memory, cause the processor to (“Said processing unit (110) is associated with a memory unit (120) so that it can read and write data. The memory unit (120) further comprises both software consisting of command lines executed by the processing unit (110) and the machine learning models. When executed by the processing unit (110) said software ensures the process steps enabling the system of the invention to identify to be performed.” Kilic, para. [0028])
Regarding the rest of claim 9, refer to the explanation of claim 1.
Regarding claim 10, refer to the explanation of claim 1.
Regarding claim 11, Yin teaches a nose region extracted from an image including a nose of the user; (“The region growing is performed by selecting the central point of the head area as a seed pixel. The size of grown region must be examined to ensure that it is a reasonable face region; if it exceeds a predefined range, a new seed is selected from the neighboring pixels, or the threshold is adjusted, to generate a new region. This process is iterated until a skin region is found. After the skin region growing, a number of regions (blobs) are obtained. In order to extract the facial organ blobs (e.g., eyes, mouth, nose, etc.), the top blobs and the bottom blobs (e.g., hair, collar, cloth, etc.) are removed.” Yin, p. 1478, 2.1 Feature center location by global region growing)
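For illustration only (Yin’s exact distance formula appears in the omitted Formula 1; the threshold value and 4-connectivity here are assumptions), the seeded region growing Yin describes, using a luminance-down-weighted YUV distance per the quoted weights (1/4, 1, 1), might be sketched as:

```python
from collections import deque

import numpy as np

def grow_region(yuv: np.ndarray, seed: tuple, thresh: float = 20.0) -> np.ndarray:
    """Grow a region from a seed pixel in an H x W x 3 YUV image.

    A neighbor joins the region when its weighted distance from the seed
    stays under `thresh`. Luminance is down-weighted (1/4) relative to
    chrominance so that shading changes matter less than color changes.
    """
    h, w, _ = yuv.shape
    mask = np.zeros((h, w), dtype=bool)
    sy, su, sv = yuv[seed].astype(float)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighborhood
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                y, u, v = yuv[nr, nc].astype(float)
                d = max(abs(y - sy) / 4, abs(u - su), abs(v - sv))
                if d <= thresh:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

yuv = np.zeros((3, 3, 3))
yuv[..., 1] = 50  # uniform chrominance: the whole image grows from any seed
print(grow_region(yuv, (1, 1)).sum())  # → 9
```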
Regarding the rest of claim 11, refer to the explanation of claim 1.
Regarding claim 12, refer to the explanation of claims 1 and 11.
Claim(s) 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Yin, Kilic, and Chenlei as mentioned above and further in view of Imai (US 9330298 B2).
Regarding claim 5, the combination of Yin, Kilic, and Chenlei does not teach wherein the shape relating to the facial skeleton of the user includes a shape of the facial skeleton of the user, a shape of a face of the user influenced by the facial skeleton of the user, or both of the shape of the facial skeleton of the user and the shape of the face of the user influenced by the facial skeleton.
Imai teaches wherein the shape relating to the facial skeleton of the user includes a shape of the facial skeleton of the user, a shape of a face of the user influenced by the facial skeleton of the user, or both of the shape of the facial skeleton of the user and the shape of the face of the user influenced by the facial skeleton. (“the face impression determining unit 60 refers to the tendency information storage unit 74, and acquires, from the tendency information PI (see FIG. 5), the plus 1σ value for each of the dimensions of the bases associated with the patterns of part or all of the impression tendencies selected by the subject. The face impression determining unit 60 uses the plus 1σ value and the weighting factors calculated in step S44 to determine the degrees of the impression tendencies of the facial shape of the subject, as in the first method (FIG. 7: step S42). In the aesthetic information output step S50, the patterns of the impression tendencies of the facial shape of the subject determined as described above, and the degrees of the patterns are transmitted from the aesthetic information transmitting unit 82 to the subject's terminal 110.” Imai, col. 19, lines 47-61)
Yin, Kilic, Chenlei, and Imai are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yin, Kilic, and Chenlei in light of Imai’s shape of a face influenced by a facial skeleton. One would have been motivated to do so because it can help accurately measure the three-dimensional coordinates on the surface of the head. (Imai, col. 11, lines 1-2)
Regarding claim 6, the combination of Yin, Kilic, and Chenlei does not teach wherein the nose feature is at least one selected from the group consisting of a nasal root, a nasal bridge, a nasal apex, and nasal wings.
Imai teaches wherein the nose feature is at least one selected from the group consisting of a nasal root, a nasal bridge, a nasal apex, and nasal wings. (“The anatomical feature points include, for example, the orbitale, the center of a supraorbital margin, the point located in the inner margin of an orbit, the portion, the nasal root point, and the zygion” Imai, col. 11, lines 27-30)
Yin, Kilic, Chenlei, and Imai are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yin, Kilic, and Chenlei in light of Imai’s selecting a nose feature. One would have been motivated to do so because it can help accurately measure the three-dimensional coordinates on the surface of the head. (Imai, col. 11, lines 1-2)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARDIS SOHRABY whose telephone number is (571)270-0809. The examiner can normally be reached Monday through Friday, 9 am to 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PARDIS SOHRABY/ Examiner, Art Unit 2664
/CHARLOTTE M BAKER/ Primary Examiner, Art Unit 2664