DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 11/24/2025 has been entered. Applicant’s amendments to the drawings and claims have overcome each and every objection previously set forth in the Non-Final Office Action mailed 08/25/2025. Claims 1, 3-5, 7-11, 13-15, and 17-20 remain pending in the application, with claims 2, 6, 12, and 16 having been cancelled.
Response to Arguments
Regarding claim 1, Applicant’s arguments in the remarks filed 11/24/2025, on pg. 11 through paragraph 4 on pg. 12, are moot because the amendment (referred to as “the above-emphasized technical feature of the amended claim 1” in the remarks) warrants a new ground of rejection. In previously presented claim 1, the claim language did not require the current adjustment coefficient to adjust the original color value in a product operation, as recited in amended claim 1.
However, Applicant argues at the top of pg. 12 that Yuan does not disclose an adjustment coefficient involved in the adjustment operation. Examiner respectfully disagrees. Yuan discloses in para 183: “the color value of the special effect to be added is red, the transparency is 20 (decimal)”. The examiner further references para 184: “When (r, g, b, a) is (1.0, 0.0, 0.0, 0.2), the face image to be processed is processed accordingly based on the color value corresponding to the skin special effect. In essence, the color value corresponding to the skin special effect is superimposed on the original appearance to make the skin more rosy. In the facial image processed based on skin special effects, the skin is whiter and more rosy than the skin in the facial image to be processed, and the highlight part and skin texture of the skin are retained, making the processed facial image more natural.” Thus, the adjustment color value is multiplied by a transparency decimal value to create a natural, blended color on the facial image, and this value can be considered an adjustment coefficient. However, as stated above, amended claim 1 warrants a new ground of rejection beyond the mapped adjustment coefficient in Yuan’s disclosure. The same rationale applies to independent claims 11 and 20.
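Purely as an illustration of the superimposition Yuan describes (the original pixel color, function name, and variable names below are hypothetical and are not relied upon as evidence), the transparency decimal can be read as a weighting coefficient in a standard alpha blend:

```python
def alpha_blend(original, effect_color, alpha):
    # Weight the effect color by the transparency decimal and the
    # original color by its complement, channel by channel.
    return tuple(alpha * e + (1.0 - alpha) * o
                 for o, e in zip(original, effect_color))

# Yuan's example: red effect (1.0, 0.0, 0.0) at transparency 0.2;
# the original pixel color here is a hypothetical skin tone.
blended = alpha_blend((0.8, 0.6, 0.5), (1.0, 0.0, 0.0), 0.2)
```

In this reading, the transparency 0.2 controls how strongly the red special-effect color influences the result, consistent with the blending quoted above.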
Applicant’s arguments, last paragraph of pg. 12 through pg. 14, have been considered but are moot because the new ground of rejection introduces prior art reference Reas, and therefore does not rely on any combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan (CN Patent No. 111784568 A) in view of Su (CN Patent No. 112766234 B), in further view of Reas et al. (Casey Reas; Ben Fry, "Color," in Processing: A Programming Handbook for Visual Designers and Artists, MIT Press, 2014, pp. 38-49), hereinafter Reas.
Regarding claim 1, Yuan teaches a beauty makeup special effect generation method (Yuan, pg. 17, para 39: “When receiving a user's instruction to add special effects to a facial image to be processed, the method can first perform corresponding processing on the original color values of each pixel point of at least one facial part in the facial image to be processed based on the special effects to be added corresponding to the special effects identifier to obtain a processed color value”), comprising:
receiving a color adjustment operation for a first beauty makeup special effect (Yuan, pg. 30, para 54: “As an example, the adjustment strategy may be: the corresponding color value in the special effect parameter may be the target color value”); and
in response to the color adjustment operation, determining a target color corresponding to the color adjustment operation (Yuan, pg. 30, para 54: “target color value”) and performing color adjustment according to the target color (Yuan, pg. 28, para 52: “Adding the special effect to be added to the facial image to be processed is actually adjusting the original color value of each pixel of at least one facial part in the facial image to be processed”; see also para 54);
wherein the performing color adjustment on the first beauty makeup special effect according to the target color comprises:
for at least part of color components (Yuan, pg. 27, para 51: “for an RGB color space, the color value of a pixel can be understood as the value of the pixel in the three color channels R, G, and B”) of at least part of pixels (Yuan, pg. 26, para 50: “the original color value of each pixel of at least one facial part in the facial image to be processed is processed accordingly to obtain a processed color value”) in the first beauty makeup special effect, determining a current adjustment coefficient corresponding to a current color component according to the target color (Yuan, pg. 104, para 171: “The color value and transparency of the special effect to be added Mask Color are expressed as: (r, g, b, a), where r represents the color value of the r channel, g represents the color value of the g channel, b represents the color value of the b channel, and a represents transparency”; pg. 133, para 237: “Replace the color values of other channels except the brightness channel in the first color value with the color values of the corresponding channels in the second color value to obtain a replaced color value”).
However, Yuan fails to teach 1) performing color adjustment on the first beauty makeup special effect to generate a second beauty makeup special effect (Yuan teaches applying the special effect to a facial image) and 2) taking a product of a color component value of the current color component, the current adjustment coefficient and a current transparency coefficient of the first beauty makeup special effect as a target color component value of the current color component.
Su teaches a first beauty makeup special effect (Su, pg. 10, para 14: “the first target material includes one or more of eyelash material, eyeliner material, eyeshadow material, blush material, eyebrow material, and facial contouring material”) and a second beauty makeup special effect (Su, pg. 37, para 64: “The second target material may be a target material generated based on the first target material”), further disclosing to generate a second beauty makeup special effect from the first beauty makeup special effect (pg. 30, para 56: “in response to a makeup operation on the face image to be processed, generating a second target material that matches a target part in the face image to be processed based on the selected first target material”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the first/second beauty makeup special effects of Su with the color adjustment method of Yuan in order to generate a second beauty makeup special effect that applies to the same facial areas of the image (Su, pg. 35, para 62: “step S11 can be used to generate a second target material that matches the target part in the facial image to be processed based on the first target material selected in the makeup operation”). Further, the method taught by Yuan changes the colors of pixels of the facial image to implement a first special effect. A person of ordinary skill in the art could change the colors of the first material effect of Su to produce a second material effect, in the same manner as the method of Yuan. Thus, Yuan in view of Su teaches performing color adjustment on the first beauty makeup special effect to generate a second beauty makeup special effect.
Yuan teaches adjusting a color component value of the current color component by using the current adjustment coefficient (Yuan, pg. 42, para 74: “color values” of the special effect parameters; pg. 30, para 55: “As an example, for example, the adjustment strategy corresponding to the cheek part is to adjust the original color value based on the color value A in the special effect parameters”) and a current transparency coefficient of the first beauty makeup special effect (Yuan, pg. 45, para 76: “transparency of the special effect”; see also para 183-184 referenced in the response to arguments), but fails to teach the claimed product operation. However, Reas teaches taking a product of a color component value of the current color component (Reas, value A, for example, in the FIG. below), the current adjustment coefficient (Reas, value B, for example, in the FIG. below), and a current transparency coefficient (Reas, see FIG. 4-2 and caption on pg. 44: two color values are multiplied to generate new color values, and a transparency factor can be multiplied by one of the colors as well). Yuan in view of Su discloses a base method for adjusting pixel color values but does not specify a method for taking a product of the claimed value and coefficients. Reas teaches a known technique of taking a product of color component and transparency values to generate a different color value. A person having ordinary skill in the art, before the effective filing date of the claimed invention, could have applied the known technique taught by Reas in the same way to the first beauty makeup special effect and target color component in the method of Yuan in view of Su, with the predictable result of adjusting pixel values according to an adjustment color value and a transparency value.
[media_image1.png — Greyscale, 480 × 625]
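As a minimal sketch of the claimed product operation discussed above (hypothetical names and normalized values; not drawn from any cited reference):

```python
def target_component(component, coeff, transparency):
    # Product of the original color component value, the current
    # adjustment coefficient, and the current transparency coefficient.
    return component * coeff * transparency

# Hypothetical values for a single red-channel computation:
# component 0.9, adjustment coefficient 0.5, transparency 0.2.
value = target_component(0.9, 0.5, 0.2)
```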
Regarding claim 11, Yuan teaches an electronic device, comprising: one or more processors; and a memory configured to store one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the following steps (Yuan, pg. 136, para 244: “Based on the same principle as the face image processing method in the embodiment of the present disclosure, an electronic device is also provided in the embodiment of the present disclosure, which may include but is not limited to: a processor and a memory; the memory is used to store computer operation instructions; the processor is used to execute the method shown in the embodiment by calling the computer operation instructions”). All further claim limitations are met and rendered obvious by Yuan in view of Su and Reas because the method steps of claim 1 are the same as claim 11.
Regarding claim 20, Yuan teaches a non-transitory computer-readable storage medium in which a computer program is stored, wherein the computer program, when executed by a processor, implements the following steps (Yuan, pg. 136, para 244: “Based on the same principle as the face image processing method in the embodiment of the present disclosure, an electronic device is also provided in the embodiment of the present disclosure, which may include but is not limited to: a processor and a memory; the memory is used to store computer operation instructions; the processor is used to execute the method shown in the embodiment by calling the computer operation instructions”). All further claim limitations are met and rendered obvious by Yuan in view of Su and Reas because the method steps of claim 1 are the same as claim 20.
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan in view of Su, Reas, and Huang (CN Patent No. 108198127 A).
Regarding claim 3 (dependent on claim 1), Yuan in view of Su and Reas teaches wherein the determining a current adjustment coefficient corresponding to a current color component according to the target color comprises:
determining, according to the target color, a first sub-adjustment coefficient corresponding to the current color component (Yuan, pg. 30, para 54: “color value”), but fails to teach a second sub-adjustment coefficient corresponding to a current pixel; and, therefore, also fails to teach calculating, on the basis of the first sub-adjustment coefficient and the second sub-adjustment coefficient, the current adjustment coefficient corresponding to the current color component.
However, Huang teaches a similar beauty effect method comprising adjusting pixel colors in an image (Huang, pg. 31, para 69: “mobile terminal can replace the original pixel value of the pixel point with the mixed hair color value corresponding to the pixel point, thereby achieving hair color adjustment of the image character”), further disclosing a second sub-adjustment coefficient corresponding to a current pixel (Huang, pg. 28, para 62: “grayscale value of each pixel point”; see formula in para 63-65 on pg. 29); and calculating, on the basis of the first sub-adjustment coefficient and the second sub-adjustment coefficient, the current adjustment coefficient corresponding to the current color component (Huang, pg. 28, para 62: “mix the target color value with the grayscale value of each pixel point in the feathering area to obtain the mixed color value corresponding to each pixel point in the feathering area”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the sub-adjustment coefficient/operation of Huang with the method of Yuan in view of Su and Reas in order to color the pixel based on the grayscale value, or intensity, of the current pixel (Huang, pg. 17, para 40: “hair color attribute information corresponding to the skin color attribute information of the character in the original image is determined, and then the hair area of the character is processed according to the hair color attribute information, which can improve the matching degree between the hair area and the skin color area in the image, thereby improving the image processing effect”).
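A minimal sketch of the mixing step quoted from Huang, assuming the per-pixel grayscale value acts as a scaling factor on the target color (hypothetical names and values; one possible reading, not Huang's exact formula):

```python
def mix_color_with_gray(target_color, gray):
    # Scale each channel of the target color by the pixel's grayscale
    # intensity (gray normalized to [0.0, 1.0]).
    return tuple(c * gray for c in target_color)

# Hypothetical target color mixed with a mid-intensity pixel.
mixed = mix_color_with_gray((0.6, 0.3, 0.2), 0.5)
```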
Regarding claim 13 (dependent on claim 11), all claim limitations are met and rendered obvious by Yuan in view of Su, Reas, and Huang because the method steps of claim 3 are the same as claim 13.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan in view of Su, Reas, Huang, and Yang et al. (CN Patent No. 1915164 A), hereinafter ‘164.
Regarding claim 4 (dependent on claim 3), Yuan in view of Su, Reas, and Huang teaches wherein the determining, according to the target color, a first sub-adjustment coefficient corresponding to the current color component and a second sub-adjustment coefficient corresponding to a current pixel comprises:
determining the first sub-adjustment coefficient corresponding to the current color component according to a color component value of a corresponding color component of the target color (Yuan, pg. 30, para 54: “target color value”); and
determining the second sub-adjustment coefficient corresponding to the current pixel according to a first grayscale value of a target pixel corresponding to the current pixel in a target grayscale image (Huang, pg. 28, para 62: “grayscale calculation on each pixel point in the feathering area to obtain the grayscale value of each pixel point”), wherein pixels in the target grayscale image correspond to pixels in the first beauty makeup special effect (Huang, the feathering area corresponds to the hair area which is being adjusted, pg. 27, para 59).
Yuan in view of Su, Reas, and Huang fails to teach a second grayscale value corresponding to the target color. However, ‘164 teaches a method for extending the dynamic range of OCT imaging, thus improving the image quality and visualization of subtle details (‘164, pg. 24, para 34). ‘164 teaches a sub-adjustment coefficient corresponding to two different grayscale values (‘164, pg. 35, para 46: “For the grayscale values g1 and g2 of the same pixel in the two images, multiply them by the weight factors a1 and a2 respectively and add them together to obtain the grayscale value of the pixel in the synthesized image”). Doing so ensures that the resulting pixel grayscale value is informed by more than one image, at different intensities, and can improve the resolution perceived by the human eye (‘164, pg. 24-25, para 35: “The statistical law of the human eye's grayscale resolution ability shows that when the image grayscale value is very high or very low, the human eye has poor grayscale resolution ability; when the image grayscale is moderate, the human eye has strong resolution ability…Based on this visual characteristic of the human eye, the image can be processed accordingly: in low-grayscale and high-grayscale areas, the grayscale intervals can be stretched to make it easier for the human eye to distinguish; in medium-grayscale areas, the grayscale intervals can be appropriately compressed, and the remaining grayscale levels can be allocated to low-grayscale and high-grayscale areas.”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the sub-adjustment coefficient of ‘164 (a grayscale value based on two different grayscale values) with the method of Yuan in view of Su, Reas, and Huang (in which Huang teaches combining the first sub-adjustment coefficient with a grayscale value) in order to improve the resulting image quality by adjusting pixels with grayscale values at different intensities (‘164, pg. 24, para 34: “by synthesizing these two images with different exposure amounts, it is possible to obtain all the detailed information from shallow to deep layers within the dynamic range of the CCD”). Doing so would allow adjustment of a color component based on the grayscale intensity of both the current pixel and the target color.
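The weighted combination quoted from ‘164 can be sketched as follows (the function name and the grayscale/weight values are hypothetical; the weights a1 and a2 are as named in the quotation):

```python
def synthesized_gray(g1, g2, a1, a2):
    # Weighted sum of the same pixel's grayscale values from two
    # images, per the quoted passage of '164.
    return a1 * g1 + a2 * g2

# Hypothetical 8-bit grayscale values with weights summing to 1.0.
g = synthesized_gray(200, 80, 0.6, 0.4)
```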
Regarding claim 14 (dependent on claim 13), all claim limitations are met and rendered obvious by Yuan in view of Su, Reas, Huang, and ‘164 because the method steps of claim 4 are the same as claim 14.
Claims 5, 9-10, 15, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan in view of Su, Reas, and Yang et al. (U.S. Patent Application Publication No. 2019/0251754 A1), hereinafter Yang.
Regarding claim 5 (dependent on claim 1), Yuan in view of Su and Reas teaches wherein the first beauty makeup special effect is a human image beauty makeup special effect (Yuan, pg. 6, para 13: “add special effects to the face image to be processed”), but fails to teach wherein the method further comprises: adjusting a position where the first beauty makeup special effect is added in a corresponding human image part in response to a position adjustment operation for the first beauty makeup special effect.
However, Yang teaches a similar method (Yang, abstract: “The makeup application device generates a command based on the user input from the makeup professional for applying a virtual cosmetic effect”) comprising: adjusting a position where the first beauty makeup special effect is added in a corresponding human image part in response to a position adjustment operation for the first beauty makeup special effect (Yang, user command, para 36: “the makeup application device 102 transmits the command to the client device whenever user input is obtained from the makeup professional, where the command comprises locations of feature points where the virtual cosmetic effects are applied to the 3D facial model. The client device 122 (FIG. 1) maps the feature points where the virtual cosmetic effects are applied to the 3D facial model to feature points on the facial region of the user. The client device 122 then applies the same virtual cosmetic effects to the facial region of the user based on the mapped feature points”; see also para 37 where a zoom level is adjusted).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the makeup position adjustment operation, as taught by Yang, with the method of Yuan in view of Su and Reas in order to realistically visualize the makeup applied to a face based on user input (Yang, para 35: “the command causing a virtual cosmetic effect to be applied to the at least one digital image of the facial region of the user and display the at least one digital image. For some embodiments, displaying the at least one digital image comprises displaying a live video feed of the facial region of the user.”; para 37: “The command may also cause the client device to display application of the virtual cosmetic effect to the at least one digital image of the facial region of the user at the zoom level obtained from the makeup professional according to the enable input”).
Regarding claim 9 (dependent on claim 5), Yuan in view of Su, Reas, and Yang teaches wherein the adjusting a position where the first beauty makeup special effect is added in a corresponding human image part comprises:
adjusting key points corresponding to the first beauty makeup special effect in the corresponding human image part (Yang, para 36: “the makeup application device 102 transmits the command to the client device whenever user input is obtained from the makeup professional, where the command comprises locations of feature points where the virtual cosmetic effects are applied to the 3D facial model. The client device 122 (FIG. 1) maps the feature points where the virtual cosmetic effects are applied to the 3D facial model to feature points on the facial region of the user”); and
determining a position where the first beauty makeup special effect is to be moved according to the key points adjusted (Yang, para 36: “The client device 122 then applies the same virtual cosmetic effects to the facial region of the user based on the mapped feature points”).
Regarding claim 10 (dependent on claim 9), Yuan in view of Su, Reas, and Yang teaches wherein the position adjustment operation is a scaling coefficient adjustment operation (Yang, adjustment based on zoom level from the user, see citation below), and the adjusting key points corresponding to the first beauty makeup special effect in the corresponding human image part comprises:
scaling, according to a target scaling coefficient (Yang, amount of zoom, expressed by the zoom level) corresponding to the scaling coefficient adjustment operation, a three-dimensional model corresponding to the first beauty makeup special effect to adjust the key points corresponding to the first beauty makeup special effect in the three-dimensional model (Yang, para 34: “obtaining user input from the makeup professional for applying virtual cosmetic effects to the 3D facial model includes obtaining a zoom level from the makeup professional and displaying the 3D facial model according to the zoom level…The makeup application device 102 obtains a location on the 3D facial model and applies a cosmetic effect to the location on the 3D facial model.”), wherein the key points corresponding to the first beauty makeup special effect in the three-dimensional model correspond to the key points in the corresponding human image part (Yang, para 36: “makeup application device 102 transmits the command to the client device whenever user input is obtained from the makeup professional, where the command comprises locations of feature points where the virtual cosmetic effects are applied to the 3D facial model. The client device 122 (FIG. 1) maps the feature points where the virtual cosmetic effects are applied to the 3D facial model to feature points on the facial region of the user”).
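A minimal sketch of scaling key points of a three-dimensional model by a target scaling coefficient (hypothetical names, values, and center point; not drawn from Yang's disclosure):

```python
def scale_key_points(points, coeff, center=(0.0, 0.0, 0.0)):
    # Move each 3D key point toward (coeff < 1) or away from
    # (coeff > 1) the center by the target scaling coefficient.
    return [tuple(c + coeff * (p - c) for p, c in zip(pt, center))
            for pt in points]

# One hypothetical key point, scaled down by half about the origin.
scaled = scale_key_points([(2.0, 4.0, 6.0)], 0.5)
```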
Regarding claim 15 (dependent on claim 11), all claim limitations are met and rendered obvious by Yuan in view of Su, Reas, and Yang because the method steps of claim 5 are the same as claim 15.
Regarding claim 18 (dependent on claim 15), all claim limitations are met and rendered obvious by Yuan in view of Su, Reas, and Yang because the method steps of claim 9 are the same as claim 18.
Regarding claim 19 (dependent on claim 18), all claim limitations are met and rendered obvious by Yuan in view of Su, Reas, and Yang because the method steps of claim 10 are the same as claim 19.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan in view of Su, Reas, Huang, and Yang.
Regarding claim 7 (dependent on claim 3), Yuan in view of Su, Reas, and Huang teaches wherein the first beauty makeup special effect is a human image beauty makeup special effect (Yuan, pg. 6, para 13: “add special effects to the face image to be processed”), but fails to teach wherein the method further comprises: adjusting a position where the first beauty makeup special effect is added in a corresponding human image part in response to a position adjustment operation for the first beauty makeup special effect.
However, Yang teaches a similar method (Yang, abstract: “The makeup application device generates a command based on the user input from the makeup professional for applying a virtual cosmetic effect”) comprising: adjusting a position where the first beauty makeup special effect is added in a corresponding human image part in response to a position adjustment operation for the first beauty makeup special effect (Yang, user command, para 36: “the makeup application device 102 transmits the command to the client device whenever user input is obtained from the makeup professional, where the command comprises locations of feature points where the virtual cosmetic effects are applied to the 3D facial model. The client device 122 (FIG. 1) maps the feature points where the virtual cosmetic effects are applied to the 3D facial model to feature points on the facial region of the user. The client device 122 then applies the same virtual cosmetic effects to the facial region of the user based on the mapped feature points”; see also para 37 where a zoom level is adjusted).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the makeup position adjustment operation, as taught by Yang, with the method of Yuan in view of Su, Reas, and Huang in order to realistically visualize the makeup applied to a face based on user input (Yang, para 35: “the command causing a virtual cosmetic effect to be applied to the at least one digital image of the facial region of the user and display the at least one digital image. For some embodiments, displaying the at least one digital image comprises displaying a live video feed of the facial region of the user.”; para 37: “The command may also cause the client device to display application of the virtual cosmetic effect to the at least one digital image of the facial region of the user at the zoom level obtained from the makeup professional according to the enable input”).
Regarding claim 17 (dependent on claim 13), all claim limitations are met and rendered obvious by Yuan in view of Su, Reas, Huang, and Yang because the method steps of claim 7 are the same as claim 17.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Yuan in view of Su, Reas, Huang, ‘164, and Yang.
Regarding claim 8 (dependent on claim 4), Yuan in view of Su, Reas, Huang, and ‘164 teaches wherein the first beauty makeup special effect is a human image beauty makeup special effect (Yuan, pg. 6, para 13: “add special effects to the face image to be processed”), but fails to teach wherein the method further comprises: adjusting a position where the first beauty makeup special effect is added in a corresponding human image part in response to a position adjustment operation for the first beauty makeup special effect.
However, Yang teaches a similar method (Yang, abstract: “The makeup application device generates a command based on the user input from the makeup professional for applying a virtual cosmetic effect”) comprising: adjusting a position where the first beauty makeup special effect is added in a corresponding human image part in response to a position adjustment operation for the first beauty makeup special effect (Yang, user command, para 36: “the makeup application device 102 transmits the command to the client device whenever user input is obtained from the makeup professional, where the command comprises locations of feature points where the virtual cosmetic effects are applied to the 3D facial model. The client device 122 (FIG. 1) maps the feature points where the virtual cosmetic effects are applied to the 3D facial model to feature points on the facial region of the user. The client device 122 then applies the same virtual cosmetic effects to the facial region of the user based on the mapped feature points”; see also para 37 where a zoom level is adjusted).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the makeup position adjustment operation, as taught by Yang, with the method of Yuan in view of Su, Reas, Huang, and ‘164 in order to realistically visualize the makeup applied to a face based on user input (Yang, para 35: “the command causing a virtual cosmetic effect to be applied to the at least one digital image of the facial region of the user and display the at least one digital image. For some embodiments, displaying the at least one digital image comprises displaying a live video feed of the facial region of the user.”; para 37: “The command may also cause the client device to display application of the virtual cosmetic effect to the at least one digital image of the facial region of the user at the zoom level obtained from the makeup professional according to the enable input”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Sun et al. (CN 112686820 A) teaches a similar color adjustment coefficient for producing a beauty effect (pg. 4: “where the step of performing first correction processing on each pixel point in the face image according to the correction coefficient to obtain a corrected color value of the pixel point on the color channel includes: and for each pixel point in the face image, multiplying the color value of each color channel corresponding to each pixel point in the face image by the correction coefficient corresponding to the color channel respectively to obtain the correction color value of the pixel point on the color channel.”).
Simon et al. (cited in Non-Final: U.S. Patent No. 7,082,211 B2) teaches an image retouching method wherein facial key points can be moved to adjust the makeup effect (See col 6 text and Fig. 2A).
Nguyen et al. (cited in Non-Final: U.S. Patent Application Publication No. 2015/0145882 A1) teaches adjusting pixel color and facial key points to apply makeup effects (See abstract and Fig. 8).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMA E DRYDEN whose telephone number is (571)272-1179. The examiner can normally be reached M-F 9-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW BEE can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EMMA E DRYDEN/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677