DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to Applicant’s amendment filed 02/18/2026, which has been entered and made of record. Claims 1, 6, 11, 16, and 21 have been amended. No claims have been newly added. Claims 1-21 are pending in the application.
Response to Arguments
Applicant’s arguments, filed 02/18/2026, with respect to the rejections under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Yamanashi, Lotti, and Liu, as fully explained below.
Applicant argues that Yamanashi and Lotti, taken individually or in combination, do not teach the newly amended independent claims.
Examiner agrees Yamanashi and Lotti do not teach the newly amended independent claims. However, a new ground of rejection is made in view of Yamanashi, Lotti and Liu as fully explained below.
Conclusion: The claims are rejected below. The new citations and parenthetical remarks constitute new grounds of rejection, and such new grounds of rejection are necessitated by Applicant's amendments to the claims. Therefore, the present Office Action is made final.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-21 are rejected under 35 U.S.C. 103 as being unpatentable over Yamanashi et al. (US 20150254501 A1), hereinafter Yamanashi, in view of Lotti et al. (US 20250322601 A1), hereinafter Lotti, and further in view of Liu et al. (US 20220292773 A1), hereinafter Liu.
Regarding claim 1, Yamanashi teaches One or more non-transitory computer readable media storing instructions that, when executed via one or more processors of one or more computers, cause the one or more computers to (Yamanashi paragraph [0175] “A non-transitory computer-readable recording medium of the present disclosure stores a program for causing a device including a processor to perform pieces of processing of: generating a simulation image by superimposing a makeup image on an image obtained by photographing a face”): obtain data indicating …… representing a plurality of facial features corresponding to a face of a user (Yamanashi paragraph [0127] “feature point acquisition unit 230 analyzes the photographed image to extract the feature points of the face (or facial component) from the photographed image. Feature point acquisition unit 230 generates the facial feature point information from the acquired facial feature points.”); cause a virtual representation of …… to be presented to the user via one or more user interfaces at one or more computing devices accessible to the user (Yamanashi paragraph [0071] “makeup presentation unit 270 adjusts the positions of the makeup reference points of the makeup image to the positions of the set facial reference points to fix the region of the makeup image relative to the facial image. Makeup presentation unit 270 then outputs the generated simulation image to display unit 280.” And paragraph [0140] “As illustrated in FIG. 17, simulation image 540 is the image in which makeup images 541 to 545, such as the eyebrow-paint, eye shadow, eyeliner, cheek makeup, and lipstick images, are superimposed on facial image 511.”); ……
Yamanashi fails to teach a three-dimensional face mesh …… the three-dimensional face mesh …… receive, via the one or more user interfaces, user feedback regarding an accuracy of the three-dimensional face mesh, the user feedback indicating a desired modification of the three-dimensional face mesh; and cause the three-dimensional face mesh of the user to be adapted based upon the received user feedback. Lotti teaches a three-dimensional face mesh …… the three-dimensional face mesh …… (Lotti teaches generating a 3D face mesh model with landmark points; Yamanashi teaches the facial reference points; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the 3D face mesh of Lotti with the method of Yamanashi. Lotti paragraph [0148] “At operation 402, processing logic generates, using the first 2D image data, a first three-dimensional (3D) model of the human face of the subject. In some embodiments, the first 3D model comprises a geometric model. In some embodiments, generating the first 3D model of the human face of the subject comprises identifying a landmark corresponding to a facial feature of the human face of the subject.” And paragraph [0373] “3D model 1200 may be a mesh model, a point cloud model, or similar model comprising multiple objects such as vertices, lines, and faces to represent the subject's face. Landmarks 1202A-N may correspond to one or more vertices, one or more lines, one or more faces, or sets thereof.”).
Yamanashi and Lotti are in the same field of endeavor, namely computer graphics. Lotti teaches generating a 3D face mesh with landmark points based on 2D image to improve accuracy for beauty product application (Lotti paragraph [0045] “The 3D model can have high dimensional accuracy”). Yamanashi teaches generating facial reference points from 2D image. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lotti with the method of Yamanashi to improve accuracy for beauty product application.
Yamanashi in view of Lotti fails to teach receive, via the one or more user interfaces, user feedback regarding an accuracy of the three-dimensional face mesh, the user feedback indicating a desired modification of the three-dimensional face mesh; and cause the three-dimensional face mesh of the user to be adapted based upon the received user feedback. Liu teaches receive, via the one or more user interfaces, user feedback regarding an accuracy of the three-dimensional face mesh, the user feedback indicating a desired modification of the three-dimensional face mesh (Liu teaches the user-provided keypoint annotations as the user feedback; the user-provided keypoints are used to improve the accuracy of the 3D model; Liu further teaches reducing the difference between the sets of keypoints as the desired modification, below. Liu paragraph [0127] “The process additionally includes a step 1030 of identifying the second set of keypoints in the 2D facial image based on the user-provided keypoint annotations”, paragraph [0143] “The method and system disclosed herein can generate accurate 3D facial model (i.e., position map) based on 2D keypoints annotation for 3D ground-truth generation.” And paragraph [0275] “The user interface 3403 may include a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touchpad, a touchscreen, or the like.”); and cause the three-dimensional face mesh of the user to be adapted based upon the received user feedback (Liu paragraphs [0213]-[0215] “The process additionally includes a step 2830 of mapping the first set of keypoints to the second set of keypoints based on the set of user-provided keypoint annotations located on a plurality of vertices of a mesh of a 3D head template model. The process additionally includes a step 2840 of performing deformation to the mesh of the 3D head template model to obtain a deformed 3D head mesh model by reducing the differences between the first set of keypoints and the second set of keypoints. The process additionally includes a step 2850 of applying a blendshape method to the deformed 3D head mesh model to obtain a personalized head model according to the 2D facial image.”).
Yamanashi, Lotti and Liu are in the same field of endeavor, namely computer graphics. Liu teaches using user provided facial keypoints annotation to ensure the accuracy of 3D face model (Liu paragraph [0012] “To ensure the accuracy of face reconstruction and the desirable keypoint detection, in some embodiments, 2D facial keypoints annotation is used to generate the ground-truth of a 3D face model without using an expensive face capturing system. The approach disclosed herein generates the 3D ground-truth face model which preserves the detailed facial features of an input image, overcomes the shortcomings of the existing facial models”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Yamanashi and Lotti to improve accuracy for beauty product application.
Regarding claim 2, Yamanashi in view of Lotti and Liu teach The one or more non-transitory computer readable media of claim 1, and further teach wherein the instructions, when executed via the one or more processors, further cause the one or more computers to cause the virtual representation of the three-dimensional face mesh data to be updated at the one or more user interfaces upon the adapting of the three-dimensional face mesh based upon the user feedback (Liu teaches updating the 3D head mesh based on user feedback; Yamanashi teaches updating the corrected simulation image in Figure 18; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Yamanashi. Liu paragraph [0021] “receiving a two-dimensional (2D) facial image; identifying a first set of keypoints in the 2D facial image based on artificial intelligence (AI) models; mapping the first set of keypoints to a second set of keypoints based on a set of user-provided keypoint annotations located on a plurality of vertices of a mesh of a 3D head template model; performing deformation to the mesh of the 3D head template model to obtain a deformed 3D head mesh model by reducing the differences between the first set of keypoints and the second set of keypoints”, Yamanashi paragraph [0036] “FIG. 18 is a view illustrating an example of a corrected simulation image generated according to the second exemplary embodiment”).
Yamanashi, Lotti and Liu are in the same field of endeavor, namely computer graphics. Liu teaches using user provided facial keypoints annotation to ensure the accuracy of 3D face model (Liu paragraph [0012] “To ensure the accuracy of face reconstruction and the desirable keypoint detection, in some embodiments, 2D facial keypoints annotation is used to generate the ground-truth of a 3D face model without using an expensive face capturing system. The approach disclosed herein generates the 3D ground-truth face model which preserves the detailed facial features of an input image, overcomes the shortcomings of the existing facial models”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Yamanashi and Lotti to improve accuracy for beauty product application.
Regarding claim 3, Yamanashi in view of Lotti and Liu teach The one or more non-transitory computer readable media of claim 1, and further teach wherein the virtual representation of the three-dimensional face mesh comprises one or more augmented reality elements superimposed over real-time image data depicting the face of the user (Yamanashi teaches superimposing makeup images of cosmetic products, as the augmented reality elements, onto the video images of the user. Yamanashi paragraph [0062] “An example of photographing unit 210 includes a digital video camera, and photographing unit 210 photographs a video image of the face becoming a makeup simulation target. Photographing unit 210 then outputs the photographed video image to image acquisition unit 220. The video image includes a plurality of time-series images (frame images). In the second exemplary embodiment, it is assumed that the face becoming the makeup simulation target is a face of a user using makeup supporting device 100.” And paragraph [0140] “As illustrated in FIG. 17, simulation image 540 is the image in which makeup images 541 to 545, such as the eyebrow-paint, eye shadow, eyeliner, cheek makeup, and lipstick images, are superimposed on facial image 511.”).
Regarding claim 4, Yamanashi in view of Lotti and Liu teach The one or more non-transitory computer readable media of claim 1, and further teach wherein the user feedback indicates an inaccurate positioning of one or more facial points included in the three-dimensional face mesh (Liu teaches the user-provided second set of keypoints as the user feedback; they are used to correct the first set of keypoints. Liu paragraph [0021] “identifying a first set of keypoints in the 2D facial image based on artificial intelligence (AI) models; mapping the first set of keypoints to a second set of keypoints based on a set of user-provided keypoint annotations located on a plurality of vertices of a mesh of a 3D head template model; performing deformation to the mesh of the 3D head template model to obtain a deformed 3D head mesh model by reducing the differences between the first set of keypoints and the second set of keypoints”).
Yamanashi, Lotti and Liu are in the same field of endeavor, namely computer graphics. Liu teaches using user provided facial keypoints annotation to ensure the accuracy of 3D face model (Liu paragraph [0012] “To ensure the accuracy of face reconstruction and the desirable keypoint detection, in some embodiments, 2D facial keypoints annotation is used to generate the ground-truth of a 3D face model without using an expensive face capturing system. The approach disclosed herein generates the 3D ground-truth face model which preserves the detailed facial features of an input image, overcomes the shortcomings of the existing facial models”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Yamanashi and Lotti to improve accuracy for beauty product application.
Regarding claim 5, Yamanashi in view of Lotti and Liu teach The one or more non-transitory computer readable media of claim 1, and further teach wherein the user feedback indicates an inaccurate characterization of one or more of the plurality of facial features (Liu teaches the user-provided facial keypoints as the user feedback; those keypoints are used to represent facial features, which implies Liu’s user input is used to correct inaccurate facial features. Liu paragraph [0013] “Apart from the facial keypoint detection, in some embodiments, multi-task learning and transfer learning solutions are implemented for facial feature classification tasks, so that more information can be extracted from an input face image, which is complementary to the keypoints information. The detected facial keypoints with the predicted facial features together are valuable to computers or mobile games for creating the face avatar of the players.”).
Yamanashi, Lotti and Liu are in the same field of endeavor, namely computer graphics. Liu teaches using user provided facial keypoints annotation to ensure the accuracy of 3D face model (Liu paragraph [0012] “To ensure the accuracy of face reconstruction and the desirable keypoint detection, in some embodiments, 2D facial keypoints annotation is used to generate the ground-truth of a 3D face model without using an expensive face capturing system. The approach disclosed herein generates the 3D ground-truth face model which preserves the detailed facial features of an input image, overcomes the shortcomings of the existing facial models”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Yamanashi and Lotti to improve accuracy for beauty product application.
Regarding claim 6, Yamanashi in view of Lotti and Liu teach The one or more non-transitory computer readable media of claim 1, and further teach wherein, subsequent to the adaption of the three-dimensional face mesh based on the user feedback (Liu teaches the adaption of the 3D face model based on user input; Lotti teaches using 2D image data to generate the 3D face model; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the user input teaching of Liu with the 3D face model of Lotti. Lotti paragraph [0148] “At operation 402, processing logic generates, using the first 2D image data, a first three-dimensional (3D) model of the human face of the subject.”); the one or more user interfaces present a virtual application of one or more cosmetic products to the face of the user as represented by the adapted three-dimensional face mesh (Lotti paragraph [0153] “At operation 407, processing logic provides, for presentation at the client device, an indication of the level of correspondence along with the modified 3D model as an overlay on a subsequent 2D representation of the human face having the beauty product applied at the area.”), and the one or more user interfaces receive additional user feedback indicative of user satisfaction with the virtual application of the one or more cosmetic products (Lotti teaches additional user feedback for satisfaction. Lotti paragraph [0353] “a portion of the processes of the evaluation module 1050 can be performed by a human reviewer. In some embodiments, the evaluation metric 1051 can include or reflect a human-derived metric. For example, one or more human evaluators can determine whether a particular training output matches a respective ground truth. For example, a human reviewer can indicate whether one or more of the training outputs 1040 satisfies a beauty threshold corresponding to a particular beauty target.”).
Yamanashi, Lotti and Liu are in the same field of endeavor, namely computer graphics. Lotti teaches generating a 3D face mesh with landmark points based on 2D image to improve accuracy for beauty product application (Lotti paragraph [0045] “The 3D model can have high dimensional accuracy”). Yamanashi teaches generating facial reference points from 2D image. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lotti with the method of Yamanashi and Liu to improve accuracy for beauty product application.
Regarding claim 7, Yamanashi in view of Lotti and Liu teach The one or more non-transitory computer readable media of claim 1, and further teach wherein the instructions to obtain the data indicating the three-dimensional face mesh comprise instructions to: obtain image data representing the face of the user (Lotti paragraph [0147] “At operation 401, processing logic implementing the method 400 receives first two-dimensional (2D) image data corresponding to a first 2D image of a human face of a subject.”); and generate the three-dimensional face mesh based upon the obtained image data (Lotti paragraph [0148] “At operation 402, processing logic generates, using the first 2D image data, a first three-dimensional (3D) model of the human face of the subject. In some embodiments, the first 3D model comprises a geometric model. In some embodiments, generating the first 3D model of the human face of the subject comprises identifying a landmark corresponding to a facial feature of the human face of the subject.” And paragraph [0373] “3D model 1200 may be a mesh model, a point cloud model, or similar model comprising multiple objects such as vertices, lines, and faces to represent the subject's face. Landmarks 1202A-N may correspond to one or more vertices, one or more lines, one or more faces, or sets thereof.”).
Yamanashi, Lotti and Liu are in the same field of endeavor, namely computer graphics. Lotti teaches generating a 3D face mesh with landmark points based on 2D image to improve accuracy for beauty product application (Lotti paragraph [0045] “The 3D model can have high dimensional accuracy”). Yamanashi teaches generating facial reference points from 2D image. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lotti with the method of Yamanashi and Liu to improve accuracy for beauty product application.
Regarding claim 8, Yamanashi in view of Lotti and Liu teach The one or more non-transitory computer readable media of claim 1, and further teach wherein the instructions, when executed via the one or more processors, further cause the one or more computers to cause an indication of user feedback directions to be presented via the one or more user interfaces at the one or more computing devices accessible to the user (Yamanashi teaches a makeup correction user interface; Liu teaches the user feedback of keypoint annotation through a user interface; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Yamanashi. Yamanashi paragraph [0105] “Makeup correction unit 290 receives the selection of the makeup, the selection of the correction type, and the fixing of the correction level from the user through the touch panel display.”), the user feedback directions guiding the user through providing the user feedback to adapt the three-dimensional face mesh (Yamanashi teaches the user feedback directions; Liu teaches the user feedback of keypoint annotation through a user interface; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the keypoint annotation of Liu with the method of Yamanashi. Yamanashi paragraph [0105] “Specifically, makeup correction unit 290 first displays an option of the makeup and an option of the correction type. When the makeup and the correction type are selected, makeup correction unit 290 receives a drag manipulation with respect to the facial reference points. Herein, desirably makeup correction unit 290 displays the position of the facial reference points on the simulation image using a marker. When a predetermined manipulation indicating completion of the correction is performed, makeup correction unit 290 then generates correction information indicating a difference from an initial state of the facial reference points.”).
Yamanashi, Lotti and Liu are in the same field of endeavor, namely computer graphics. Liu teaches using user provided facial keypoints annotation to ensure the accuracy of 3D face model (Liu paragraph [0012] “To ensure the accuracy of face reconstruction and the desirable keypoint detection, in some embodiments, 2D facial keypoints annotation is used to generate the ground-truth of a 3D face model without using an expensive face capturing system. The approach disclosed herein generates the 3D ground-truth face model which preserves the detailed facial features of an input image, overcomes the shortcomings of the existing facial models”). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu with the method of Yamanashi and Lotti to improve accuracy for beauty product application.
Regarding claim 9, Yamanashi in view of Lotti and Liu teach The one or more non-transitory computer readable media of claim 1, and further teach wherein the instructions, when executed via the one or more processors, further cause the one or more computers to adapt or validate one or more machine learning algorithms for generating the three-dimensional face mesh based upon the received user feedback (Lotti teaches a generative machine learning model 170 to generate 3D geometric data 1040E as training output in Figure 10; the model parameters are altered by the evaluation module based on evaluation metrics; Lotti further teaches a human reviewer performing a portion of evaluation module 1050. Lotti paragraph [0341] “the parameter modification data 1053 can be generated by evaluation module 1050 based on the evaluation metric 1051 and can be used as an input to generative machine learning model 170 and/or to alter one or more of the model parameters 1061. It can be noted that system 1000 can also be used in inference to, for example, generate new facial feature information.” And paragraph [0353] “a portion of the processes of the evaluation module 1050 can be performed by a human reviewer. In some embodiments, the evaluation metric 1051 can include or reflect a human-derived metric.”).
Yamanashi, Lotti and Liu are in the same field of endeavor, namely computer graphics. Lotti teaches generating a 3D face mesh with landmark points based on 2D image to improve accuracy for beauty product application (Lotti paragraph [0045] “The 3D model can have high dimensional accuracy”). Yamanashi teaches generating facial reference points from 2D image. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lotti with the method of Yamanashi and Liu to improve accuracy for beauty product application.
Regarding claim 10, Yamanashi in view of Lotti and Liu teach The one or more non-transitory computer readable media of claim 1, and further teach wherein at least one of the one or more computers is at least one of the one or more computing devices accessible to the user (Yamanashi paragraph [0163] “For example, makeup supporting device 100 may be such a distributed arrangement system that only photographing unit 210 and display unit 280 in the functional units illustrated in FIG. 2 may be provided in a terminal carried by the user, and other functional units are provided in a server on the network.”).
Regarding claim 11, it recites limitations similar to those of claim 1, but in a computer-implemented method form. The rationale of the claim 1 rejection is applied to reject claim 11. In addition, Yamanashi teaches A computer-implemented method performed via one or more processors of one or more computers, the method comprising (Yamanashi paragraphs [0175]-[0176] “A non-transitory computer-readable recording medium of the present disclosure stores a program for causing a device including a processor to perform pieces of processing of …… The present disclosure is usefully applied to a makeup supporting device, a makeup supporting system, a makeup supporting method, and a makeup supporting program for being able to simply apply corrected makeup created by a user to another user.”):
Regarding claim 12, claim 12 has similar limitations as claim 2, therefore it is rejected under the same rationale as claim 2.
Regarding claim 13, claim 13 has similar limitations as claim 3, therefore it is rejected under the same rationale as claim 3.
Regarding claim 14, claim 14 has similar limitations as claim 4, therefore it is rejected under the same rationale as claim 4.
Regarding claim 15, claim 15 has similar limitations as claim 5, therefore it is rejected under the same rationale as claim 5.
Regarding claim 16, claim 16 has similar limitations as claim 6, therefore it is rejected under the same rationale as claim 6.
Regarding claim 17, claim 17 has similar limitations as claim 7, therefore it is rejected under the same rationale as claim 7.
Regarding claim 18, claim 18 has similar limitations as claim 8, therefore it is rejected under the same rationale as claim 8.
Regarding claim 19, claim 19 has similar limitations as claim 9, therefore it is rejected under the same rationale as claim 9.
Regarding claim 20, claim 20 has similar limitations as claim 10, therefore it is rejected under the same rationale as claim 10.
Regarding claim 21, it recites limitations similar to those of claim 1, but in a computing system form. The rationale of the claim 1 rejection is applied to reject claim 21. In addition, Yamanashi teaches A computing system comprising: one or more processors; and one or more computer readable media storing instructions that, when executed via the one or more processors, cause the computing system to (Yamanashi paragraph [0046] “makeup supporting device 100 includes a CPU (Central Processing Unit), a storage medium such as a ROM (Read Only Memory) in which a control program is stored, and a working memory such as a RAM (Random Access Memory). In this case, the CPU executes the control program to implement a function of each of the above units.”):
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOMING WEI whose telephone number is (571)272-3831. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAOMING WEI/Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611