DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 20, 2026 has been entered.
Response to Amendment
Applicant’s amendment filed January 20, 2026 has been entered and made of record. Claims 1, 14 and 19 are amended. Claims 1-20 are pending.
Applicant’s remarks in view of the newly presented amendments have been considered but are not found to be persuasive for at least the following reasons:
101 Rejection
Applicant has amended independent claim 14 to include the newly added limitations of:
annotating, by a team, each image of the set to define a plurality of facial landmark annotations; and
annotating, by a team, each image of the set with a binary annotation for at least two facial pose attributes.
This amendment does not overcome the 101 abstract idea rejection. Performing the annotation by a team does not change the fact that annotation is a manual/mental operation that merely organizes human activity; if anything, reciting a team reinforces that the act of annotating images is performed mentally/manually and is therefore not patent eligible. Manually annotating a group of images, regardless of the number or expertise of the annotators, is not a patentable concept. The rejection is accordingly maintained.
102 Rejection in view of Alzamzmi
Applicant has amended the claims to include the language:
wherein each image of the set of training images includes a plurality of facial landmark annotations and a binary annotation for at least two facial pose attributes;
Alzamzmi is primarily concerned with the binary annotation of either a pain or a no-pain facial expression. A secondary reference, USPN 2020/0129380 to Sazonov, is cited to teach multiple binary annotations for facial pose attributes (see paragraph [0051]).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 14-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of identifying infant faces in images without significantly more.
Claim 14 recites:
A method of producing a set of training images for training a convolutional neural network (CNN) model to identify a face of an infant in a test image, the method comprising the steps of:
providing a set of facial images of a plurality of different human infants; and
annotating, by a team, each image of the set to define a plurality of facial landmarks; and
annotating, by a team, each image of the set with a binary annotation for at least two facial pose attributes.
Claim 14 recites a process including the steps of providing images, annotating images with facial landmarks, and annotating images with a facial pose. It is unclear how this results in a trained neural network. There is no indication in the claim that these steps are performed by a computer or processor; they could therefore be performed mentally and/or manually. Indeed, as the claim is written, it merely reads on a process of collecting and labeling images. There is no indication of the CNN actually being trained in the method. Images are merely gathered and annotated, which can of course be performed mentally and/or manually.
This judicial exception is not integrated into a practical application because there is no explanation of training the neural network. The steps of collecting and labeling images are merely gathering image data with no recited practical application. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claim recites no computer or processor at all.
The steps are not directed to a practical application or an improvement because the end result is merely an annotated set of images with no recitation of how or why the images are annotated and no recitation of how the annotated images are used to train a CNN. Gathering and labeling images is a method that can be performed manually/mentally and is therefore not patent eligible.
Performing the annotations by a team does not change the fact that annotation is a manual/mental operation that merely organizes human activity; if anything, reciting a team reinforces that the act of annotating images is performed mentally/manually and is therefore not patent eligible. Manually annotating a group of images, regardless of the number or expertise of the annotators, is not a patentable concept. The rejection is accordingly maintained.
Claim 15 is also rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, such as an abstract idea without significantly more.
Claim 15 recites facial landmark details that do not amount to significantly more than the abstract idea.
Claim 16 is also rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, such as an abstract idea without significantly more.
Claim 16 recites a known facial landmark dataset that does not amount to significantly more than the abstract idea.
Claim 17 is also rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, such as an abstract idea without significantly more.
Claim 17 recites a kind of annotation of the image that does not amount to significantly more than the abstract idea.
Claim 18 is also rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, such as an abstract idea without significantly more.
Claim 18 merely recites a kind of annotation of the image that does not amount to significantly more than the abstract idea of collecting annotated images.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-8, 10-12, 14 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of USPNs 2021/0030354 to Alzamzmi et al. and 2020/0129380 to Sazonov et al.
With regard to claim 1, Alzamzmi discloses a method for identifying a face of an infant in an image in a computer, the computer comprising:
at least one processor (paragraphs [0016] and [0058], see processor); and
at least one memory (paragraph [0055], a memory is used to store the program for performing the method) including instructions for executing the method by the at least one processor;
the method comprising:
storing in the at least one memory a convolutional neural network (CNN) model trained with a set of training images and programmed for identifying a face of an infant in a test image suspected of including an infant's face, (paragraphs [0012]-[0013] and [0029], A CNN is disclosed that has been trained with a training set of images of infant faces),
wherein each image of the set of training images includes a plurality of facial landmark annotations and a binary annotation for at least two facial pose attributes (paragraph [0036] and claims 4, 7, 14 and 17, facial landmarks are identified and annotated accordingly. The ZFace face tracker is used and identifies 49 facial landmark points. These are considered facial landmark annotations. See also paragraph [0025], the Facial Action Coding System or FACS is used to identify and track facial expressions. FACS is also considered to identify and annotate facial landmarks. Applicant’s specification indicates that a pose annotation refers to whether the infant’s face is excessively expressive. Alzamzmi discloses identifying pain expression in the infant’s facial expression at paragraphs [0012]-[0013] and [0024]-[0027]. The facial pose attribute annotation is considered the determination of a pain or no-pain facial expression for the infant);
receiving a first image suspected of including an infant's face (paragraphs [0012]-[0013], [0030], and [0036], and Fig. 3 and paragraphs [0051]-[0052], Infants are imaged and face detection is performed to identify the location of the face and facial landmarks); and
processing the first image using the CNN model, whereby the infant's face is identified in the first image based on an estimation of facial landmarks in the first image by the CNN model (paragraphs [0012]-[0013], [0030], and [0036], and Fig. 3 and paragraphs [0051]-[0052], Infants are imaged and face detection is performed to identify the location of the face and facial landmarks).
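For illustration only — a minimal sketch of how one training record carrying the annotations recited above might be represented. The field names are hypothetical, the 49-point landmark count follows Alzamzmi’s ZFace tracker, and the binary pose attributes follow the pain/no-pain reading discussed above; this is not code disclosed by either reference.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class AnnotatedTrainingImage:
    """One training image carrying the claimed annotations (illustrative).

    The 49 landmark points follow Alzamzmi's ZFace tracker; the binary
    pose attributes follow the pain/no-pain reading plus one
    Sazonov-style attribute. All field names are hypothetical.
    """
    image: np.ndarray                 # H x W x 3 pixel data
    landmarks: np.ndarray             # (49, 2) array of (x, y) points
    pose_attributes: dict = field(default_factory=dict)  # name -> bool

record = AnnotatedTrainingImage(
    image=np.zeros((224, 224, 3), dtype=np.uint8),
    landmarks=np.zeros((49, 2), dtype=np.float32),
    pose_attributes={"pain": False, "eyes_closed": True},
)
```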
Applicant has amended the claims to include the language:
wherein each image of the set of training images includes a plurality of facial landmark annotations and a binary annotation for at least two facial pose attributes;
Alzamzmi is primarily concerned with the binary annotation of either a pain or a no-pain facial expression. A secondary reference, USPN 2020/0129380 to Sazonov, is cited to teach multiple binary annotations for facial pose attributes (see paragraph [0051]). Sazonov is directed to monitoring an infant’s facial expression and teaches that face/label pairs are used to train SVM classifiers operating on features extracted by a pre-trained CNN (paragraph [0051]). Sazonov also teaches that several infant facial classifiers are used to annotate facial pose attributes such as “calm and awake,” “crying,” “eyes closed,” “turning away,” etc. (paragraph [0051]). Each of these is interpreted as a binary classifier annotation of a condition being either present or not present for an infant’s face.
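For illustration only — a minimal sketch, under stated assumptions, of the classifier arrangement characterized above from Sazonov’s paragraph [0051]: one independent binary SVM per facial pose attribute, trained on features extracted by a pre-trained CNN. The random vectors below stand in for real CNN features, and all names are hypothetical; this is not Sazonov’s implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))      # stand-in for CNN feature vectors
labels = {                                  # one binary label set per attribute
    "calm_and_awake": rng.integers(0, 2, 200),
    "crying":         rng.integers(0, 2, 200),
    "eyes_closed":    rng.integers(0, 2, 200),
    "turning_away":   rng.integers(0, 2, 200),
}

# Train an independent present/not-present classifier for each attribute.
classifiers = {name: SVC(kernel="linear").fit(features, y)
               for name, y in labels.items()}

# Annotating a new face: run every classifier on its CNN feature vector.
new_feature = rng.normal(size=(1, 512))
annotations = {name: bool(clf.predict(new_feature)[0])
               for name, clf in classifiers.items()}
```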
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to annotate additional infant facial pose attributes as taught by Sazonov in combination with the facial expression annotation taught by Alzamzmi in order to better document and analyze the infant facial images.
With regard to claim 4, Alzamzmi discloses the method of claim 1, wherein the at least one facial pose attribute annotation includes a binary annotation indicating at least one of whether the infant’s face is turned, tilted, occluded, or excessively expressive (paragraph [0036] and claims 4, 7, 14 and 17, facial landmarks are identified and annotated accordingly. Applicant’s specification indicates that a pose annotation refers to whether the infant’s face is excessively expressive. Alzamzmi discloses identifying pain expression in the infant’s facial expression at paragraphs [0012] and [0024]-[0027], and the indication is one of either pain or no pain detected). See also Sazonov paragraph [0051] for infant facial annotations including “calm and awake,” “crying,” “eyes closed,” “turning away,” etc.
With regard to claim 5, Alzamzmi discloses the method of claim 1, wherein processing the first image further comprises identifying, as one or more identified facial landmarks, one or more of the plurality of facial landmark annotations of the identified infant’s face in the test image (paragraph [0036], the ZFace tracker is used to detect the face and obtain 49 facial landmarks. These are considered facial landmark annotations. See also paragraph [0025], the Facial Action Coding System or FACS is used to identify and track facial expressions. FACS is also considered to identify and annotate facial landmarks).
With regard to claim 6, Alzamzmi discloses the method of claim 5, wherein a series of test images are processed, the series of test images obtained from a video recording of an infant (paragraphs [0013] and [0030]-[0031], the infant’s face is monitored with video imaging).
With regard to claim 7, Alzamzmi discloses the method of claim 6, wherein the one or more identified facial landmarks are located in each image of the series of test images, and wherein the one or more identified facial landmarks are tracked from image to image (paragraph [0036], the infant face is tracked across multiple video frames. ZFace tracker is used to detect the face and obtain 49 facial landmarks. These are considered facial landmark annotations. See also paragraph [0025], the Facial Action Coding System or FACS is used to identify and track facial expressions. FACS is also considered to identify and annotate facial landmarks).
With regard to claim 8, Alzamzmi discloses the method of claim 1, wherein the method is used in at least one of a method of identifying a behavior, identifying a developmental stage, diagnosing a developmental abnormality, or diagnosing a medical condition of an infant depicted in the test image (paragraphs [0012], [0024]-[0027], and [0035]-[0036], Alzamzmi discloses identifying pain expression in the infant’s facial expression as well as crying and movement to monitor the infant for a pain/medical condition and/or behavior).
With regard to claim 10, Alzamzmi discloses the method of claim 1, wherein the method is used to identify an individual infant depicted in the test image (paragraph [0013], The method identifies an infant of interest to determine if they are in pain).
With regard to claim 11, Alzamzmi discloses the method of claim 1, further comprising jointly training the CNN model by rotating training between the set of training images and a second set of training images (paragraphs [0013], [0015], [0028]-[0029], [0038] and [0042]-[0043], Several different training image sets are disclosed to be used in the system. The training images are also accordingly expanded by augmenting the existing training images to generate new training images or additional training datasets).
With regard to claim 12, Alzamzmi discloses the method of claim 1, further comprising jointly training the CNN model by rotating training between the CNN model for identifying a face of an infant and a second CNN model for identifying a face of an infant (paragraphs [0013], [0015], [0028]-[0029], [0038], [0042]-[0043] and [0048]-[0050], Several different training image sets are disclosed to be used in the system as well as several different CNN models. The training images are also accordingly expanded by augmenting the existing training images to generate new training images or additional training datasets).
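For illustration only — a minimal PyTorch-style sketch of the “rotating training” reading applied to claims 11 and 12 above: alternating epochs between two training sets (claim 11) or between two CNN models (claim 12). All model, loader, and loss-function construction is elided and assumed; this is not code disclosed by Alzamzmi.

```python
# Illustrative sketch only (not Alzamzmi's code): "rotating training" read
# onto claims 11 and 12. Model, loader, and loss construction are elided.
import torch

def rotate_datasets(model, loaders, optimizer, loss_fn, epochs=10):
    """Claim 11 reading: alternate each epoch between training sets."""
    for epoch in range(epochs):
        loader = loaders[epoch % len(loaders)]    # rotate between sets
        for images, targets in loader:
            optimizer.zero_grad()
            loss_fn(model(images), targets).backward()
            optimizer.step()

def rotate_models(models, optimizers, loader, loss_fn, epochs=10):
    """Claim 12 reading: alternate each epoch between two CNN models."""
    for epoch in range(epochs):
        i = epoch % len(models)                   # rotate between models
        for images, targets in loader:
            optimizers[i].zero_grad()
            loss_fn(models[i](images), targets).backward()
            optimizers[i].step()
```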
With regard to claim 14, the discussion of claim 1 applies. Alzamzmi discloses a method of producing a set of training images for training a convolutional neural network (CNN) model to identify a face of an infant in a test image (paragraphs [0012]-[0013] and [0029], A CNN is disclosed that has been trained with a training set of images of infant faces), the method comprising:
providing a set of facial images of a plurality of different human infants (paragraphs [0012]-[0013] and [0029], A CNN is disclosed that has been trained with a training set of images of infant faces);
annotating, by a team, each image of the set to define a plurality of facial landmark annotations (paragraphs [0013], [0015]-[0016], [0027], [0035], [0043], [0045], and [0047]-[0052], Alzamzmi teaches a Neonatal Pain Assessment Database (NPAD) and a Classification of Pain Expression (COPE) database. Both of these databases are examples of images collected and annotated over time by a “team,” presumably of more than one person. They are standard databases that are designed and used for the purpose of training a neural network to classify infant facial images. See also paragraphs [0036] and [0025] and claims 4, 7, 14 and 17, facial landmarks are identified and annotated accordingly. ZFace tracks 49 facial landmarks, and Neonatal FACS is also introduced. The Facial Action Coding System is a standard facial expression recognition system that operates on facial landmarks and their changes to determine an intensity degree according to the different annotated landmarks of the imaged face); and
annotating, by a team, each image of the set with a binary annotation for at least two facial pose attributes (see the discussion of NPAD and COPE above. Applicant’s specification indicates that a pose annotation refers to whether the infant’s face is excessively expressive. Alzamzmi discloses identifying pain expression in the infant’s facial expression at paragraphs [0012] and [0024]-[0027]. Neonatal FACS is also introduced, and each recognized facial expression and intensity degree is also interpreted as a facial pose attribute. The facial pose attribute annotation is considered the determination of a pain or no-pain facial expression for the infant).
Applicant has amended the claims to include the language:
annotating, by a team, each image of the set with a binary annotation for at least two facial pose attributes;
Alzamzmi is primarily concerned with the binary annotation of either a pain or a no-pain facial expression. A secondary reference, USPN 2020/0129380 to Sazonov, is cited to teach multiple binary annotations for facial pose attributes (see paragraph [0051]). Sazonov is directed to monitoring an infant’s facial expression and teaches that face/label pairs are used to train SVM classifiers operating on features extracted by a pre-trained CNN (paragraph [0051]). Sazonov also teaches that several infant facial classifiers are used to annotate facial pose attributes such as “calm and awake,” “crying,” “eyes closed,” “turning away,” etc. (paragraph [0051]). Each of these is interpreted as a binary classifier annotation of a condition being either present or not present for an infant’s face.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to annotate additional infant facial pose attributes as taught by Sazonov in combination with the facial expression annotation taught by Alzamzmi in order to better document and analyze the infant facial images.
With regard to claim 17, Alzamzmi discloses the method of claim 14, wherein the at least one pose annotation includes a binary annotation indicating at least one of whether the infant’s face is turned, tilted, occluded, or excessively expressive (Alzamzmi discloses identifying pain expression in the infant’s facial expression at paragraphs [0012] and [0024]-[0027]. Neonatal FACS is also introduced. The Facial Action Coding System is a standard facial expression recognition system that operates on facial landmarks and their changes to determine an intensity degree according to the different annotated landmarks of the imaged face. Each recognized facial expression and intensity degree is also interpreted as a facial pose attribute).
With regard to claim 18, Alzamzmi discloses the method of claim 17, wherein the step of annotating each image of the set with at least one facial pose attribute further comprises at least one of:
applying a binary annotation indicating the infant's face is turned if at least one of the eyes, nose, and mouth are not clearly visible;
applying a binary annotation indicating the infant's face is tilted if the head axis, projected on the image plane, is 45o or more beyond upright;
applying a binary annotation indicating the infant's face is occluded if landmarks are covered by body parts or objects; or
applying a binary annotation indicating the infant's face is excessively expressive if the facial muscles are tense due to an exaggerated facial expression (Alzamzmi discloses identifying pain expression in the infant’s facial expression at paragraphs [0012]-[0017], [0024]-[0027], [0034]-[0035] and [0052]. The binary annotation is interpreted as either a pain or no pain facial expression being identified. Neonatal FACS is also introduced. Facial Action Coding System is a standard facial expression recognition system that operates using facial landmarks and their changes for determining an intensity degree according to the different annotated landmarks of the imaged face. Each recognized facial expression and intensity degree is also interpreted as a facial pose attribute).
With regard to claim 19, Alzamzmi discloses a system for identifying a face of an infant in an image, the system comprising a computer comprising:
at least one processor (Fig. 3, processors 310 and 315); and
at least one memory (Fig. 3, memory 320) including instructions that, when executed by the at least one processor, cause the computer to:
receive a test image suspected of including an infant's face (paragraphs [0012]-[0013], [0030], and [0036], and Fig. 3 and paragraphs [0051]-[0052], Infants are imaged and face detection is performed to identify the location of the face and facial landmarks); and
identify, using a convolutional neural network (CNN) model, a face of an infant in the test image, wherein the CNN model is trained with a set of training images and programmed for identifying infant faces in images and (paragraphs [0012]-[0013] and [0029], A CNN is disclosed that has been trained with a training set of images of infant faces),wherein the set of training images is obtained by causing the computer to:
receive a set of facial images of a plurality of different human infants (paragraphs [0012]-[0013] and [0029], A CNN is disclosed that has been trained with a training set of images of infant faces);
annotate each image of the set of facial images to define a plurality of facial landmark annotations (paragraph [0036] and claims 4, 7, 14 and 17, facial landmarks are identified and annotated accordingly. The ZFace face tracker is used and identifies 49 facial landmark points. These are considered facial landmark annotations. See also paragraph [0025], the Facial Action Coding System or FACS is used to identify and track facial expressions. FACS is also considered to identify and annotate facial landmarks. Applicant’s specification indicates that a pose annotation refers to whether the infant’s face is excessively expressive. Alzamzmi discloses identifying pain expression in the infant’s facial expression at paragraphs [0012]-[0013] and [0024]-[0027]. The facial pose attribute annotation is considered the determination of a pain or no-pain facial expression for the infant); and
annotate each image of the set of facial images with a binary annotation for at least two facial pose attributes (Alzamzmi discloses identifying pain expression in the infant’s facial expression at paragraphs [0012]-[0013] and [0024]-[0027]. The facial pose attribute annotation is considered the determination of a pain or no-pain facial expression for the infant).
Applicant has amended the claims to include the language:
annotate each image of the set of facial images with a binary annotation for at least two facial pose attributes;
Alzamzmi is primarily concerned with the binary annotation of either a pain or a no-pain facial expression. A secondary reference, USPN 2020/0129380 to Sazonov, is cited to teach multiple binary annotations for facial pose attributes (see paragraph [0051]). Sazonov is directed to monitoring an infant’s facial expression and teaches that face/label pairs are used to train SVM classifiers operating on features extracted by a pre-trained CNN (paragraph [0051]). Sazonov also teaches that several infant facial classifiers are used to annotate facial pose attributes such as “calm and awake,” “crying,” “eyes closed,” “turning away,” etc. (paragraph [0051]). Each of these is interpreted as a binary classifier annotation of a condition being either present or not present for an infant’s face.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to annotate additional infant facial pose attributes as taught by Sazonov in combination with the facial expression annotation taught by Alzamzmi in order to better document and analyze the infant facial images.
With regard to claim 20, Alzamzmi discloses the system of claim 19, further comprising an imaging system in electronic communication with the computer, the imaging system configured to capture a series of test images of an infant and to provide the series of test images to the computer as the set of facial images (Fig. 3, camera 330 and image data interface 325, and paragraphs [0012]-[0013] and [0029], A CNN is disclosed that has been trained with a training set of images of infant faces).
Claims 2 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of USPNs 2021/0030354 to Alzamzmi et al., 2020/0129380 to Sazonov et al., 2019/0294861 to Quinteros, and 2019/0200872 to Matsuoka et al.
With regard to claims 2 and 15, Alzamzmi and Sazonov disclose the methods of claims 1 and 14, but do not disclose wherein the plurality of facial landmark annotations include at least interocular distance and minimal containment box.
Both interocular distance and bounding boxes are fairly common in facial image processing.
Quinteros discloses a system for determining if a face belongs to an adult or a child by measuring an interocular distance (paragraph [0020] and Figs. 4A and 4B). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use an interocular distance when evaluating a face as taught by Quinteros in the facial landmark identification of Alzamzmi in order to determine information about the face.
Matsuoka discloses a system for monitoring an infant similar to the infant monitoring of Alzamzmi and further teaches the use of a bounding box to identify the most relevant portion of the infant (paragraph [0131] and Fig. 9). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the bounding box determination taught by Matsuoka to identify and monitor the most relevant portion of the infant in the imaging of Alzamzmi.
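For illustration only — a minimal sketch computing the two annotations recited in claims 2 and 15, interocular distance and a minimal containment box, from annotated landmark points. The landmark indexing is hypothetical.

```python
import numpy as np

def interocular_distance(left_eye, right_eye):
    """Euclidean distance between the two annotated eye centers."""
    return float(np.linalg.norm(np.asarray(left_eye) - np.asarray(right_eye)))

def minimal_containment_box(landmarks):
    """Smallest axis-aligned box containing every landmark point."""
    pts = np.asarray(landmarks, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return x_min, y_min, x_max, y_max

pts = np.array([[120, 80], [160, 82], [140, 110], [138, 140]], dtype=float)
iod = interocular_distance(pts[0], pts[1])   # eyes assumed at indices 0 and 1
box = minimal_containment_box(pts)           # (120.0, 80.0, 160.0, 140.0)
```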
Claims 3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of USPNs 2021/0030354 to Alzamzmi et al., 2020/0129380 to Sazonov et al., and 2022/0358645 to Modayur.
With regard to claims 3 and 16, Alzamzmi and Sazonov disclose the methods of claims 1 and 14, and Alzamzmi discloses identifying a plurality of landmarks (paragraph [0036] and claims 4, 7, 14 and 17, facial landmarks are identified and annotated accordingly).
Alzamzmi does not explicitly disclose wherein the plurality of facial landmark annotations adhere to a Multi-PIE layout.
The CMU Multi-PIE database is a well-known standard that dates back to 2008.
Modayur teaches an infant monitoring system similar to Alzamzmi and further teaches identifying facial feature landmarks using Multi-PIE as well as a number of other feature recognition systems (paragraphs [0019] and [0046]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the well-known Multi-PIE layout as disclosed by Modayur in the facial landmark recognition of Alzamzmi.
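For reference, the conventional 68-point Multi-PIE landmark layout (as popularized by the 300-W benchmark) assigns fixed index ranges to each facial region; the sketch below records those ranges, with the helper function being an illustrative assumption.

```python
# Conventional 68-point Multi-PIE landmark layout (as standardized by the
# 300-W benchmark): index ranges for each facial region.
MULTI_PIE_68 = {
    "jaw":           range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),
    "right_eye":     range(36, 42),
    "left_eye":      range(42, 48),
    "mouth":         range(48, 68),
}

def region_points(landmarks, region):
    """Select the landmark points belonging to one facial region."""
    return [landmarks[i] for i in MULTI_PIE_68[region]]
```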
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of USPNs 2021/0030354 to Alzamzmi et al., 2020/0129380 to Sazonov et al., and 2023/0309915 to Salekin et al.
With regard to claim 9, Alzamzmi and Sazonov disclose the method of claim 8, but do not explicitly disclose wherein the behavior is non-nutritive sucking behavior.
Salekin discloses a system similar to that of Alzamzmi, sharing common inventors from the same assignee, and further teaches identifying non-nutritive sucking behavior, such as sucking on a pacifier, by identifying the infant’s facial sucking motion as an expression (paragraph [0074]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the sucking identification of Salekin in combination with the infant facial expression monitoring of Alzamzmi in order to detect infant facial activity.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of USPNs 2021/0030354 to Alzamzmi et al., 2020/0129380 to Sazonov et al., and 2023/0114980 to Gupta.
With regard to claim 13, Alzamzmi and Sazonov disclose the method of claim 1, but do not explicitly disclose wherein the CNN model includes at least one of HRNet, HRNetV2-W18, HRNet-R90JT, HRNet-R90FT, HRNet-R150GJT, 3FabRec, RetinaFace, or combinations thereof.
Gupta discloses that HRNet and other alternative landmark detectors are used in performing facial recognition (paragraphs [0056], [0058] and [0083]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use HRNet and/or the other landmark detectors taught by Gupta in the facial landmark recognition of Alzamzmi.
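For illustration only — HRNet-family landmark detectors typically regress one heatmap per landmark, with coordinates decoded from each map's peak. The sketch below shows that decoding step; the network itself is elided, and the shapes are assumptions rather than anything disclosed by Gupta.

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """heatmaps: (num_landmarks, H, W) -> (num_landmarks, 2) (x, y) peaks."""
    n, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(n, -1).argmax(axis=1)  # peak per landmark
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1).astype(float)

maps = np.random.rand(68, 64, 64)   # stand-in for network output
coords = decode_heatmaps(maps)      # per-landmark (x, y) in heatmap space
```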
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY J TUCKER whose telephone number is (571)272-7427. The examiner can normally be reached 9AM-5PM Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN VILLECCO can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESLEY J TUCKER/Primary Examiner, Art Unit 2661