DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This action is responsive to the following communication: Original claims filed 06/23/23. This action is made non-final.
3. Claims 1-14 are pending in the case. Claims 1, 12 and 14 are independent claims.
Claim Objections
4. Claims 3-7 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 101
5. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claims 1, 12, and 14:
2A Prong 1:
the limitations of "generating pseudo labels for an evaluation dataset ..., wherein the evaluation dataset is an unlabeled dataset; and evaluating performance of the first model using the pseudo labels" reflect an abstract idea (mental process).
2A Prong 2:
the limitations of "obtaining a first model trained using a labeled dataset; obtaining a second model built by performing unsupervised domain adaptation on the first model" reflect additional elements of insignificant extra-solution activity (mere data gathering) and therefore do not integrate the judicial exception into a practical application. See MPEP 2106.05(g).
the limitations "the method being performed by at least one computing device" and "using a second model" are additional elements amounting to mere instructions to apply the judicial exception using generic computing devices and therefore do not integrate the exception into a practical application or provide significantly more. See MPEP 2106.05(f).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitations of "obtaining a first model trained using a labeled dataset; obtaining a second model built by performing unsupervised domain adaptation on the first model" can also be categorized as the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” and therefore do not provide significantly more. See MPEP 2106.05(d)(ii).
the limitations "the method being performed by at least one computing device" and "using a second model" are additional elements amounting to mere instructions to apply the judicial exception using generic computing devices and therefore do not integrate the exception into a practical application or provide significantly more. See MPEP 2106.05(f).
Regarding claim 2:
2A Prong 1:
the limitation of “wherein the unsupervised domain adaptation and the generating of the pseudo labels are performed without using the labeled dataset” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of “wherein the unsupervised domain adaptation and the generating of the pseudo labels are performed without using the labeled dataset” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
Regarding claim 3:
2A Prong 1:
the limitation of “wherein the generating of the pseudo labels comprises: deriving adversarial noise for a data sample belonging to the evaluation dataset; generating a noisy sample by reflecting the derived adversarial noise in the data sample; and generating a pseudo label for the data sample based on a predicted label of the noisy sample obtained through the second model” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of “wherein the generating of the pseudo labels comprises: deriving adversarial noise for a data sample belonging to the evaluation dataset; generating a noisy sample by reflecting the derived adversarial noise in the data sample; and generating a pseudo label for the data sample based on a predicted label of the noisy sample obtained through the second model” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
Regarding claim 4:
2A Prong 1:
the limitation of “obtaining a first predicted label for the data sample through the second model; generating a noisy sample by reflecting a value of a noise parameter in the data sample; obtaining a second predicted label for the noisy sample through the second model; updating the value of the noise parameter in a direction to increase a difference between the first predicted label and the second predicted label; and calculating adversarial noise for the data sample based on the updated value of the noise parameter” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of “obtaining a first predicted label for the data sample through the second model; generating a noisy sample by reflecting a value of a noise parameter in the data sample; obtaining a second predicted label for the noisy sample through the second model; updating the value of the noise parameter in a direction to increase a difference between the first predicted label and the second predicted label; and calculating adversarial noise for the data sample based on the updated value of the noise parameter” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
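For illustration, the noise-derivation steps recited in claim 4 (together with the size constraint of claim 5) describe a gradient-ascent loop. The following is a minimal sketch under stated assumptions: the toy linear softmax "second model", the numerical gradient, and all numeric values are hypothetical and are not taken from the application or the cited art.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q, eps=1e-12):
    """KL divergence between two predicted label distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def derive_adversarial_noise(W, x, steps=10, lr=0.5, eps_ball=0.1, seed=0):
    """Sketch of the claimed loop: predict on the clean sample, predict on a
    noisy sample, update the noise parameter to increase the gap between the
    two predictions, and keep the noise within a preset size constraint."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(scale=1e-3, size=x.shape)  # noise parameter
    p_clean = softmax(W @ x)                      # first predicted label
    h = 1e-5
    for _ in range(steps):
        # numerical gradient of the KL gap w.r.t. the noise parameter
        grad = np.zeros_like(delta)
        for i in range(delta.size):
            d_plus = delta.copy(); d_plus[i] += h
            d_minus = delta.copy(); d_minus[i] -= h
            grad[i] = (kl(p_clean, softmax(W @ (x + d_plus)))
                       - kl(p_clean, softmax(W @ (x + d_minus)))) / (2 * h)
        delta += lr * grad                        # ascend: widen the gap
        norm = np.linalg.norm(delta)
        if norm > eps_ball:                       # preset size constraint
            delta *= eps_ball / norm
    return delta

W = np.array([[1.0, -0.5], [-0.3, 0.8], [0.2, 0.1]])  # hypothetical model
x = np.array([0.5, -0.2])                              # hypothetical sample
noise = derive_adversarial_noise(W, x)
p_clean = softmax(W @ x)
p_noisy = softmax(W @ (x + noise))
print(np.linalg.norm(noise), kl(p_clean, p_noisy))
```

The projection onto the norm ball is one plausible reading of "a range that satisfies a preset size constraint condition"; other constraints would fit the claim language equally well.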
Regarding claim 5:
2A Prong 1:
the limitation of “wherein in the updating of the value of the noise parameter, the value of the noise parameter is updated within a range that satisfies a preset size constraint condition” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of “wherein in the updating of the value of the noise parameter, the value of the noise parameter is updated within a range that satisfies a preset size constraint condition” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
Regarding claim 6:
2A Prong 1:
the limitation of “wherein the difference between the first predicted label and the second predicted label is calculated based on Kullback-Leibler divergence” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of “wherein the difference between the first predicted label and the second predicted label is calculated based on Kullback-Leibler divergence” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
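For illustration, the Kullback-Leibler divergence recited in claim 6 quantifies how far the noisy-sample prediction drifts from the clean-sample prediction. A minimal sketch follows; the two label distributions are hypothetical numbers chosen for the example.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two predicted label distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

first = [0.7, 0.2, 0.1]    # predicted label for the clean sample
second = [0.4, 0.4, 0.2]   # predicted label for the noisy sample
gap = kl_divergence(first, second)
print(gap)  # larger values indicate a larger prediction difference
```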
Regarding claim 7:
2A Prong 1:
the limitation of “wherein the noisy sample comprises a first noisy sample based on a first adversarial noise and a second noisy sample based on a second adversarial noise, wherein the first adversarial noise and the second adversarial noise are respectively derived from noise parameters having different initial values, and wherein the generating of the pseudo label for the data sample comprises generating the pseudo label for the data sample by aggregating a predicted label of the first noisy sample and a predicted label of the second noisy sample” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of “wherein the noisy sample comprises a first noisy sample based on a first adversarial noise and a second noisy sample based on a second adversarial noise, wherein the first adversarial noise and the second adversarial noise are respectively derived from noise parameters having different initial values, and wherein the generating of the pseudo label for the data sample comprises generating the pseudo label for the data sample by aggregating a predicted label of the first noisy sample and a predicted label of the second noisy sample” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
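For illustration, the aggregation recited in claim 7 can be read as combining the predicted label distributions of the two noisy samples; averaging is one plausible aggregation, and all numbers below are hypothetical.

```python
import numpy as np

# Predicted label distributions for two noisy samples derived from noise
# parameters with different initial values (hypothetical numbers).
pred_first = np.array([0.6, 0.3, 0.1])
pred_second = np.array([0.5, 0.4, 0.1])

aggregated = (pred_first + pred_second) / 2   # simple mean aggregation
pseudo_label = int(np.argmax(aggregated))     # pseudo label for the sample
print(pseudo_label)
```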
Regarding claim 8:
2A Prong 1:
the limitation of “wherein the evaluating of the performance of the first model comprises: predicting labels of the evaluation dataset through the first model; and evaluating the performance of the first model by comparing the pseudo labels and the predicted label” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of wherein the evaluating of the performance of the first model comprises: predicting labels of the evaluation dataset through the first model; and evaluating the performance of the first model by comparing the pseudo labels and the predicted label” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
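For illustration, the comparison recited in claim 8 amounts to scoring the first model's predictions against the pseudo labels, yielding a label-free performance estimate. A minimal sketch with hypothetical labels:

```python
import numpy as np

pseudo_labels = np.array([0, 1, 1, 2, 0, 2])  # from the adapted second model
predicted = np.array([0, 1, 2, 2, 0, 1])      # from the first model

# Agreement rate serves as the performance estimate for the first model.
accuracy = float(np.mean(pseudo_labels == predicted))
print(accuracy)
```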
Regarding claim 9:
2A Prong 1:
the limitation of “wherein the labeled dataset is a dataset of a source domain, the evaluation dataset is a dataset of a target domain, and the method further comprising: obtaining a third model trained using a labeled dataset of the source domain; evaluating performance of the third model using the pseudo labels; and selecting a model to be applied to the target domain from among the first model and the third model based on results of evaluating the performance of the first model and evaluating the performance of the third model” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of “wherein the labeled dataset is a dataset of a source domain, the evaluation dataset is a dataset of a target domain, and the method further comprising: obtaining a third model trained using a labeled dataset of the source domain; evaluating performance of the third model using the pseudo labels; and selecting a model to be applied to the target domain from among the first model and the third model based on results of evaluating the performance of the first model and evaluating the performance of the third model” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
Regarding claim 10:
2A Prong 1:
the limitation of “wherein the labeled dataset is a dataset of a first source domain, the evaluation dataset is a dataset of a target domain, and the method further comprising: obtaining a third model trained using a labeled dataset of a second source domain; evaluating performance of the third model using the pseudo labels; and selecting a model to be applied to the target domain from among the first model and the third model based on results of evaluating the performance of the first model and evaluating the performance of the third model” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of “wherein the labeled dataset is a dataset of a first source domain, the evaluation dataset is a dataset of a target domain, and the method further comprising: obtaining a third model trained using a labeled dataset of a second source domain; evaluating performance of the third model using the pseudo labels; and selecting a model to be applied to the target domain from among the first model and the third model based on results of evaluating the performance of the first model and evaluating the performance of the third model” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
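For illustration, the selection step recited in claims 9 and 10 reduces to picking whichever candidate model scores better under the pseudo-label evaluation. A minimal sketch; the scores are hypothetical.

```python
# Hypothetical pseudo-label evaluation scores for the candidate models.
scores = {"first_model": 0.81, "third_model": 0.77}

# Select the model to be applied to the target domain.
selected = max(scores, key=scores.get)
print(selected)
```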
Regarding claim 11:
2A Prong 1:
the limitation of “wherein the evaluation dataset is a more recently generated dataset than the labeled dataset, and the method further comprising determining that the first model needs to be updated in response to a determination that the evaluated performance does not satisfy a predetermined condition” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of “wherein the evaluation dataset is a more recently generated dataset than the labeled dataset, and the method further comprising determining that the first model needs to be updated in response to a determination that the evaluated performance does not satisfy a predetermined condition” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
Regarding claim 13:
2A Prong 1:
the limitation of “wherein the unsupervised domain adaptation and the generating of the pseudo labels are performed without using the labeled dataset” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the limitation of “wherein the unsupervised domain adaptation and the generating of the pseudo labels are performed without using the labeled dataset” (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f)).
Claim Rejections - 35 USC § 102
6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
7. Claims 1-2 and 12-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang (US 20240221346).
Regarding claim 1, Wang discloses a method for evaluating performance, the method being performed by at least one computing device and comprising:
obtaining a first model trained using a labeled dataset (training, [a preset model] based on each pedestrian image in the sample dataset and a class cluster label corresponding to each pedestrian image, see paragraphs 0075-0078);
obtaining a second model built by performing unsupervised domain adaptation on the first model (each pedestrian image and its class cluster label or its pseudo label is used to train the second preset model, which can implement unsupervised training, thereby reducing the cost of annotating each pedestrian image, see paragraph 0082);
generating pseudo labels for an evaluation dataset using the second model, wherein the evaluation dataset is an unlabeled dataset (unsupervised contrastive training step 204: training the second preset model according to the pseudo label assigned to each image in step 203 and a loss function. The loss function constrains the images in the same class cluster to be close to each other in the feature space, and the images in different class clusters to be away from each other in the feature space, see paragraph 0091, see also label-free sample dataset in paragraph 0089); and
evaluating performance of the first model using the pseudo labels (unsupervised contrastive training step 204: training the second preset model according to the pseudo label assigned to each image in step 203 and a loss function. The loss function constrains the images in the same class cluster to be close to each other in the feature space, and the images in different class clusters to be away from each other in the feature space. Through the iterative training process in step 204, the second preset model converges, to obtain a first preset model 205, see paragraphs 0091-0092).
Regarding claim 2, Wang discloses wherein the unsupervised domain adaptation and the generating of the pseudo labels are performed without using the labeled dataset (Unsupervised contrastive training step 204: training the second preset model according to the pseudo label assigned to each image in step 203 and a loss function. The loss function constrains the images in the same class cluster to be close to each other in the feature space, and the images in different class clusters to be away from each other in the feature space, see paragraph 0091).
Regarding claim 12, Wang discloses a system for evaluating performance, the system comprising:
a memory configured to store one or more instructions; and one or more processors configured to execute the one or more stored instructions to perform (see FIG. 10):
obtaining a first model trained using a labeled dataset (training, [a preset model] based on each pedestrian image in the sample dataset and a class cluster label corresponding to each pedestrian image, see paragraphs 0075-0078);
obtaining a second model built by performing unsupervised domain adaptation on the first model (each pedestrian image and its class cluster label or its pseudo label is used to train the second preset model, which can implement unsupervised training, thereby reducing the cost of annotating each pedestrian image, see paragraph 0082);
generating pseudo labels for an evaluation dataset using the second model, wherein the evaluation dataset is an unlabeled dataset (unsupervised contrastive training step 204: training the second preset model according to the pseudo label assigned to each image in step 203 and a loss function. The loss function constrains the images in the same class cluster to be close to each other in the feature space, and the images in different class clusters to be away from each other in the feature space, see paragraph 0091, see also label-free sample dataset in paragraph 0089); and
evaluating performance of the first model using the pseudo labels (unsupervised contrastive training step 204: training the second preset model according to the pseudo label assigned to each image in step 203 and a loss function. The loss function constrains the images in the same class cluster to be close to each other in the feature space, and the images in different class clusters to be away from each other in the feature space. Through the iterative training process in step 204, the second preset model converges, to obtain a first preset model 205, see paragraphs 0091-0092).
Regarding claim 13, the subject matter of the claim is substantially similar to claim 2 and as such the same rationale of rejection applies.
Regarding claim 14, the subject matter of the claim is substantially similar to claim 1 and as such the same rationale of rejection applies.
Claim Rejections - 35 USC § 103
8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Li (US 20230359900).
Regarding claim 8, Wang does not disclose wherein the evaluating of the performance of the first model comprises: predicting labels of the evaluation dataset through the first model; and evaluating the performance of the first model by comparing the pseudo labels and the predicted labels.
However, Li discloses masked self-training (MaST), which is an unsupervised learning approach. The MaST framework employs two complementary sources of supervision: pseudo-labels generated from an unmasked image and raw image pixels of the masked portion of the raw image. Specifically, MaST jointly optimizes three objectives to finetune a pre-trained classification model on unlabeled images: (1) a self-training objective to learn global task-specific class prediction by comparing pseudo-labels generated from unmasked images and predicted labels from masked images; (2) a masked image modeling objective to learn local pixel-level information by comparing predicted pixel values of the masked patches and raw pixel values of the masked patches; and (3) a global-local feature alignment objective to bridge the knowledge learned from the two sources of supervision (1) and (2) (see paragraph 0018).
The combination of Wang and Li would have resulted in the training of the models to further utilize Li’s teachings of comparing pseudo labels to predicted labels. One would have been motivated to have combined the references as a user of Wang is already interested in providing better models by comparing them to each other. As such, the combination of teachings would have been obvious to one of ordinary skill in the art as the resulting combination would have been predictable.
10. Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Zhang (US 20230010651).
Regarding claim 9, Wang discloses wherein the labeled dataset is a dataset of a source domain, the evaluation dataset is a dataset of a target domain, and the method further comprising:
obtaining a third model trained using a labeled dataset of the source domain (training, [a preset model] based on each pedestrian image in the sample dataset and a class cluster label corresponding to each pedestrian image, see paragraphs 0075-0078);
evaluating performance of the third model using the pseudo labels (unsupervised contrastive training step 204: training the second preset model according to the pseudo label assigned to each image in step 203 and a loss function. The loss function constrains the images in the same class cluster to be close to each other in the feature space, and the images in different class clusters to be away from each other in the feature space. Through the iterative training process in step 204, the second preset model converges, to obtain a first preset model 205, see paragraphs 0091-0092); and
Wang does not disclose selecting a model to be applied to the target domain from among the first model and the third model based on results of evaluating the performance of the first model and evaluating the performance of the third model.
However, Zhang discloses wherein, if, based on the evaluation at step 210, the performance of the train data-driven model is determined to be relatively poor in comparison to the performance of the optimized sensor fusion model (for example, it produces results that are outside of a predetermined range or threshold of the results attained in the evaluation of the optimized sensor fusion model), then the train data-driven model is not selected for possible use in the operation of the robot 106. In such a situation, the optimized sensor fusion model may, however, remain in consideration for use in the operation of the robot 106 (see paragraph 0044).
The combination of Wang and Zhang would have resulted in the training of the models to further utilize Zhang’s teachings of comparing different models to each other. One would have been motivated to have combined the references as a user of Wang is already interested in providing better models by comparing them to each other. As such, the combination of teachings would have been obvious to one of ordinary skill in the art as the resulting combination would have been predictable.
Regarding claim 10, Wang discloses wherein the labeled dataset is a dataset of a first source domain, the evaluation dataset is a dataset of a target domain, and the method further comprising:
obtaining a third model trained using a labeled dataset of a second source domain (each pedestrian image and its class cluster label or its pseudo label is used to train the second preset model, which can implement unsupervised training, thereby reducing the cost of annotating each pedestrian image, see paragraph 0082);
evaluating performance of the third model using the pseudo labels (unsupervised contrastive training step 204: training the second preset model according to the pseudo label assigned to each image in step 203 and a loss function. The loss function constrains the images in the same class cluster to be close to each other in the feature space, and the images in different class clusters to be away from each other in the feature space. Through the iterative training process in step 204, the second preset model converges, to obtain a first preset model 205, see paragraphs 0091-0092); and
Wang does not disclose selecting a model to be applied to the target domain from among the first model and the third model based on results of evaluating the performance of the first model and evaluating the performance of the third model.
However, Zhang discloses wherein, if, based on the evaluation at step 210, the performance of the train data-driven model is determined to be relatively poor in comparison to the performance of the optimized sensor fusion model (for example, it produces results that are outside of a predetermined range or threshold of the results attained in the evaluation of the optimized sensor fusion model), then the train data-driven model is not selected for possible use in the operation of the robot 106. In such a situation, the optimized sensor fusion model may, however, remain in consideration for use in the operation of the robot 106 (see paragraph 0044).
The combination of Wang and Zhang would have resulted in the training of the models to further utilize Zhang’s teachings of comparing different models to each other. One would have been motivated to have combined the references as a user of Wang is already interested in providing better models by comparing them to each other. As such, the combination of teachings would have been obvious to one of ordinary skill in the art as the resulting combination would have been predictable.
11. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Rossi (US 20220229721).
Regarding claim 11, Wang does not disclose wherein the evaluation dataset is a more recently generated dataset than the labeled dataset.
However, Rossi discloses wherein to appropriately select an outlier-detection program that will provide the best performance for a given dataset, the user must already know a subset of outliers in the new dataset for evaluation of the available outlier-detection programs, which requires at least some of the data entries in the dataset to be labeled as outlier (paragraph 0004).
The combination of Wang and Rossi would have resulted in the training of the models to further utilize Rossi’s teachings of comparing different models to each other. One would have been motivated to have combined the references as a user of Wang is already interested in providing better models by comparing them to each other. As such, the combination of teachings would have been obvious to one of ordinary skill in the art as the resulting combination would have been predictable.
Wang discloses wherein the method further comprising determining that the first model needs to be updated in response to a determination that the evaluated performance does not satisfy a predetermined condition (according to the processing results from the first preset model and a loss function corresponding to the first preset model, a function value of the loss function is computed. In addition, the first preset model is updated based on the function value of the loss function, until the first preset model meets a convergence condition, for example, a number of updates reaches a first preset threshold, the function value of the loss function is less than a second preset threshold, or the function value of the loss function no longer changes, and then the converged first preset model is determined as the pedestrian re-identification model that can be used to complete a pedestrian re-identification task, see paragraph 0052).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID E CHOI whose telephone number is (571)270-3780. The examiner can normally be reached on M-F: 7-2, 7-10 (PST). If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bechtold, Michelle T. can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID E CHOI/Primary Examiner, Art Unit 2148