Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to the application filed 09 May 2023. Claims 1-21 are pending and have been examined.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09 May 2023 has been considered by the examiner.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-7, 9, 11-14, and 16-21 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Li, et al. (US 2021/0089883 A1, hereinafter "Li-1").
Regarding Claim 1, Li-1 teaches:
A processor-implemented method (Li-1, [0024]: "In some examples, memory 120 may include non-transitory, tangible, machine readable media that includes executable code that when run by one or more processors (e.g., processor 110) may cause the one or more processors to perform the counting methods described in further detail herein"), the method comprising:
determining a prediction loss based on class prediction data obtained by applying a first machine learning model to a training input and a class label with which the training input is labeled (Li-1, [0030]: "the warm up process is performed on the two networks for a few (e.g., 3-5) epochs by training on all data of the dataset using a cross-entropy loss. A cross-entropy loss l(θ) may be used to indicate how well the model fits the training samples. In some examples, a standard cross-entropy loss may be determined as follows:

l(θ) = {l_i}_{i=1}^N = {−Σ_{c=1}^C y_i^c log(p_model^c(x_i; θ))}_{i=1}^N   (1)

where p_model^c is the model's output softmax probability for class c, D = (X, Y) = {(χ_i, y_i)}_{i=1}^N denotes the training data, χ_i is a sample (e.g., an image), y ∈ {0,1}^C is the one-hot label over C classes, and θ denotes the model parameters," where Li's first of two models corresponds to the instant first model, and where Li's cross-entropy loss is calculated as a sum over all N training inputs);
determining a confidence of the class label based on the determined prediction loss (Li-1, [0030]: "p_model^c is the model's output softmax probability for class c," where Li-1's probability for class c corresponds to the instant confidence, which Li-1 corrects for being over-confident, [0031]: "while the warm up process using the standard cross-entropy loss as computed using equation (1) may be effective for symmetric (e.g., uniformly random) label noise, such a warm up process ... may quickly overfit to noise during warm up and produce over-confident (low entropy) predictions"); and
training a second machine learning model using the training input based on the determined confidence (Li-1, Fig. 2, step 204, "Perform a warm up process to 1st and 2nd networks including applying a confidence penalty for asymmetric noise" and step 206, "At i-th epoch, for each of the first and second networks, model per-sample loss with one network to generate clean probability for the other network," where Li's per-sample loss corresponds to the instant training input).
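For clarity of the record, the per-sample cross-entropy loss of Li-1's equation (1) may be illustrated by the following sketch (illustration only; the function names, logits, and label are the examiner's assumptions, not Li-1's):

```python
import math

def softmax(logits):
    """Convert raw model outputs into class probabilities."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, one_hot):
    """Per-sample loss l_i = -sum_c y_i^c * log(p_model^c), as in Li-1's Eq. (1)."""
    return -sum(y * math.log(p) for y, p in zip(one_hot, probs))

# One training sample over C = 3 classes, labeled (one-hot) as class 0.
probs = softmax([2.0, 0.5, -1.0])
loss = cross_entropy(probs, [1, 0, 0])
```

A low per-sample loss indicates the model fits the labeled sample well; summing such losses over all N training inputs yields Li-1's l(θ).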
Regarding Claim 13, Li-1 teaches:
A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, configure the one or more processors (Li-1, [0025]: "One or more of the processes 202-222 of method 200 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the processes") to perform the method of claim 1.
Regarding Claim 14, Li-1 teaches:
An electronic apparatus (Li-1, Fig. 1, Computing Device 100) comprising: one or more processors (Li-1, Fig. 1, Processor 110) configured to perform the method of claim 1.
Regarding Claim 3, the rejection of Claim 1 is incorporated. Li-1 teaches:
wherein the determining of the confidence comprises determining a confidence that represents a probability that the class label is identical to a real class of the training input (Li-1, [0030]: "A cross-entropy loss l(θ) may be used to indicate how well the model fits the training samples. ... where p_model^c is the model's output softmax probability for class c" and [0032]: "at block 204, the warm up process may apply a confidence penalty for asymmetric noise, for example, by adding a negative entropy term, −H, to the cross-entropy loss l(θ)," where Li-1's softmax probability corresponds to the instant confidence probability).
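The confidence penalty of Li-1's [0032] adds a negative entropy term to the cross-entropy loss. The following sketch illustrates why that term disfavors over-confident (low-entropy) outputs (the weight beta and the probability vectors are illustrative assumptions, not Li-1's):

```python
import math

def entropy(probs):
    """Shannon entropy H(p) of a probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def penalized_loss(ce_loss, probs, beta=0.5):
    """Cross-entropy loss plus the negative entropy term -beta * H(p)."""
    return ce_loss - beta * entropy(probs)

# An over-confident prediction earns a smaller entropy bonus than an
# uncertain one, so its penalized loss is relatively larger.
confident = [0.98, 0.01, 0.01]
uncertain = [0.40, 0.35, 0.25]
```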
Regarding Claim 4, the rejection of Claim 1 is incorporated. Li-1 teaches:
wherein the determining of the confidence comprises:
determining the confidence based on a reference loss determined based on another training input and the determined prediction loss (Li-1, Fig. 5, "Algorithm 1: DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," line 4, showing for a given training epoch calculation of the per-sample loss of the first model, including the confidence penalty term, as in:

4: W_2 = GMM(X, Y, θ_1)   // model per-sample loss with θ_1 to obtain clean probability for θ_2

where the model parameters for the current epoch had been updated in the prior epoch according to the total loss of lines 26-27:

26: L = L_X + λ_u·L_U + λ_r·L_reg   // total loss
27: θ_k = SGD(L, θ_k)   // update model parameters

," where Li's total loss corresponds to the instant reference loss); and
updating the reference loss based on the determined prediction loss (Li-1, Fig. 5, "Algorithm 1: DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," line 26, showing calculation of the total loss for a given epoch according to loss values calculated using the first model during the current epoch).
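Li-1's line 4 models per-sample loss with one network to obtain a clean probability for the other. The following sketch conveys the idea using two fixed Gaussian components in place of an EM-fitted GMM (all constants are illustrative assumptions, not Li-1's):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a univariate Gaussian at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def clean_probability(loss, mu_clean=0.2, mu_noisy=2.0, sigma=0.5):
    """Posterior that a per-sample loss belongs to the low-loss ('clean') component."""
    p_clean = gaussian_pdf(loss, mu_clean, sigma)
    p_noisy = gaussian_pdf(loss, mu_noisy, sigma)
    return p_clean / (p_clean + p_noisy)

# Low-loss samples receive a high clean probability; high-loss samples a low one.
w_low = clean_probability(0.3)
w_high = clean_probability(1.9)
```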
Regarding Claim 5, the rejection of Claim 1 is incorporated. Li-1 teaches:
wherein the determining of the prediction loss comprises:
determining a first prediction loss based on first class prediction data obtained by applying the first machine learning model and the class label (Li-1, [0030]: "The method 200 may proceed to block 204, where the processor may perform a warm up process to the two networks to update the model parameters. ... In some examples, a standard cross-entropy loss may be determined as follows:

l(θ) = {l_i}_{i=1}^N = {−Σ_{c=1}^C y_i^c log(p_model^c(x_i; θ))}_{i=1}^N   (1)

where p_model^c is the model's output softmax probability for class c, D = (X, Y) = {(χ_i, y_i)}_{i=1}^N denotes the training data, χ_i is a sample (e.g., an image), y ∈ {0,1}^C is the one-hot label over C classes, and θ denotes the model parameters," where Li's warm up computes according to first model θ_1 and class labels Y in Fig. 5:

2: θ_1, θ_2 = WarmUp(X, Y, θ_1, θ_2)   // standard training (with confidence penalty)

); and
determining a second prediction loss based on second class prediction data obtained by applying the second machine learning model to the training input and the class label (Li-1, [0030] and Fig. 5 as cited for the previous limitation, for second model θ_2 and class labels Y); and
the determining of the confidence comprises:
determining the confidence based on the determined first prediction loss and the determined second prediction loss (Li-1, [0032]: "at block 204, the warm up process may apply a confidence penalty for asymmetric noise, for example, by adding a negative entropy term, −H, to the cross-entropy loss," where Li's confidence penalty term is determined according to the losses of the first and second models).
Regarding Claim 6, the rejection of Claim 1 is incorporated. Li-1 teaches:
wherein the training of the second machine learning model comprises
updating a parameter of the second machine learning model using second class prediction data obtained by applying the second machine learning model to the training input, the class label, and a loss function of the second machine learning model determined based on the determined confidence (Li-1, Fig. 2, step 214, "Train 2nd network while keeping the 1st network fixed" and step 218, "At each batch of the i-th epoch, perform a mix-match training process to update the model parameters of the 2nd network using the labeled and unlabeled training sets" and Fig. 5, "Algorithm 1: DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," line 27:

27: θ_k = SGD(L, θ_k)   // update model parameters

," where Li's loss function L corresponds to the instant loss function of the second model).
Regarding Claim 7, the rejection of Claim 1 is incorporated. Li-1 teaches:
wherein the training of the second machine learning model further comprises
updating a parameter of the second machine learning model using a loss function of the second machine learning model from which a symmetric loss function is excluded (Li-1, Fig. 5, "Algorithm 1: DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," line 27, showing the second model updated when k = 2 according to a total loss comprising additional loss terms:

26: L = L_X + λ_u·L_U + λ_r·L_reg   // total loss
27: θ_k = SGD(L, θ_k)   // update model parameters

," and [0050]: "At block 418, a total loss is generated using the mixed data. The total loss L may include a supervised loss L_x, an unsupervised loss L_u, and a regulation loss L_reg. An example supervised loss includes the cross-entropy loss ... An example unsupervised loss includes a mean squared error ... An example regulation loss may be computed as ...," where Li's unsupervised, mean-squared-error loss corresponds to the instant loss excluding symmetric loss).
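The total loss of Li-1's Fig. 5, line 26 combines the supervised, unsupervised, and regulation terms under weights λ_u and λ_r. A one-function sketch (the numeric loss values and weights below are illustrative assumptions, not Li-1's):

```python
def total_loss(loss_x, loss_u, loss_reg, lambda_u=25.0, lambda_r=1.0):
    """L = L_X + lambda_u * L_U + lambda_r * L_reg, per Li-1, Fig. 5, line 26."""
    return loss_x + lambda_u * loss_u + lambda_r * loss_reg

# Example combination: 0.8 + 25.0 * 0.02 + 1.0 * 0.1
L = total_loss(loss_x=0.8, loss_u=0.02, loss_reg=0.1)
```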
Regarding Claim 9, the rejection of Claim 1 is incorporated. Li-1 teaches:
wherein the training of the second machine learning model comprises:
relabeling the training input with a class label based on the determined confidence being less than or equal to a threshold (Li-1, Fig. 5, "Algorithm 1: DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," where the instant relabeling corresponds to Li-1's refining at line 18, "refine ground-truth label guided by the clean probability produced by the other network," where the probability threshold of line 1, Input: "clean probability threshold τ," is used to determine the training batches at lines 7-8, and where the threshold comparison is described at [0036]: "the labeled training set X_1 includes clean samples (and their labels) each having a clean probability equal to or greater than the clean probability threshold τ. The unlabeled training set U_1 includes dirty samples (without labels) each having a clean probability less than the clean probability threshold τ"); and
training the second machine learning model based on the training input and the class label with which the training input is relabeled (Li-1, Fig. 5, "Algorithm 1: DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," lines 23-27, showing the second model being updated by gradient descent according to losses calculated according to the label refinement of lines 17-21).
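Li-1's co-divide of [0036] separates clean from dirty samples by comparing each clean probability against the threshold τ. A minimal sketch (the sample identifiers, labels, and probabilities are illustrative assumptions, not Li-1's):

```python
def co_divide(samples, clean_probs, tau=0.5):
    """Split (sample, label) pairs into a labeled set (clean probability >= tau)
    and an unlabeled set of samples without their labels (probability < tau)."""
    labeled = [(x, y) for (x, y), w in zip(samples, clean_probs) if w >= tau]
    unlabeled = [x for (x, y), w in zip(samples, clean_probs) if w < tau]
    return labeled, unlabeled

samples = [("img0", 0), ("img1", 1), ("img2", 2)]
labeled, unlabeled = co_divide(samples, [0.95, 0.10, 0.60], tau=0.5)
```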
Regarding Claim 11, the rejection of Claim 9 is incorporated. Li-1 teaches:
wherein the relabeling of the training input with the class label comprises:
determining a threshold confidence based on a number of times the training input is relabeled (Li-1, Fig. 5, Algorithm 1, "DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," line 3, "while e < MaxEpoch do," line 4, "// model per-sample loss with θ_1 to obtain clean probability for θ_2," and line 7, "labeled training set for θ_k," where the probability of line 4 used at line 7 is determined only when the number of iterations of the entire block, including re-labeling, has happened fewer than MaxEpoch times); and
relabeling the training input with the class label in response to the determined confidence being less than or equal to the determined threshold confidence (Li-1, Fig. 5, Algorithm 1, "DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," line 8, "unlabeled training set for θ_k," where the unlabeled training set is determined according to probability less than τ, and line 20, "co-guessing: average the predictions from both networks across augmentations of u_b").
Regarding Claim 12, the rejection of Claim 9 is incorporated. Li-1 teaches:
wherein the relabeling of the training input with the class label comprises relabeling the training input based on the class prediction data obtained using the first machine learning model (Li-1, Fig. 3, step 306, depicting both model A and model B training on training samples refined by the other model, and [0042]: "The dataflow 300 further includes a current epoch unit 306 coupled to the co-divide unit 304. In the current epoch unit 306, at each batch (also referred to as mini-batch) of the epoch e, each of first network A and second network B performs semi-supervised training using a mix-match method. During the mix-match method, label co-refinement on the labeled samples and label co-guessing are performed on the unlabeled samples, where co-refinement and co-guessing use information from both first network A and second network B").
Claims 16-20 incorporate substantively all the limitations of Claims 3-5, 7, and 9, respectively, in electronic apparatus form and are rejected under the same rationales.
Regarding Claim 21, Li-1 teaches:
A processor-implemented method (Li-1, [0024]: "In some examples, memory 120 may include non-transitory, tangible, machine readable media that includes executable code that when run by one or more processors (e.g., processor 110) may cause the one or more processors to perform the counting methods described in further detail herein"), the method comprising:
determining class prediction data by applying a trained second machine learning model to input data (Li-1, Fig. 5, "Algorithm 1: DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," lines 6-29, when k = 2, and where model two is applied to batch data at lines 17 and 20); and
classifying the input data based on the determined class prediction data (Li-1, Fig. 5, "Algorithm 1: DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," lines 6-29, when k = 2, where model two is applied to batch data at lines 17 and 20, and where ŷ_b and q_b correspond to the model's labels), wherein the second machine learning model is trained by determining a prediction loss based on training class prediction data obtained by applying a first machine learning model to a training input and a class label with which the training input is labeled (Li-1, Fig. 5, "Algorithm 1: DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," lines 7-8, where the training sets for model two are determined according to the probability of line 4, determined according to the first model θ_1, and where loss is calculated at line 26).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Li, et al. (US 2021/0089883 A1, hereinafter "Li-1") in view of Ghosh, et al., "Robust Loss Functions under Label Noise for Deep Neural Networks" (hereinafter "Ghosh").
Regarding Claim 2, the rejection of Claim 1 is incorporated. Li-1 teaches:
wherein the first machine learning model is trained using a symmetric loss function used to determine a sum of values of the symmetric loss function ... (Li-1, [0031]: "while the warm up process using the standard cross-entropy loss as computed using equation (1) may be effective for symmetric (e.g., uniformly random) label noise, such a warm up process may not be effective for asymmetric (e.g. class-conditional) label noise," where Li's cross-entropy loss corresponds to the instant symmetric loss, and where Li's Eq. 1 above calculates loss according to a sum of values), in which the values are determined in response to a prediction that a training input is classified as each of a plurality of classes (Li-1, [0030]: "A cross-entropy loss l(θ) may be used to indicate how well the model fits the training samples. In some examples, a standard cross-entropy loss may be determined as ... [Eq. 1] ... where p_model^c is the model's output softmax probability for class c, D = (X, Y) = {(χ_i, y_i)}_{i=1}^N denotes the training data, χ_i is a sample (e.g., an image), y ∈ {0,1}^C is the one-hot label over C classes, and θ denotes the model parameters," where Li's first of two models corresponds to the instant first model, and where Li's cross-entropy loss is calculated as a sum over all N training inputs).
Li-1 teaches the first machine learning model is trained using a symmetric loss function used to determine a sum of values of the symmetric loss function.
Li does not explicitly teach determine a sum of values of the symmetric loss function as a constant.
However, Ghosh teaches:
determine (Ghosh, p. 1923, Empirical Results: "we illustrate the robustness of symmetric loss functions. We present results with two image data sets and four text data sets. In each case we learn a neural network classifier using the CCE [categorical cross entropy], MSE [mean square error] and MAE [mean absolute error] loss functions. We add symmetric or class conditional noise with different noise rates to the training set" and p. 1922, Some Loss Functions for Neural Networks: "among these, only MAE satisfies symmetry condition given by Eq.(2)") a sum of values of the symmetric loss function as a constant (Ghosh, p. 1921, Theoretical Results: "We call a loss function L symmetric if it satisfies, for some constant C,

Σ_{i=1}^k L(f(x), i) = C,  ∀x ∈ X, ∀f.   (2)

").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Li regarding the first machine learning model is trained using a symmetric loss function used to determine a sum of values of the symmetric loss function with those of Ghosh regarding determine a sum of values of the symmetric loss function as a constant.
The motivation to do so would be to facilitate training multiclass classifiers robust against label noise (Ghosh, p. 1919, Abstract: "we provide some sufficient conditions on a loss function so that risk minimization under that loss function would be inherently tolerant to label noise for multiclass classification problems. ... We study some of the widely used loss functions in deep networks and show that the loss function based on mean absolute value of error is inherently robust to label noise. Thus standard back propagation is enough to learn the true classifier even under label noise" and p. 1923, Results and Discussion: "As the graphs in Fig. 1(a)-(c) show, MAE loss is highly robust to symmetric label noise").
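Ghosh's symmetry condition of Eq. (2) requires the loss, summed over all k possible labels, to equal a constant C for every prediction; MAE satisfies it with C = 2(k − 1). The following sketch verifies that property for one probability vector (the vector itself is an illustrative assumption):

```python
def mae_loss(probs, label_index):
    """Mean-absolute-error loss between a softmax output and a one-hot label."""
    return sum(abs(p - (1.0 if c == label_index else 0.0)) for c, p in enumerate(probs))

# Summing the MAE loss over all k = 3 possible labels yields 2 * (k - 1) = 4
# regardless of the prediction, satisfying Ghosh's symmetry condition (2).
probs = [0.7, 0.2, 0.1]
total = sum(mae_loss(probs, i) for i in range(len(probs)))
```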
Claim 15 incorporates substantively all the limitations of Claim 2 in electronic apparatus form and is rejected under the same rationale.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Li, et al. (US 2021/0089883 A1, hereinafter "Li-1") in view of Yao, et al., "Jo-SRC: A Contrastive Approach for Combating Noisy Labels" (hereinafter "Yao").
Regarding Claim 8, the rejection of Claim 1 is incorporated. Li-1 teaches:
updating a parameter of the first machine learning model using a loss function of the first machine learning model determined based on ... parameters of the first machine learning model and the second machine learning model (Li-1, Fig. 5, "Algorithm 1: DivideMix. Line 4-8: co-divide; Line 17-18: label co-refinement; Line 20: label co-guessing," lines 4-5, showing probability parameters for models one and two being calculated according to the other model's parameters, line 7, showing a training batch for each model being determined according to the model's probability parameter, and line 27, showing each model being updated according to a loss calculated using the training batch).
Li-1 teaches updating a parameter of the first machine learning model using a loss function of the first machine learning model determined based on parameters of the first machine learning model and the second machine learning model.
Li does not explicitly teach a loss function ... determined based on a difference between parameters of the first machine learning model and the second machine learning model.
However, Yao teaches:
a loss function ... determined based on a difference between parameters of the first machine learning model and the second machine learning model (Yao, p. 5196, Algorithm 1: Jo-SRC, line 12, "Update θ ← θ − η∇L" and line 17, "Update θ_mt," where the loss function L of Yao's model θ is based on the label produced by the parameters of the mean-teacher model, as in p. 5195, 3.3. Label reassignment: "given an ID sample x_i, its pseudo label distribution is provided as:

ỹ_i^c = p_c(x_i, θ_mt)   (6)

where θ_mt denotes parameters of the mean-teacher model," and where the mean-teacher model is updated according to a decay-rate difference, as in p. 5195, 3.3. Label reassignment: "parameters θ_mt is an exponential moving average of θ. Specifically, given a decay rate ω ∈ (0,1), θ_mt is updated in each training step as follows:

θ_mt ← ω·θ_mt + (1 − ω)·θ   (8)

").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Li regarding updating a parameter of the first machine learning model using a loss function of the first machine learning model determined based on parameters of the first machine learning model and the second machine learning model with those of Yao regarding a loss function determined based on a difference between parameters of the first machine learning model and the second machine learning model.
The motivation to do so would be to facilitate semi-supervised training of classification models using noisy labels to achieve improved accuracy and higher generalization (Yao, p. 5195, 3.3. Label reassignment: "For samples in ID subset S_id, inspired by the mean-teacher model [35], we use the temporally averaged model (i.e. mean-teacher model) to generate reliable pseudo label distributions for providing supervision" and p. 5193, 1. Introduction: "ID and OOD noisy samples are re-labeled by a mean-teacher model before they are back-propagated for updating network parameters. ... We propose a simple yet effective contrastive approach named Jo-SRC to alleviate the negative effect of noisy labels. Jo-SRC trains the network with a joint loss, including a cross-entropy term and a consistency term, to obtain higher classification and generalization performance").
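Yao's mean-teacher update of Eq. (8) maintains θ_mt as an exponential moving average of the student parameters θ. A minimal sketch (the parameter vectors and decay rate below are illustrative assumptions, not Yao's):

```python
def ema_update(theta_mt, theta, omega=0.999):
    """Mean-teacher update theta_mt <- omega * theta_mt + (1 - omega) * theta (Yao, Eq. 8)."""
    return [omega * tm + (1 - omega) * t for tm, t in zip(theta_mt, theta)]

# One training step with decay rate 0.9: the teacher moves a small
# fraction of the way toward the student parameters.
teacher = [0.0, 1.0]
student = [1.0, 1.0]
teacher = ema_update(teacher, student, omega=0.9)
```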
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Li, et al. (US 2021/0089883 A1, hereinafter "Li-1") in view of Li, et al. (US 2021/0374553 A1, hereinafter "Li-2").
Regarding Claim 10, the rejection of Claim 9 is incorporated.
Li-1 teaches training the second machine learning model based on the training input and the class label with which the training input is relabeled.
Li-1 does not explicitly teach wherein the relabeling of the training input with the class label comprises relabeling the training input with the class label based on a user input for relabeling the training input.
However, Li-2 teaches:
wherein the relabeling of the training input with the class label comprises relabeling the training input with the class label based on a user input for relabeling the training input (Li-2, Fig. 5C, step 528, "Generate an Updated Training Set by Replacing the Noisy Labels with the Pseudo-Labels of the Training Samples" and [0058]: "Method 500 starts with step 502, at which a training set of data samples may be obtained, each data sample having a noisy label. For example, the training set of data samples may be received as part of input 340 via the data interface 315 in FIG. 3" and [0049]: "the noise-robust contrastive learning module 330 may be used to receive and handle the input 340 via a data interface 315. For example, the input 340 may include an image uploaded by a user via a user interface, a dataset of training images received via a communication interface, etc.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Li-1 regarding training the second machine learning model based on the training input and the class label with which the training input is relabeled with those of Li-2 regarding wherein the relabeling of the training input with the class label comprises relabeling the training input with the class label based on a user input for relabeling the training input.
The motivation to do so would be to facilitate usage of the model in evaluation of a system or model (Li-2, [0049]: "the input 340 may include an image uploaded by a user via a user interface, a dataset of training images received via a communication interface, etc. The noise-robust contrastive learning module 330 may generate an output 350, e.g., such as a class label corresponding to the input image. In some examples, the noise-robust contrastive learning module 330 may also handle the iterative training and/or evaluation of a system or model").
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Li, et al., "Dividemix: Learning with noisy labels as semi-supervised learning," teaches a framework for semi-supervised learning with noisy labels by modeling the loss distribution to divide training data into labeled and unlabeled sets, and training the model using the sets.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT N DAY whose telephone number is (703)756-1519. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.N.D./Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122