Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to under 37 CFR 1.84(a)(1). Black and white drawings must be submitted because the original drawings submitted on 3/10/2023 are in grayscale.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 8, 11, 12, and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 2, the phrase "the training example is set higher the more strongly the respective classification scores" renders the claim indefinite because the scope of the claim is unascertainable. It is unclear what "more strongly" means, and one of ordinary skill in the art would not know what degree of spread or deviation qualifies as "more strongly." In addition, the claim recites "and/or" in the limitation "the training example spread and/or deviate from classification scores for the training example," further rendering the claim indefinite.
Regarding claim 8, the limitations below use and/or:
objects recognized in the image, and/or features, defects, or damages recognized in the image,
and/or an overall rating of a scenery shown in the image;
and/or a quality rating of a finished product shown in the image.
The use of "and/or" renders the claim indefinite because the scope of the claim is unascertainable. The "or" alternative may encompass any or all of the limitations stated above, so "and/or" does not make clear which limitation(s) the classes of the predetermined classification must represent.
Regarding claim 11, the phrase "a more balanced distribution of numbers of the training examples" renders the claim indefinite because the scope of the claim is unascertainable. It is unclear what would constitute a "more balanced" distribution, as the claim provides no standard of comparison.
Regarding claim 12, the limitation "wherein the classifier is re-trained and/or further trained" uses "and/or," rendering the claim indefinite because the scope of the claim is unascertainable. The "or" alternative can include the classifier being both re-trained and further trained, so it is not clear when the classifier is trained with the new, reduced training data set.
Regarding claim 13, the limitations below use and/or:
controlling, using the control signal: a vehicle,
and/or an area monitoring system
and/or a quality control system
and/or a medical imaging system.
The use of "and/or" renders the claim indefinite because the scope of the claim is unascertainable. The "or" alternative may encompass any or all of the limitations stated above, and "and/or" does not make clear which system(s) are controlled using the control signal, since the limitations specify different uses.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims do not recite additional elements that amount to significantly more than the judicial exception. The subject matter eligibility test for products and processes is described below for claim 1 in view of the dependent claims.
Regarding claim 1:
Step 1: Is the claim to a process, machine, manufacture, or composition of matter?
Yes – Claim 1 recites a method and that falls under the statutory categories.
Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes – The claim recites the following:
“A method for prioritizing training examples in a training data set for a classifier” - This limitation of claim 1 recites a mental process of prioritizing training examples (see MPEP 2106.04(a)(2)III).
“determining a priority of the training example to which the modifications belong from a distribution of the respective classification scores.” - This limitation of claim 1 recites a mental process of determining a priority (see MPEP 2106.04(a)(2)III).
Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a particular application? No –
The claim includes the additional element(s):
“configured to map measurement data to classification scores with respect to classes of a predetermined classification, comprising the following steps:”
The additional elements fall under “apply it” as using a generic computer to implement a classifier (see MPEP 2106.05(f)).
“training the classifier with the training examples from the training data set;”
The additional elements fall under “apply it” as using a generic computer to train the classifier. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
“generating modifications for at least one training example of the training examples;”
The additional elements fall under “apply it” as using a generic computer to generate modifications for at least one training example. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
“determining respective classification scores for the modifications using the classifier;”
The additional elements fall under “apply it” as using a generic computer to use the classifier to determine respective classification scores. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As an ordered whole, the claim is directed to prioritizing training examples to train a classifier. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of determining, generating, and training fall under using a generic computer to apply the exception. The method does not improve the functioning of a computer, does not transform an article into a different state or thing, and is not applied by a particular machine, making the claim not patent eligible.
Regarding claim 2:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 1, wherein the priority of the training example is set higher the more strongly the respective classification scores determined for the modifications of the training example spread and/or deviate from classification scores for the training example.”
The additional elements fall under “apply it” as using a generic computer to set the priority of a training example higher the more strongly the classification scores spread or deviate. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
Regarding claim 3:
Step 2A Prong 1:
“The method according to claim 1, wherein the priority is set using an entropy determined from the respective classification scores for the modifications.” – The limitations recites a mathematical calculation for entropy to determine priority (see MPEP 2106.04(a)(2)I).
Step 2A Prong 2, Step 2B: The additional element(s):
No additional elements. The judicial exception is not integrated into a practical application and does not provide an improvement. The claim does not provide an inventive concept or a practical application.
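Examiner Note (illustrative only): an entropy-based priority of the kind recited in claim 3 is a routine mathematical calculation. A minimal sketch, assuming the priority is the Shannon entropy of the mean classification scores over an example's modifications:

```python
import math

def entropy_priority(score_sets):
    """Shannon entropy of the mean classification scores taken over the
    modifications of one training example (illustrative formulation)."""
    n_classes = len(score_sets[0])
    mean = [sum(s[c] for s in score_sets) / len(score_sets)
            for c in range(n_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0)

# Consistent, confident scores across modifications -> low entropy ...
low = entropy_priority([[0.98, 0.01, 0.01], [0.97, 0.02, 0.01]])
# ... scores spread across classes -> high entropy (high priority).
high = entropy_priority([[0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.2, 0.1, 0.7]])
```

The uniform distribution attains the maximum entropy log(n_classes), so an example whose modifications are scored uniformly across classes receives the highest priority under this formulation.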
Regarding claim 4:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 1, wherein the training of the classifier and the generation of the modifications are coordinated with one another such that modifying the training example alters at least one characteristic of the training example with respect to which the trained classifier is not invariant.”
The additional elements fall under “apply it” as using a generic computer to coordinate the generation of the modifications with the training of the classifier. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
Regarding claim 5:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 1, wherein at least one of the modifications is generated using a generative model conditioned to the training example.”
The additional elements fall under “apply it” as using a generic computer to use a generative model. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
Regarding claim 6:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 1, wherein at least one of the modifications is generated by: converting the training example into a representation with reduced dimensionality using an encoder of an encoder/decoder assembly trained as an autoencoder, drawing a sample from a neighborhood of the representation, and converting the sample into the modification using a decoder of the encoder/decoder assembly.”
The additional elements fall under “apply it” as using a generic computer to implement an autoencoder to convert a training example into a reduced-dimensionality representation, draw a sample, and convert the sample into the modification using a decoder. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
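Examiner Note (illustrative only): the encode–sample–decode sequence recited in claim 6 can be implemented with generic computation. The linear encoder/decoder below is an assumed stand-in for a trained autoencoder, used only to illustrate the three recited steps:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained autoencoder: a linear encoder E reducing 6-D
# data to a 2-D representation, and a pseudo-inverse decoder D.
E = rng.standard_normal((6, 2)) * 0.5
D = np.linalg.pinv(E)

def modify(example, radius=0.05):
    z = example @ E                                       # 1) encode into reduced dimensionality
    z_sample = z + radius * rng.standard_normal(z.shape)  # 2) draw a sample from a neighborhood
    return z_sample @ D                                   # 3) decode the sample into the modification

x = rng.standard_normal(6)
m = modify(x)
```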
Regarding claim 7:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 1, wherein the training examples include images.”
The additional elements fall under Insignificant Extra-Solution Activity. See MPEP 2106.05(g).
The judicial exception is not integrated into a practical application and does not provide an improvement. The claim does not provide an inventive concept or a practical application.
Regarding claim 8:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 7, wherein the classes of the predetermined classification represent: objects recognized in the image, and/or features, defects, or damages recognized in the image, and/or an overall rating of a scenery shown in the image; and/or a quality rating of a finished product shown in the image.”
The additional elements fall under Insignificant Extra-Solution Activity. See MPEP 2106.05(g).
The judicial exception is not integrated into a practical application and does not provide an improvement. The claim does not provide an inventive concept or a practical application.
Regarding claim 9:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 1, wherein training examples of the training data set are selected using the determined priorities, and the selected training examples are included in a new, reduced training data set.”
The additional elements fall under “apply it” as using a generic computer to select training examples using the determined priorities. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
The judicial exception is not integrated into a practical application and does not provide an improvement. The claim does not provide an inventive concept or a practical application.
Regarding claim 10:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 9, wherein: training examples of the training data set whose priorities satisfy a predetermined criterion are selected,”
The additional elements fall under “apply it” as using a generic computer to select training examples using a predetermined criterion. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
“further training examples are randomly selected from the training data set and are included in the new, reduced data set.”
The additional elements fall under Insignificant Extra-Solution Activity as mere data gathering to create the new, reduced data set. See MPEP 2106.05(g).
The judicial exception is not integrated into a practical application and does not provide an improvement. The claim does not provide an inventive concept or a practical application.
Regarding claim 11:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 9, wherein a more balanced distribution of numbers of the training examples over the available classes is established in the new, reduced training data set than is present in the training data set.”
The additional elements fall under Insignificant Extra-Solution Activity. See MPEP 2106.05(g).
The judicial exception is not integrated into a practical application and does not provide an improvement. The claim does not provide an inventive concept or a practical application.
Regarding claim 12:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 9, wherein the classifier is re-trained and/or further trained with the new, reduced training data set.”
The additional elements fall under “apply it” as using a generic computer to re-train or further train the classifier using the new, reduced training data set. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
The judicial exception is not integrated into a practical application and does not provide an improvement. The claim does not provide an inventive concept or a practical application.
Regarding claim 13:
Step 2A Prong 2, Step 2B: The additional element(s):
“The method according to claim 12, further comprising: supplying measurement data recorded by at least one sensor to the re-trained and/or further trained classifier;”
The additional elements fall under Insignificant Extra-Solution Activity as mere data gathering by supplying measurement data. See MPEP 2106.05(g).
“determining a control signal from output of the classifier;”
The additional elements fall under “apply it” as using a generic computer to determine a control signal from the output of the classifier. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
“controlling, using the control signal: a vehicle, and/or an area monitoring system and/or a quality control system and/or a medical imaging system.”
The additional elements fall under “apply it” as using a generic computer to control a device using the control signal. See Mere Instructions to Apply an Exception (see MPEP 2106.05(f)).
The judicial exception is not integrated into a practical application and does not provide an improvement. The claim does not provide an inventive concept or a practical application.
Claim 14 recites a computer-readable medium product and is analogous to the method of claim 1. Therefore, the rejection of claim 1 above applies to claim 14.
Claim 15 recites a system and is analogous to the method of claim 1. Therefore, the rejection of claim 1 above applies to claim 15.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-5, 7-9, 12, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Thyagharajan et al. (20220084310A1) (“Thyagharajan”) in view of Kong, Quan, et al. "Active generative adversarial network for image classification." Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019 (“Kong”).
Regarding claim 1 and analogous claims 14 and 15, Thyagharajan teaches a method for prioritizing training examples in a training data set for a classifier configured to map measurement data to classification scores with respect to classes of a predetermined classification, comprising the following steps (Thyagharajan Para 0030, FIG. 1 is an example overview flow for applying a model's self-confidence to improve training of the model. The initial computer model 100 (and computer model 120) is a multi-class (or multi-label) classification model that outputs a prediction for each of several classifications. For computer vision problems, for example, the multi-label classification may generate a prediction for each of several different types of objects, such as a chair, ball, lamp, car, and so forth. As used herein, "self-confidence" refers to a computer model's estimated prediction of a highest-scoring ( or highest-predicted) class and its relative certainty as measured by the model's estimated likelihood that the input could be another class.
Para 0032 line 17-24, In other embodiments, the initial training period may use a designated percentage or amount of the available training data as the initial training data 110. In general, the initial training period trains the parameters of the initial computer model 100 such that the computer model's predicted classes have learned substantially from the initial training data and are improved relative to initialization of the computer model [with respect to classes of a predetermined classification,].
Para 0041, In the embodiment shown in FIG. 4, the training space is analyzed to identify classes to include or remove in a modified training space 440. In one embodiment, the regions associated with each class is identified, and the number ( or a proportion) of regions having low confidence scores is determined and used to modify the training space 400. Classes with more or fewer regions of relatively high or relatively low confidence may be identified and used to generate 430 the modified training space 440 and focus training on classes having more regions of low confidence
Para 0065, FIG. 7 shows an example flow for training a multi-label classifier using self-confidence according to one embodiment. As shown in FIG. 7, the computer model may be initially trained 700 during an initial training period to initialize the class predictions to be further refined based on the model's confidence. To further train the model, a training space including regions is identified 710 and the model is applied to the regions to determine class predictions and a respective confidence score for each region. Using the confidence scores, the model may be further trained a subsequent period based on the confidence scores as discussed in detail above. As two examples, a training space may be modified 732 to include low-confidence region subsets and may exclude high-confidence region subsets from the modified training space, such that the modified training space may emphasize further training in areas of low confidence [A method for prioritizing training examples in a training data set for a classifier configured to map measurement data to classification scores ]):
training the classifier with the training examples from the training data set (Para 0032 line 14-20 In one embodiment, the initial training period continues until the model parameters have stabilized and are not significantly changing after each batch of training. In other embodiments, the initial training period may use a designated percentage or amount of the available training data as the initial training data 110.
Para 0034 line 1-3, FIG. 2 shows an example class prediction for a region of a space by a multi-label classification computer model [training the classifier], according to one embodiment. As shown in this example, a space 200 is a three-dimensional space in which multiple objects are included. The space 200 may be represented as a pointcloud, individual voxels, and any other suitable representation. As noted above, spaces may include two- or three-dimensional areas according to the configuration of the computer model and its application. As noted above, the computer model 220 receives a region 210 of the input for evaluation. For example, the model may attempt to label or classify individual pixels, voxels, or other discrete spaces [with the training examples from the training data set].);
Thyagharajan does not explicitly teach generating modifications for at least one training example of the training examples;
determining respective classification scores for the modifications using the classifier;
and determining a priority of the training example to which the modifications belong from a distribution of the respective classification scores.
However, Kong teaches generating modifications for at least one training example of the training examples (Kong Page 4093, Given a set of labeled images {(x1, y1), …, (xN, yN)}, the AC-GAN model is used to generate labeled samples with the inputs of both a noise latent variable and a one-hot representation of a class label. In the AC-GAN, the generator G generates a synthetic sample x̂i = G(z, yi) [generating modifications] with the noise latent vector z and a label yi [at least one training example of the training examples]);
determining respective classification scores for the modifications using the classifier (Kong page 4092, [figure image: media_image1.png, grayscale] [using the classifier]
page 4093, In this subsection, we discuss how the degree of uncertainty is measured in the proposed model. Among the samples generated by the AC-GAN model, only informative samples might be able to contribute to improving classification performance. In the area of active learning, uncertainty sampling is the most widely used query strategy. The intuition behind uncertainty sampling is that if a sample is highly uncertain with a hyper-plane of a classifier, obtaining its label will improve the degree of discrimination among classes. In other words, this sample is considered to be informative in improving the classification performance. In our model, we use SVM as the classifier. In our paper, we mainly use two metrics based on the label probabilities to measure the uncertainty of a sample.
Page 4093, Loss on uncertainty para 2, Policy gradient (Sutton et al. 2000) has been successfully applied in reinforcement learning to learn an optimal policy. As one target of this work is to guide the generator to synthesize informative samples, we regard the degree of uncertainty and the generated samples as reward and action, respectively. In general, the higher the degree of uncertainty is, the higher the reward is obtained. If a generated sample has a high degree of uncertainty, this sample is encouraged to be generated with a high probability. To the best of our knowledge, we are the first to use the idea from policy gradient to model the degree of uncertainty in active learning [determining respective classification scores for the modifications]);
and determining a priority of the training example to which the modifications belong from a distribution of the respective classification scores (Kong Page 4091, [figure image: media_image2.png, grayscale] [Examiner Note: The discriminator determines the probability class distribution, and the real/fake results determine the adversarial loss that affects the generator to produce different modifications.]
Page 4093, Loss on uncertainty para 2, Policy gradient (Sutton et al. 2000) has been successfully applied in reinforcement learning to learn an optimal policy. As one target of this work is to guide the generator to synthesize informative samples, we regard the degree of uncertainty and the generated samples as reward and action, respectively. In general, the higher the degree of uncertainty is, the higher the reward is obtained. If a generated sample has a high degree of uncertainty, this sample is encouraged to be generated with a high probability. To the best of our knowledge, we are the first to use the idea from policy gradient to model the degree of uncertainty in active learning [and determining a priority of the training example]).
Thyagharajan and Kong are considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Thyagharajan in view of Kong to incorporate generating samples and modifying the samples. Doing so would improve learning with limited labels without the need for human involvement (Kong page 4090 Abstract, Sufficient supervised information is crucial for any machine learning models to boost performance. However, labeling data is expensive and sometimes difficult to obtain. Active learning is an approach to acquire annotations for data from a human oracle by selecting informative samples with a high probability to enhance performance. In recent emerging studies, a generative adversarial network (GAN) has been integrated with active learning to generate good candidates to be presented to the oracle. In this paper, we propose a novel model that is able to obtain labels for data in a cheaper manner without the need to query an oracle. In the model, a novel reward for each sample is devised to measure the degree of uncertainty, which is obtained from a classifier trained with existing labeled data. This reward is used to guide a conditional GAN to generate informative samples with a higher probability for a certain label. With extensive evaluations, we have confirmed the effectiveness of the model, showing that the generated samples are capable of improving the classification performance in popular image classification tasks.
Introduction para 1, Machine learning models including traditional ones and new emerging deep neural networks require sufficient supervised information, i.e., class labels, to achieve fair performance. In situations in which labeled data is expensive or difficult to obtain, these models degenerate in performance. Active learning (Settles 2009a) is proposed for handling such a problem. It aims to find the best approach to leverage a limited number of labeled data and to reduce the cost of data annotation. Active learning selects informative samples from a pool of unlabeled data and obtains their labels by involving a human oracle. In this paper, we investigate the problem of lack of labeled data from a new and different perspective. We propose a model to improve learning performance, which is able to make use of limited labeled data without using any additional unlabeled data nor involving any human oracle to acquire labels.
As to a classification model, informative samples are those that are able to better contribute to improving classification performance than other samples).
Regarding claim 2, as best understood based on the 112(b) issues mentioned above, Thyagharajan and Kong teach the method according to claim 1.
Thyagharajan and Kong are combined with the same rationale used in claim 1 and analogous claims 14 and 15.
Kong further teaches wherein the priority of the training example is set higher the more strongly the respective classification scores determined for the modifications of the training example spread and/or deviate from classification scores for the training example (Kong page 4093 Measure of uncertainty, In this subsection, we discuss how the degree of uncertainty is measured in the proposed model. Among the samples generated by the AC-GAN model, only informative samples might be able to contribute to improving classification performance. In the area of active learning, uncertainty sampling is the most widely used query strategy. The intuition behind uncertainty sampling is that if a sample is highly uncertain with a hyper-plane of a classifier, obtaining its label will improve the degree of discrimination among classes. In other words, this sample is considered to be informative in improving the classification performance. In our model, we use SVM as the classifier. In our paper, we mainly use two metrics based on the label probabilities to measure the uncertainty of a sample [determined for the modifications of the training example spread and/or deviate from classification scores for the training example.].
Smallest Margin: Margin sampling is an uncertainty sampling method for the multi-class case (Settles 2009a), which is defined as [equation image: media_image3.png, grayscale], where y1′ and y2′ are the first and second most probable class labels of a generated sample x̂i under the specified classifier, respectively. Intuitively, samples with large margins are easy, since the classifier has little doubt in differentiating between the two most likely class labels. Samples with small margins are more ambiguous; thus knowing the true label will help the model to discriminate more effectively between them [wherein the priority of the training example is set higher the more strongly the respective classification scores].
Label Entropy: A more general uncertainty sampling strategy uses the entropy of posterior probabilities over class labels. In smallest margin, posterior probabilities of labels other than the two most probable class labels are simply ignored. To mitigate this problem, the entropy over all class labels is used, which is formulated as [equation image: media_image4.png, grayscale]).
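Examiner Note (illustrative only): the two Kong uncertainty metrics quoted above appear in the record only as equation images; the sketches below use the standard smallest-margin and label-entropy formulations from the active-learning literature, which are assumed to match the imaged equations:

```python
import math

def smallest_margin(probs):
    """Margin uncertainty: gap between the two most probable class labels.
    A smaller margin means a more ambiguous (more informative) sample."""
    top = sorted(probs, reverse=True)
    return top[0] - top[1]

def label_entropy(probs):
    """Entropy uncertainty over all class labels."""
    return -sum(p * math.log(p) for p in probs if p > 0)

easy = [0.9, 0.05, 0.05]   # the classifier is confident
hard = [0.4, 0.35, 0.25]   # the two top labels are nearly tied
```

Under both metrics the "hard" score vector is ranked as more uncertain than the "easy" one: its margin is smaller and its entropy is larger.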
Regarding claim 3, Thyagharajan and Kong teach the method according to claim 1.
Thyagharajan and Kong are combined with the same rationale used in claim 1 and analogous claims 14 and 15.
Kong further teaches wherein the priority is set using an entropy determined from the respective classification scores for the modifications (Kong page 4093, Loss on uncertainty, [equation image: media_image5.png, grayscale] [wherein the priority is set using an entropy] [equation image: media_image6.png, grayscale] [determined from the respective classification scores for the modifications]).
Regarding claim 4, Thyagharajan and Kong teach the method according to claim 1.
Thyagharajan and Kong are combined with the same rationale used in claim 1 and analogous claims 14 and 15.
Kong teaches wherein the training of the classifier and the generation of the modifications are coordinated with one another such that modifying the training example alters at least one characteristic of the training example with respect to which the trained classifier is not invariant (Kong page 4091, Preliminary, para 1 lines 17-21, In the original GAN model only z is used to generate samples. In a variation called conditional GAN (CGAN) (Mirza and Osindero 2014), a condition yi, which is a class label of xi, is included in addition to z to control the sample generation. The objective function becomes
[media_image7.png: equation image (greyscale), the CGAN objective function]
where yi could be a one-hot representation of the class label. During training of the CGAN model, yi is used to instruct the generator G to synthesize samples for this given class [wherein the training of the classifier and the generation of the modifications are coordinated with one another such that modifying the training example alters at least one characteristic of the training example].
Page 4094, However, the generator does not directly provide such a probability for each generated sample. Therefore, we have to estimate this probability based on a model with the parameters θ. In our work, we choose a MLP to parameterize the policy. We use a Gaussian distribution over action space, where the covariance matrix was diagonal and independent of the state. The Gaussian MLP maps from the input synthetic image G(z; yi) to the mean μ and standard deviation σ of a Gaussian distribution with the same dimension as z. Thus, the policy is defined by the normal distribution N(μ, e^σ). Then we can compute the likelihood P(G(z; yi) | θ) with μ and σ from the output of the approximated Gaussian MLP. The Gaussian MLP is jointly learned with G and D by policy gradient [example with respect to which the trained classifier is not invariant]).
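(Examiner Note: The Gaussian MLP policy quoted above reduces to evaluating a diagonal-Gaussian log-likelihood of a vector; the following is a minimal sketch under that assumption, with the mean and log standard deviation supplied directly rather than by an MLP.)

```python
import numpy as np

def diag_gaussian_log_likelihood(z, mu, log_sigma):
    """Log-density of z under N(mu, diag(exp(log_sigma))^2).

    In Kong's setup, mu and log_sigma would come from a Gaussian MLP
    applied to the synthetic image G(z, y_i); here they are given directly.
    """
    z, mu, log_sigma = map(np.asarray, (z, mu, log_sigma))
    var = np.exp(2.0 * log_sigma)
    return float(
        -0.5 * np.sum(np.log(2.0 * np.pi * var) + (z - mu) ** 2 / var)
    )

# The likelihood is highest when the vector matches the predicted mean.
mu = np.zeros(4)
log_sigma = np.zeros(4)  # sigma = 1
assert diag_gaussian_log_likelihood(mu, mu, log_sigma) > \
       diag_gaussian_log_likelihood(mu + 2.0, mu, log_sigma)
```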
Regarding claim 5, Thyagharajan and Kong teach the method according to claim 1.
Thyagharajan and Kong are combined with the same rationale used in claim 1 and analogous claims 14 and 15.
Kong further teaches wherein at least one of the modifications is generated using a generative model conditioned to the training example (Kong page 4093, Generation of labeled samples, para 2 lines 1-6, Given a set of labeled images {(x_1, y_1), …, (x_N, y_N)}, the AC-GAN model is used to generate labeled samples with the inputs of both a noise latent variable and a one-hot representation of a class label. In the AC-GAN, the generator G generates a synthetic sample x̂_i = G(z, y_i) with the noise latent vector z and a label y_i [wherein at least one of the modifications is generated using a generative model].
Page 4093, Generation of labeled samples, para 3, The discriminator D is trained to maximize L_D^AC-GAN, and the generator G is trained to maximize L_G^AC-GAN. For the discriminator D, the first two terms in Equation 4 encourage that both real and fake samples are classified correctly [conditioned to the training example]. The last two terms in Equation 4 encourage that both real and fake samples have correct class labels. For the generator G, it is expected that generated samples are classified as fake, and have correct class labels as well.).
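(Examiner Note: The AC-GAN sample-generation step quoted above, x̂_i = G(z, y_i), can be sketched with a toy linear map standing in for the generator G; all dimensions and the generator itself are hypothetical.)

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, LATENT_DIM, IMG_DIM = 3, 8, 16

# Toy stand-in for the generator G: a fixed linear map of [z ; one_hot(y)].
W = rng.standard_normal((LATENT_DIM + NUM_CLASSES, IMG_DIM))

def one_hot(y, n=NUM_CLASSES):
    v = np.zeros(n)
    v[y] = 1.0
    return v

def generate_labeled_sample(y):
    """x_hat_i = G(z, y_i): condition generation on noise z and label y."""
    z = rng.standard_normal(LATENT_DIM)
    x_hat = np.tanh(np.concatenate([z, one_hot(y)]) @ W)
    return x_hat, y  # the sample arrives already labeled

x_hat, label = generate_labeled_sample(2)
assert x_hat.shape == (IMG_DIM,) and label == 2
```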
Regarding claim 7, Thyagharajan and Kong teach the method according to claim 1.
Thyagharajan teaches wherein the training examples include images (Para 0042 line 1-6 and line 37-43, As discussed with respect to FIGS. 1-3, the current parameters of the model are applied to the training space 400 to generate a set of class predictions C and a confidence score f conf for each region i in the training space 400. For simplicity, FIG. 4 shows modified training space using two classes as examples: the "couch" and the "lamp" classes… In many applications, the evaluation of a region (e.g., a voxel or pixel) may include portions of the space near and around the region itself. For example, to successfully classify a region of 3x3 or 5x5 pixels in an image, the context of the surrounding pixels may also be included in the input to the computer model in predicting the region's classification (i.e. the training examples include images)).
Regarding claim 8, as best understood based on the 112(b) issues mentioned above, Thyagharajan and Kong teach the method according to claim 7.
Thyagharajan teaches wherein the classes of the predetermined classification represent: objects recognized in the image, and/or features, defects, or damages recognized in the image, and/or an overall rating of a scenery shown in the image; and/or a quality rating of a finished product shown in the image (Thyagharajan Para 0032 line 17-24, In other embodiments, the initial training period may use a designated percentage or amount of the available training data as the initial training data 110. In general, the initial training period trains the parameters of the initial computer model 100 such that the computer model's predicted classes have learned substantially from the initial training data and are improved relative to initialization of the computer model [wherein the classes of the predetermined classification represent].
Para 0034 line 21-36, As shown in FIG. 2, the region may comprise a small portion of the space 200 as a whole. In this example, the space 200 includes a table, chairs, a toy, a couch, and a lamp. In supervised learning data sets, the space 200 may also have associated labels which the computer model 220 attempts to learn, for example indicating which portions of the space 200 constitute the various objects. As depicted in the example of FIG. 2, the region 210 evaluated by the computer model 220 is a part of the space 200 including parts of both the couch and the lamp in the space 200. When the computer model 220 is applied to this region 210, the class predictions 230 include the highest likelihood that the region should be classified as a couch, followed by a lamp, and then a table. In many computer models, the class predictions 230, as output by the model, represent percentage predictions that together sum to 1.
Para 0042 line 1-6 and line 37-43, As discussed with respect to FIGS. 1-3, the current parameters of the model are applied to the training space 400 to generate a set of class predictions C and a confidence score f conf for each region i in the training space 400. For simplicity, FIG. 4 shows modified training space using two classes as examples: the "couch" and the "lamp" classes… In many applications, the evaluation of a region (e.g., a voxel or pixel) may include portions of the space near and around the region itself. For example, to successfully classify a region of 3x3 or 5x5 pixels in an image, the context of the surrounding pixels may also be included in the input to the computer model in predicting the region's classification [objects recognized in the image]).
Regarding claim 9, Thyagharajan and Kong teach the method according to claim 7.
Thyagharajan teaches wherein training examples of the training data set are selected using the determined priorities, and the selected training examples are included in a new, reduced training data set (Thyagharajan Para 0036, Using the class predictions, the system training the computer model determines 330 a confidence score for each region 340. In one embodiment, each region is assigned one confidence score, yielding a set of N confidence scores for the N regions within the input space. The confidence score may be determined in a variety of ways to reflect the relative certainty/uncertainty of the model in its prediction for a given region. As one example data set, class predictions of [0.60, 0.15, 0.10] as the top-3 class predictions suggest more "confidence" in the highest-predicted class as expressed by the model compared to class predictions of [0.35, 0.30, 0.20]. While in both cases the same class may have been identified as most-likely, the distribution of predicted values in the second example is narrower and suggests the model's prediction of the first class could have more easily been changed by smaller changes in the input, and that there may be an opportunity to focus the model training on learning parameters to more sharply distinguish the classes. Thus, the confidence may be termed a "self-confidence" in that the confidence score can be determined based on the model's predictions as an unsupervised analysis of the class predictions. This also allows for automatic modification of training based on the confidence score without requiring human intervention to analyze or select classes or regions for further analysis
Para 0041, In the embodiment shown in FIG. 4, the training space is analyzed to identify classes to include or remove in a modified training space 440. In one embodiment, the regions associated with each class is identified, and the number ( or a proportion) of regions having low confidence scores is determined and used to modify the training space 400. Classes with more or fewer regions of relatively high or relatively low confidence may be identified and used to generate 430 the modified training space 440 and focus training on classes having more regions of low confidence [wherein training examples of the training data set are selected using the determined priorities]
Para 0042, As discussed further in FIG. 5, the regions associated with the known classes (e.g., the ground truth classes) are grouped into subsets, such that the regions of each type of class/object can be evaluated… The designation to a group may thus determine how the training space is modified with the regions belonging to the class. In one embodiment, the "high-confidence group" regions are removed from the modified training space 440. In addition, the low-confidence group 450 (e.g., the "couch" group) may be included in the modified training space 440; the modified training space 440 thus includes groups of regions with low confidence, while removing other portions of the training space. In an additional configuration, while the high-confidence groups may be removed, the areas around the low-confidence group 450 may be added to the modified training space 440 as padded regions 460 around the low-confidence group 450. In many applications, the evaluation of a region (e.g., a voxel or pixel) may include portions of the space near and around the region itself. For example, to successfully classify a region of 3x3 or 5x5 pixels in an image, the context of the surrounding pixels may also be included in the input to the computer model in predicting the region's classification [and the selected training examples are included in a new, reduced training data set]. (Examiner Note: The system includes in the new training data set areas of low confidence, for further training on low-confidence examples using a reduced training data set.))
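(Examiner Note: The confidence-score selection described above can be sketched as follows, reusing the top-3 prediction vectors quoted from Thyagharajan Para 0036; the threshold value is hypothetical.)

```python
import numpy as np

def confidence_score(class_probs):
    """Self-confidence: gap between the top-1 and top-2 predictions."""
    top2 = np.sort(class_probs)[-2:]
    return float(top2[1] - top2[0])

def reduce_training_set(examples, predictions, threshold=0.2):
    """Keep only low-confidence examples for focused further training."""
    return [
        ex for ex, probs in zip(examples, predictions)
        if confidence_score(probs) < threshold
    ]

examples = ["region_a", "region_b"]
predictions = [[0.60, 0.15, 0.10], [0.35, 0.30, 0.20]]
reduced = reduce_training_set(examples, predictions)
assert reduced == ["region_b"]  # the narrow-margin region is retained
```

The reduced set would then drive re-training and/or further training, as mapped to claim 12 below.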
Regarding claim 12, as best understood based on the 112(b) issues mentioned above, Thyagharajan and Kong teach the method according to claim 9.
Thyagharajan teaches wherein the classifier is re-trained and/or further trained with the new, reduced training data set (Thyagharajan FIG.4,
[media_image8.png: Thyagharajan FIG. 4 (greyscale)]
(Examiner Note: The training space includes multiple samples and the training space is modified to only include samples of low confidence; thus a new, reduced training data set is used for training.)
Para 0065 line 1-17, FIG. 7 shows an example flow for training a multi-label classifier using self-confidence according to one embodiment. As shown in FIG. 7, the computer model may be initially trained 700 during an initial training period to initialize the class predictions to be further refined based on the model's confidence. To further train the model, a training space including regions is identified 710 and the model is applied to the regions to determine class predictions and a respective confidence score for each region. Using the confidence scores, the model may be further trained for a subsequent period based on the confidence scores as discussed in detail above. As two examples, a training space may be modified 732 to include low-confidence region subsets and may exclude high-confidence region subsets from the modified training space, such that the modified training space may emphasize further training in areas of low confidence [wherein the classifier is re-trained and/or further trained]).
Claim(s) 6 are rejected under 35 U.S.C. 103 as being unpatentable over Thyagharajan in view of Kong and further in view of Makhzani, Alireza, and Brendan J. Frey. "PixelGAN autoencoders." Advances in Neural Information Processing Systems 30 (2017) (“Makhzani”).
Regarding claim 6, Thyagharajan and Kong teach the method according to claim 1.
Thyagharajan and Kong are combined with the same rationale used in claim 1 and analogous claims 14 and 15.
Thyagharajan does not explicitly teach wherein at least one of the modifications is generated by: converting the training example into a representation with reduced dimensionality using an encoder of an encoder/decoder assembly trained as an autoencoder, drawing a sample from a neighborhood of the representation, and converting the sample into the modification using a decoder of the encoder/decoder assembly.
However Makhzani teaches wherein at least one of the modifications is generated by: converting the training example into a representation with reduced dimensionality using an encoder of an encoder/decoder assembly trained as an autoencoder, drawing a sample from a neighborhood of the representation (Makhzani page 6 Figure 4,
[media_image9.png: Makhzani Figure 4 (greyscale)]
[converting the training example into a representation with reduced dimensionality using an encoder of an encoder/decoder assembly trained as an autoencoder,]
Page 9, 4 Learning Cross-Domain Relations with PixelGAN Autoencoders, para 3, We can adopt the ODM idea for semi-supervised learning by assuming D1 is the image domain and D2 is the label domain. Independent samples of D1 and D2 correspond to samples from the data distribution p_data(x) and the categorical distribution. The function F = q(y|x) can be parametrized by a neural network that is trained to satisfy the ODM cost function by matching the aggregated distribution q(y) = ∫ q(y|x) p_data(x) dx to the categorical distribution using adversarial training. The few labeled examples are used to further train F to satisfy F(x) = y. However, as explained above, the problem with this method is that the network can learn to generate the categorical distribution by ignoring some part of the input distribution. The AAE solves this problem by adding an inverse mapping from the categorical distribution to the data distribution. However, the main drawback of the AAE architecture is that due to the reconstruction term, the latent representation now has to model all the underlying factors of variation in the image. For example, in the semi-supervised AAE architecture [6], while we are only interested in the one-hot label representation to do semi-supervised learning, we also need to infer the style of the image so that we can have a lossless reconstruction of the image. The PixelGAN autoencoder solves this problem by enabling the encoder to only infer the factor of variation that we are interested in (i.e., label information), while the remaining structure of the input (i.e., style information) is automatically captured by the autoregressive decoder [drawing a sample from a neighborhood of the representation]),
and converting the sample into the modification using a decoder of the encoder/decoder assembly (Makhzani pages 6-7, Figure 6,
[media_image10.png: Makhzani Figure 6 (greyscale)]
2.2 PixelGAN Autoencoders with Categorical Priors para 5
Once the PixelGAN autoencoder is trained, its encoder can be used for clustering new points and its decoder can be used to generate samples from each cluster [using a decoder of the encoder/decoder assembly]. Figure 6 illustrates the samples of the PixelGAN autoencoder trained on the full MNIST dataset. The number of clusters is set to be 30 and each row corresponds to the conditional samples of one of the clusters (only 16 are shown). We can see that the discrete latent code of the network has learnt discrete factors of variation such as class label information and some discrete style information. For example digit 1s are put in different clusters based on how much tilted they are. The network is also assigning different clusters to digit 2s (based on whether they have a loop) and digit 7s (based on whether they have a dash in the middle). In Section 3, we will show that by using the encoder of this network, we can obtain about 5% error rate in classifying digits in an unsupervised fashion, just by matching each cluster to a digit type [and converting the sample into the modification]. (i.e. the decoder generates modified samples based on the sample latent space as clustered by the encoder)).
Thyagharajan and Makhzani are considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Thyagharajan in view of Makhzani to incorporate an autoencoder in classification. Doing so would improve classification in an unsupervised fashion (Makhzani page 7, 2.2 PixelGAN Autoencoders with Categorical Priors, para 5 lines 6-10, For example digit 1s are put in different clusters based on how much tilted they are. The network is also assigning different clusters to digit 2s (based on whether they have a loop) and digit 7s (based on whether they have a dash in the middle). In Section 3, we will show that by using the encoder of this network, we can obtain about 5% error rate in classifying digits in an unsupervised fashion, just by matching each cluster to a digit type.).
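(Examiner Note: The claim 6 sequence, encode to a reduced-dimensionality representation, sample from a neighborhood of that representation, decode, can be sketched with a toy linear encoder/decoder pair; the trained autoencoder is replaced here by fixed random matrices for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)
INPUT_DIM, LATENT_DIM = 12, 3

# Toy stand-in for a trained autoencoder: a linear encoder/decoder pair.
ENC = rng.standard_normal((INPUT_DIM, LATENT_DIM))
DEC = rng.standard_normal((LATENT_DIM, INPUT_DIM))

def modify(example, radius=0.1):
    """Encode, sample from a neighborhood of the code, then decode."""
    code = example @ ENC  # reduced-dimensionality representation
    nearby = code + rng.normal(scale=radius, size=code.shape)  # neighborhood sample
    return nearby @ DEC  # decoded modification

x = rng.standard_normal(INPUT_DIM)
x_mod = modify(x)
assert x_mod.shape == x.shape
assert not np.allclose(x_mod, x)  # the modification differs from the original
```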
Claim(s) 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Thyagharajan in view of Kong and further in view of Song et al. (WO2020219165) (“Song”).
Regarding claim 10, Thyagharajan and Kong teach the method according to claim 9.
Thyagharajan and Kong are combined with the same rationale used in claim 1 and analogous claims 14 and 15.
Thyagharajan teaches wherein: training examples of the training data set whose priorities satisfy a predetermined criterion are selected (Thyagharajan Para 0042, As discussed with respect to FIGS. 1-3, the current parameters of the model are applied to the training space 400 to generate a set of class predictions C and a confidence score f_conf for each region i in the training space 400. For simplicity, FIG. 4 shows modified training space using two classes as examples: the "couch" and the "lamp" classes.
Para 0046 line 7-16, For example, the threshold may be set within the 20-80th percentile of confidence scores among the regions in the particular training space, or among the regions across training spaces in a wider training data set. After determining whether each region is relatively high or low confidence, in one embodiment a ratio of the low-confidence to high-confidence regions is determined and used as the determined subset confidence metric 530. In various configurations, other methods for characterizing the confidence scores of the regions within a subset may be used.
Para 0048, Using the assignment of region subsets to the group of low-confidence subsets 540 or the group of high-confidence subsets 550, the modified training space 560 is generated as discussed at FIG. 4, for example to remove high-confidence subsets 550 and keep low-confidence subsets 540 with added padding regions [training examples of the training data set].
[media_image11.png: drawing figure (greyscale)]
[whose priorities satisfy a predetermined criterion are selected,]),
However Thyagharajan does not explicitly teach and further training examples are randomly selected from the training data set and are included in the new, reduced data set.
However Song teaches and further training examples are randomly selected from the training data set and are included in the new, reduced data set (Song Para 0043, Systems, methods, and articles of manufacture for determining tissue or cell morphology classifications or regressions based on whole slide images (WSIs), e.g., whole-slide images of hematoxylin and eosin (H&E)-stained biopsy tissue sections, are described herein. The various embodiments provide for a classifier model to be trained to determine a WSI-level tissue and/or cell morphology classification or regression using deep learning methods based on a limited set of training pathology slide images. Thus, the limited number of available whole slide images due to a lack of patient data sharing, and regulations preventing the same, can be overcome by methods that require a relatively limited amount (~100s) of labeled whole slide images for training purposes. The various embodiments herein do not require detailed annotation of WSIs. Further, the embodiments herein overcome various storage and processing challenges of using typically very large whole slide images (e.g., 100,000×100,000 pixels or more) for deep-learning applications due to techniques for randomly generating fixed-size maps that reduce the size of the training dataset [and are included in the new, reduced data set.].
Para 0048 line 11-22, Fixed-size feature maps 1-N 502, 504, 506 may be (256, 256, 512) or (224, 224, 512) in size and may include randomly selected and/or randomly arranged feature map patches. For example, a varied-size feature map may be generated for each of the plurality of training WSIs by generating a grid of patches for the training WSI, segmenting the training WSI into tissue and non-tissue areas, and converting patches comprising the tissue areas into tensors, e.g., multidimensional descriptive vectors comprising RGB components. At least one bounding box may be generated based on the patches comprising the tissue areas and segmented into feature map patches. A fixed-size feature map for processing, e.g., each of fixed-size feature maps 1-N 502, 504, 506, may be generated based on at least a subset of the feature map patches, which may be randomly selected and/or arranged randomly within the fixed-size feature map [and further training examples are randomly selected from the training data set]).
Thyagharajan and Song are considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Thyagharajan in view of Song to select training examples randomly from the training data set. Doing so would overcome storage and processing challenges of large images (Song Para 0043, Systems, methods, and articles of manufacture for determining tissue or cell morphology classifications or regressions based on whole slide images (WSIs), e.g., whole-slide images of hematoxylin and eosin (H&E)-stained biopsy tissue sections, are described herein. The various embodiments provide for a classifier model to be trained to determine a WSI-level tissue and/or cell morphology classification or regression using deep learning methods based on a limited set of training pathology slide images. Thus, the limited number of available whole slide images due to a lack of patient data sharing, and regulations preventing the same, can be overcome by methods that require a relatively limited amount (~100s) of labeled whole slide images for training purposes. The various embodiments herein do not require detailed annotation of WSIs. Further, the embodiments herein overcome various storage and processing challenges of using typically very large whole slide images (e.g., 100,000×100,000 pixels or more) for deep-learning applications due to techniques for randomly generating fixed-size maps that reduce the size of the training dataset.)
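(Examiner Note: The claim 10 combination, priority-based selection supplemented by random selection into the reduced set, can be sketched as follows; the priority values, threshold, and number of random extras are hypothetical.)

```python
import random

def build_reduced_set(examples, priorities, threshold=0.8, n_random=2, seed=0):
    """Union of priority-selected examples and randomly drawn extras."""
    # Select examples whose priority satisfies the predetermined criterion.
    selected = [ex for ex, p in zip(examples, priorities) if p >= threshold]
    # Randomly draw further examples from the remainder of the data set.
    remainder = [ex for ex in examples if ex not in selected]
    rng = random.Random(seed)
    extras = rng.sample(remainder, min(n_random, len(remainder)))
    return selected + extras

examples = list("abcdef")
priorities = [0.9, 0.1, 0.85, 0.2, 0.3, 0.95]
reduced = build_reduced_set(examples, priorities)
assert set("acf") <= set(reduced) and len(reduced) == 5
```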
Regarding claim 11, Thyagharajan and Kong teach the method according to claim 9.
Thyagharajan and Kong are combined with the same rationale used in claim 1 and analogous claims 14 and 15.
Thyagharajan and Song are combined with the same rationale used in claim 10.
Song teaches wherein a more balanced distribution of numbers of the training examples over the available classes is established in the new, reduced training data set than is present in the training data set (Song Para 0047 FIG. 4 illustrates a graphical representation of a fixed-size feature map in accordance with an embodiment. Fixed-size feature map 400 is generated based on bounding boxes 302, 304, 306, and 308. For example, fixed-size feature map 400 may be one of a (256, 256, 512) feature map (including RGB), a (224, 224, 512) feature map (including RGB) or another fixed size that may correspond to, e.g., a deep-learning classifier model fixed-size input. In an embodiment, fixed-size feature map 400 is generated based on at least a subset of the feature map patches, e.g., feature map patches 402, 404, 406, and 408, which may be randomly selected and/or arranged randomly within fixed-size feature map 400. Further, each time fixed-size feature map 400 is generated based on bounding boxes 302, 304, 306, and 308, the locations of the subset of the feature map patches may be different within the fixed-size feature map. In some embodiments, the subset of the feature map patches may be randomly selected to define cancer-enriched areas or to summarize tumor content within a training WSI.
Para 0048, FIG. 5 illustrates a block diagram of example operations for determining tissue or cell morphology classifications or regressions based on whole slide images in accordance with an embodiment [wherein a more balanced distribution of numbers of the training examples over the available classes]. In system 500, a deep learning neural network model is trained using fixed-size feature maps that allows for the analysis of WSI characteristics, e.g., morphology structures from several microns to several millimeters in size. In an embodiment, a plurality of training WSIs (e.g., lung adeno and squamous carcinoma diagnostic whole-slide images obtained from TCGA, LUAD, or LUSC data sources) may be used to generate fixed-size feature maps, e.g., fixed-size feature maps 1-N 502, 504, 506, that reduce the size of the training dataset for further processing. For example, each of the plurality of training WSIs may correspond to a different patient. Fixed-size feature maps 1-N 502, 504, 506 may be (256, 256, 512) or (224, 224, 512) in size and may include randomly selected and/or randomly arranged feature map patches. For example, a varied-size feature map may be generated for each of the plurality of training WSIs by generating a grid of patches for the training WSI, segmenting the training WSI into tissue and non-tissue areas, and converting patches comprising the tissue areas into tensors, e.g., multidimensional descriptive vectors comprising RGB components [in the new, reduced training data set than is present in the training data set].).
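(Examiner Note: Establishing a more balanced class distribution in the reduced training data set, as mapped to Song above, can be sketched by subsampling each class down to the size of the rarest class; the example counts are hypothetical.)

```python
import random
from collections import defaultdict

def balance_classes(examples, labels, seed=0):
    """Subsample each class down to the size of the rarest class."""
    by_class = defaultdict(list)
    for ex, y in zip(examples, labels):
        by_class[y].append(ex)
    target = min(len(v) for v in by_class.values())  # rarest class size
    rng = random.Random(seed)
    balanced = []
    for y, exs in by_class.items():
        balanced.extend((ex, y) for ex in rng.sample(exs, target))
    return balanced

examples = list(range(10))
labels = ["cat"] * 7 + ["dog"] * 3  # imbalanced: 7 vs. 3
balanced = balance_classes(examples, labels)
counts = {y: sum(1 for _, yy in balanced if yy == y) for y in ("cat", "dog")}
assert counts == {"cat": 3, "dog": 3}
```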
Claim(s) 13 are rejected under 35 U.S.C. 103 as being unpatentable over Thyagharajan in view of Kong and further in view of Mercep et al. (US20180314921A1) (“Mercep”).
Regarding claim 13, Thyagharajan and Kong teach the method according to claim 12.
Thyagharajan and Kong are combined with the same rationale used in claim 1 and analogous claims 14 and 15.
Thyagharajan does not explicitly teach further comprising: supplying measurement data recorded by at least one sensor to the re-trained and/or further trained classifier; determining a control signal from output of the classifier; and controlling, using the control signal: a vehicle, and/or an area monitoring system and/or a quality control system and/or a medical imaging system.
However Mercep teaches further comprising: supplying measurement data recorded by at least one sensor to the re-trained and/or further trained classifier; determining a control signal from output of the classifier; and controlling, using the control signal: a vehicle, and/or an area monitoring system and/or a quality control system and/or a medical imaging system (Mercep para 0005, The applications in the advanced driver assistance systems or the autonomous driving systems can utilize the list of objects received from their corresponding sensors and, in some cases, the associated confidence levels of their detection, to implement automated safety and/or driving functionality. For example, when a RADAR sensor in the front of a vehicle provides the advanced driver assistance system in the vehicle a list having an object in a current path of the vehicle, the application corresponding to front-end collision in the advanced driver assistance system can provide a warning to the driver of the vehicle or control the vehicle in order to avoid a collision with the object.
Para 0034, The autonomous driving system 100 can include a vehicle control system 130 to receive the control signals 131 from the driving functionality system 120. The vehicle control system 130 can include mechanisms to control operation of the vehicle, for example by controlling different functions of the vehicle, such as braking, acceleration, steering, parking brake, transmission, user interfaces, warning systems, or the like, in response to the control signals [and controlling, using the control signal: a vehicle,].
Para 0060, The management system 410 can select one or more of the object models to utilize in the generation of the classification 406 for the sensor measurement data 401. The management system 410 can prepare the sensor measurement data 401 for comparison to the selected object models, and direct the graph system 420 via graph control signaling 414 to apply the selected object models to the prepared version of the sensor measurement data 401. The graph system 420 can generate at least one match distance 415 based on the application of the prepared version of the sensor measurement data 401 to the selected object models, and the management system 410 can generate the classification 406 for the sensor measurement data 401 based, at least in part, on the match distance 415 from the graph system 420 [determining a control signal from output of the classifier;].
Para 0095, The labeling system 700 can correlate different sets of the sensor measurement data 701 to each other, for example, based on relationships between detection events 703 associated with the sets of the sensor measurement data 701. The labeling system 700 can analyze the classifications 702 of the correlated sets of the sensor measurement data 701 and selectively re-label the sensor measurement data 701 with different classifications or modified confidence levels 740, for example, generating re-labeled sensor measurement data 711. The labeling system 700 can output the re-labeled sensor measurement data 711, which, as will be discussed below in greater detail, can be utilized to build or modify one or more classification graphs and retrain the machine learning classifier. In some embodiments, the labeling system 700 can be included in the machine learning classifier, or located externally from a classification system [supplying measurement data recorded by at least one sensor to the re-trained and/or further trained classifier;]).
Thyagharajan and Mercep are considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Thyagharajan in view of Mercep to supply measurement data from sensors. Doing so would allow retraining of the machine learning classifier (Mercep Para 0095, The labeling system 700 can correlate different sets of the sensor measurement data 701 to each other, for example, based on relationships between detection events 703 associated with the sets of the sensor measurement data 701. The labeling system 700 can analyze the classifications 702 of the correlated sets of the sensor measurement data 701 and selectively re-label the sensor measurement data 701 with different classifications or modified confidence levels 740, for example, generating re-labeled sensor measurement data 711. The labeling system 700 can output the re-labeled sensor measurement data 711, which, as will be discussed below in greater detail, can be utilized to build or modify one or more classification graphs and retrain the machine learning classifier. In some embodiments, the labeling system 700 can be included in the machine learning classifier, or located externally from a classification system.)
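(Examiner Note: The claim 13 pipeline, sensor measurement to classifier output to control signal, can be sketched as follows; the sensor record, the classifier, and the control mapping are all hypothetical stand-ins, not code from Mercep.)

```python
def classify(measurement):
    """Hypothetical re-trained classifier: returns (class, confidence)."""
    if measurement["range_m"] < 10.0:
        return ("obstacle", 0.93)
    return ("clear", 0.88)

def control_signal(classification, confidence, threshold=0.9):
    """Map classifier output to a vehicle control signal."""
    if classification == "obstacle" and confidence >= threshold:
        return "brake"
    return "maintain_speed"

# Hypothetical front-radar measurement supplied to the trained classifier.
reading = {"sensor": "radar_front", "range_m": 4.2}
cls, conf = classify(reading)
assert control_signal(cls, conf) == "brake"
```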
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALFREDO CAMPOS whose telephone number is (571)272-4504. The examiner can normally be reached 7:00 - 4:00 pm M - F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALFREDO CAMPOS/Examiner, Art Unit 2129
/MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129