DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after July 18, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Regarding Claim 1:
Step 1 – Is the claim to a process, machine, manufacture, or composition of matter?
Yes
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, the claim recites the abstract ideas of:
initializing, for a batch of data from the training survival dataset, an estimated probability distribution for the prediction model; This limitation is directed to the abstract idea of a mathematical concept, as initializing a probability distribution is analogous to a mathematical calculation and mathematical relationships (see MPEP 2106.04(a)(2) I. C.).
constructing a soft label, for each of the plurality of censored individuals, by shifting the estimated individual probability distribution for a respective one of the plurality of censored individuals by a predetermined value; This limitation is directed to the abstract idea of a mathematical concept, as shifting the probability distribution is analogous to a mathematical calculation (see MPEP 2106.04(a)(2) I. C.).
generating a loss by summing, for each of the plurality of censored individuals, a weighted scoring rule using the soft labels and the individual probability distributions; This limitation is directed to the abstract idea of a mathematical concept, as summing a weighted scoring rule using labels and probability distributions is analogous to a mathematical calculation based on mathematical relationships (see MPEP 2106.04(a)(2) I. C.).
modifying the estimated probability function based upon the loss; The modification of the probability function based on the loss is analogous to modifying values based on mathematical relationships, as the probability function (being a value) depends on the loss (another value) (see MPEP 2106.04(a)(2) I. C.).
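Purely for illustration, the four recited mathematical operations can be sketched as follows; all values, the shift amount, and the helper name `shift_distribution` are hypothetical and appear nowhere in the claims:

```python
import numpy as np

# Hypothetical toy setup: N censored individuals, K discrete time bins.
N, K = 4, 5

# (1) Initialize an estimated probability distribution (uniform here).
probs = np.full((N, K), 1.0 / K)

# (2) Construct a soft label for each individual by shifting its estimated
#     distribution by a predetermined value (one bin) and renormalizing.
def shift_distribution(p, shift=1):
    q = np.roll(p, shift)
    q[:shift] = 0.0
    return q / q.sum()

soft_labels = np.array([shift_distribution(p) for p in probs])

# (3) Generate a loss by summing a weighted scoring rule (cross-entropy here)
#     over the censored individuals.
weights = np.ones(N)
eps = 1e-12
loss = float(np.sum(weights * -np.sum(soft_labels * np.log(probs + eps), axis=1)))

# (4) Modify the estimated distribution based on the loss: one gradient step
#     on the log-probabilities, followed by renormalization.
grad = -weights[:, None] * soft_labels / (probs + eps)
logits = np.log(probs + eps) - 0.01 * grad
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
```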
Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? – No, there are no additional elements that integrate the judicial exception into a practical application.
A computer-implemented method for training a prediction model for dynamic survival analysis of a training survival dataset representing a plurality of individuals. This limitation invokes a computer merely as a tool for performing an existing process [see MPEP 2106.05(f)(2)] and therefore fails to amount to significantly more than the judicial exception.
repeating the determining, the generating, the constructing, and the modifying until the loss is minimized. This limitation is directed to merely manipulating data in an iterative manner. Such activity constitutes insignificant extra-solution activity and therefore does not impose meaningful limits on the claim (MPEP 2106.05(g)).
The survival dataset includes censored data. The limitation that the survival data includes censored data merely specifies the type or characteristic of the input data used by the mathematical model. Specifying the type of data used in performing the abstract idea constitutes insignificant extra-solution activity and therefore does not integrate the judicial exception into a practical application (MPEP 2106.05(g)).
Step 2B – Does the claim recite any additional elements that amount to significantly more than the judicial exception? – No, there are no additional elements that amount to significantly more than the judicial exception.
A computer-implemented method for training a prediction model for dynamic survival analysis of a training survival dataset representing a plurality of individuals. This limitation invokes a computer merely as a tool to train a model for performing an existing process [see MPEP 2106.05(f)(2)] and therefore fails to amount to significantly more than the judicial exception.
repeating the determining, the generating, the constructing, and the modifying until the loss is minimized. This limitation is well-understood, routine, and conventional (WURC), since repeating operations such as the determining and the generating falls under the court-recognized example of "performing repetitive calculations" (MPEP 2106.05(d) II. ii.).
The survival dataset includes censored data. This limitation specifying that the survival data includes censored data does not amount to significantly more than the abstract idea. The limitation merely describes the type of data used and represents well-understood, routine, and conventional activity in the field of data processing (MPEP 2106.05(d)).
Step 2A Prong Two and Step 2B:
Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. The claim is ineligible.
Regarding Claim 2:
Step 1 – Is the claim to a process, machine, manufacture, or composition of matter?
Yes
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, the claim recites the abstract ideas of:
The method of claim 1, wherein the soft labels are constructed using an estimation of a probability of event occurrence and an estimation of a probability of survival function. This limitation is directed to the abstract idea of a mathematical concept, as using the probabilities to construct the soft label is analogous to a mathematical calculation (see MPEP 2106.04(a)(2) I. C.), and the probability of survival function is analogous to mathematical formulas or equations (see MPEP 2106.04(a)(2) I. B.).
Regarding Claim 3:
Step 1 – Is the claim to a process, machine, manufacture, or composition of matter?
Yes
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, the claim recites the abstract ideas of:
The method of claim 2, wherein the soft labels are constructed as a probability distribution in a form of a length-K vector. This limitation is directed to the abstract idea of a mathematical concept, as constructing the labels in the form of a vector is analogous to a mathematical calculation (see MPEP 2106.04(a)(2) I. C.).
Regarding Claim 4:
Step 1 – Is the claim to a process, machine, manufacture, or composition of matter?
Yes
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, the claim recites the abstract ideas of:
The method of claim 1, wherein the scoring rule is a Bregman divergence. This limitation is directed to the abstract idea of a mathematical concept, as a Bregman divergence is a measure of the difference between two points, which is analogous to a mathematical calculation (see MPEP 2106.04(a)(2) I. C.).
Regarding Claim 5:
Step 1 – Is the claim to a process, machine, manufacture, or composition of matter?
Yes
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, the claim recites the abstract ideas of:
The method of claim 1, wherein the scoring rule is weighted by an estimated probability distribution of censoring time. This limitation is directed to the abstract idea of a mathematical concept, as weighting by a probability distribution is analogous to a mathematical calculation (see MPEP 2106.04(a)(2) I. C.).
Regarding Claim 6:
Step 1 – Is the claim to a process, machine, manufacture, or composition of matter?
Yes
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, the claim recites the abstract ideas of:
The method of claim 1, wherein the loss is determined to be minimized using a neural network. This limitation is directed to the abstract idea of a mathematical concept, as determining or minimizing a loss is analogous to a mathematical calculation (see MPEP 2106.04(a)(2) I. C.).
Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? – No, there are no additional elements that integrate the judicial exception into a practical application.
Using a neural network. The claim further recites the use of a neural network. However, the neural network merely provides instructions to apply the mathematical concept and therefore does not integrate the judicial exception into a practical application. The claim does not improve the functioning of a computer or another technology (MPEP 2106.04(d)(1)).
Step 2B – Does the claim recite any additional elements that amount to significantly more than the judicial exception? – No, there are no additional elements that amount to significantly more than the judicial exception.
Using a neural network. The additional elements, including the neural network, do not amount to significantly more than the abstract idea. The neural network represents well-understood, routine, and conventional activity previously known in the field of machine learning (MPEP 2106.05(d)).
Step 2A Prong Two and Step 2B:
Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. The claim is ineligible.
Regarding Claim 7:
Step 1 – Is the claim to a process, machine, manufacture, or composition of matter?
Yes
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, the claim recites the abstract ideas of:
The prediction model is a neural network model of the neural network. This limitation is directed to the abstract idea of a mathematical concept, as defining the prediction model as a neural network model is analogous to a mathematical relationship (see MPEP 2106.04(a)(2) I. A.).
The neural network model determines the individual estimated probability distributions. This limitation is directed to the abstract idea of a mathematical concept, as using a neural network to determine the estimated probability distributions is analogous to a mathematical calculation (see MPEP 2106.04(a)(2) I. C.).
Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? – No, there are no additional elements that integrate the judicial exception into a practical application.
A/the neural network. The claim further recites the use of a neural network. However, the neural network merely provides instructions to apply the mathematical concept and therefore does not integrate the judicial exception into a practical application. The claim does not improve the functioning of a computer or another technology (MPEP 2106.04(d)(1)).
Step 2B – Does the claim recite any additional elements that amount to significantly more than the judicial exception? – No, there are no additional elements that amount to significantly more than the judicial exception.
A/the neural network. The additional elements, including the neural network, do not amount to significantly more than the abstract idea. The neural network represents well-understood, routine, and conventional activity previously known in the field of machine learning (MPEP 2106.05(d)).
Step 2A Prong Two and Step 2B:
Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d) I.), failing step 2A prong 2. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under step 2B. The claim is ineligible.
Regarding claims 8 – 14
Claims 8 - 14 recite analogous limitations to claims 1 - 7 (respectively) and therefore they are rejected on the same grounds as claims 1 - 7.
Regarding claims 15 – 19
Claims 15 - 19 recite analogous limitations to claims 1 - 5 (respectively) and therefore they are rejected on the same grounds as claims 1 - 5.
Regarding claim 20
Claim 20 recites limitations analogous to the combination of claims 6 and 7 and therefore is rejected on the same grounds as claims 6 and 7.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 5, 8-10, 12, 15-17, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Temporally-Consistent Survival Analysis by Lucas Maystre (hereinafter Maystre).
Referring to Claim 1, Maystre teaches a computer-implemented method for training a prediction model for dynamic survival analysis of a training survival dataset representing a plurality of individuals. See Maystre at [Abstract]: "We study survival analysis in the dynamic setting: We seek to model the time to an event of interest given sequences of states. Taking inspiration from temporal difference learning, a central idea in reinforcement learning, we develop algorithms that estimate a discrete-time survival model by exploiting a temporal-consistency condition." Examiner interprets studying survival analysis in the dynamic setting and seeking such a model as equivalent to the claimed training of a prediction model for dynamic survival analysis.
initializing, for a batch of data from the training survival dataset, an estimated probability distribution for the prediction model; See Maystre at [page 2, section 2: Preliminaries]: "We consider sequences drawn from a Markov chain on a state space X, with initial distribution π0(x) and transition probabilities p(x'|x)." Examiner interprets sequences drawn from the state space X, with an initial distribution, as equivalent to initializing a probability distribution.
determining, for each of a plurality of censored individuals, an individual estimated probability distribution; See Maystre at [page 3, 4th paragraph]: "In practice, we will be estimating survival from sequences sampled from the Markov chain. We assume that each sequence s = (x0, x1, . . . , xt) that we observe either a) ends as soon as it reaches the terminal state, or b) has not yet reached the terminal state. A sequence where xt ≠ ∅ is called right-censored. We call c = 1{xt ≠ ∅} the censoring indicator, and the index t of the last observed state is the time-to-event or censoring time. We collect observed sequences into a dataset D = {sn : n = 1, . . . , N}, noting that sequences can be of different lengths. Given such a dataset, a natural choice for the horizon K is the length of the longest sequence." Examiner interprets each sequence as equivalent to each censored individual, and the values in the different states of a sequence as equivalent to the probability distribution. Thus, Maystre teaches the limitation.
constructing a soft label, for each of the plurality of censored individuals, by shifting the estimated individual probability distribution for a respective one of the plurality of censored individuals by a predetermined value; See Maystre at [page 6]: "Whereas the MLE regresses hard binary targets that are exclusively based on the observed time-to-event or censoring time, we regress a combination of hard targets (whether the sequence terminates at the next step, Line 3) and soft targets (predictions at the next state, Line 6)." Examiner interprets regressing a soft target as equivalent to constructing a soft label, since it is well established in the art that "soft target" and "soft label" are used interchangeably for a probability distribution in the context of machine learning. Also see Maystre at [page 6, Algorithm 1]: in line 2, the for-loop "for m = 1, . . . , M do"; Examiner interprets the iterations 1, . . . , M as equivalent to the operation of determining for each of the censored individuals. In line 6, "ymk ← hΘ(k − 1 | x'm)", based on predictions at x'm; Examiner interprets ymk as the soft target, which is constructed by the hazard function hΘ, and x'm as equivalent to the predetermined value. Thus, Maystre teaches the limitation.
generating a loss by summing, for each of the plurality of censored individuals, a weighted scoring rule using the soft labels and the individual probability distributions; See Maystre at [page 6, Algorithm 1]: Examiner interprets the cross-entropy loss minimization (arg min) in line 8 as equivalent to the scoring rule that generates a loss by summing; wmk as equivalent to the weight, which is determined in line 7; and ymk as equivalent to a soft label, which is used as a parameter of the arg-min function in line 8. Thus, Maystre teaches the limitation as claimed.
modifying the estimated probability function based upon the loss. See Maystre at [page 6, Algorithm 1]: in line 8, after generating a loss by summing, a new parameter Θ is determined, and the parameter Θ in the function h (in line 6) and the function S (in line 7) is updated with this new Θ. Examiner interprets updating the parameter Θ as equivalent to modifying the function as claimed.
repeating the determining, the generating, the constructing, and the modifying until the loss is minimized; See Maystre at [page 6, Algorithm 1]: "1: repeat … 9: until Θ has converged". Examiner interprets lines 2 through 8 as equivalent to the operations of determining, generating, constructing, and modifying, which are repeated until the system finds the converged parameter Θ that minimizes the summed loss.
the survival dataset includes censored data. See Maystre at [page 6]: "Datasets of sequences. In practice, we will be estimating survival from sequences sampled from the Markov chain. We assume that each sequence s = (x0, x1, . . . , xt) that we observe either a) ends as soon as it reaches the terminal state, or b) has not yet reached the terminal state. A sequence where xt ≠ ∅ is called right-censored. We call c = 1{xt ≠ ∅} the censoring indicator, and the index t of the last observed state is the time-to-event or censoring time." Examiner interprets the datasets of sequences with the censoring time as equivalent to the survival dataset as claimed.
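As technical background on the temporal-consistency condition quoted from Maystre's abstract, the survival probabilities of a Markov chain satisfy S(k|x) = Σx' p(x'|x)·S(k−1|x'). A minimal numerical sketch follows; the three-state chain and its transition probabilities are hypothetical, drawn neither from the claims nor from Maystre:

```python
import numpy as np

# Hypothetical Markov chain: states 0 and 1 are transient; state 2 is the
# absorbing terminal state (the "event").
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
])
K = 10  # horizon

# S[k, x] = Pr(T > k | x0 = x), where T is the hitting time of the terminal
# state. Temporal consistency: S(k|x) = sum over x' of p(x'|x) * S(k-1|x').
S = np.zeros((K + 1, 3))
S[0] = [1.0, 1.0, 0.0]  # starting non-terminal, the event has not yet occurred
for k in range(1, K + 1):
    S[k] = P @ S[k - 1]
```

Each row of `S` is a survival curve for one starting state; the recursion is exactly the consistency condition that Maystre's Algorithm 1 enforces on learned estimates.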
Referring to Claim 2, Maystre teaches the soft labels are constructed using an estimation of a probability of event occurrence and an estimation of a probability of survival function. See Maystre at [page 3]: "We describe a survival distribution by using three interrelated functions:
[Image omitted: equations from Maystre (page 3) defining the three interrelated functions.]
It is possible to express any one function in terms of any other." Examiner interprets that the hazard function, which is used to create the soft target, can equivalently be expressed as a survival function. Also see Maystre at [page 2 - 3]: "In this paper, we use the terminology of survival analysis and refer to T as the time-to-event. We call the probability distribution of T the survival distribution." Examiner interprets that "the survival distribution" contains the probability of the event, which is used in the survival function S(k|x). Thus, Maystre teaches the limitation as claimed.
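The interchangeability relied on above is the standard discrete-time survival identity S(k) = Π over j ≤ k of (1 − h(j)) and f(k) = h(k)·S(k−1). A small sketch with hypothetical hazard values illustrates it:

```python
import numpy as np

# Hypothetical discrete-time hazards h(k) = Pr(T = k | T >= k).
h = np.array([0.1, 0.2, 0.3, 0.4])

# Survival function S(k) = Pr(T > k) = prod over j <= k of (1 - h(j)).
S = np.cumprod(1.0 - h)

# Probability mass f(k) = Pr(T = k) = h(k) * S(k - 1), with S(-1) = 1.
S_prev = np.concatenate(([1.0], S[:-1]))
f = h * S_prev
```

Because each function determines the others, an estimate expressed as a hazard can be read off equivalently as a survival function or as an event-probability distribution.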
Referring to Claim 3, Maystre teaches the soft labels are constructed as a probability distribution in a form of a length-K vector. See Maystre at [page 6, Algorithm 1]: line 5, "for k = 2, . . . , K do", and line 6, "ymk ← hΘ(k − 1 | x'm)". Examiner interprets the K entries of the soft label ymk as equivalent to a length-K vector.
Referring to Claim 5, Maystre teaches the scoring rule is weighted by an estimated probability distribution of censoring time. See Maystre at [page 5 - 6, Algorithm 1]:
“It is instructive to compare the optimization problem on Line 8 to the maximum-likelihood estimator (1). Similarly to the MLE, our approach casts survival estimation as a weighted binary classification problem. Whereas the MLE regresses hard binary targets that are exclusively based on the observed time-to-event or censoring time, we regress a combination of hard targets (whether the sequence terminates at the
next step, Line 3) and soft targets (predictions at the next state, Line 6). Similarly, our weights are functions of observations and predictions at the next state." Examiner interprets the arg-min function on Line 8 as equivalent to the scoring rule, wmk as equivalent to the weight, and the regression of hard binary targets based on the observed time-to-event or censoring time as equivalent to the scoring rule being weighted by an estimated probability distribution.
Referring to Claims 8 and 15, these claims are rejected on the same basis as claim 1, mutatis mutandis, since they are analogous claims.
Referring to dependent Claims 9 and 16, these claims are rejected on the same basis as dependent claim 2, mutatis mutandis, since they are analogous claims.
Referring to dependent Claims 10 and 17, these claims are rejected on the same basis as dependent claim 3, mutatis mutandis, since they are analogous claims.
Referring to dependent Claims 12 and 19, these claims are rejected on the same basis as dependent claim 5, mutatis mutandis, since they are analogous claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Maystre in view of Abernethy (A Characterization of Scoring Rules for Linear Properties; hereinafter Abernethy).
Referring to Claim 4, Maystre teaches the computer-implemented method of claim 1. However, it fails to teach:
The scoring rule is a Bregman divergence.
Abernethy teaches, in an analogous system:
The scoring rule is a Bregman divergence. See Abernethy at page 27.2: "The central conclusion of the present paper is that any scoring rule for a linear property Γ must take the form of a Bregman divergence." Examiner interprets Abernethy as teaching the use of a Bregman divergence as a scoring rule.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Maystre with the above teachings of Abernethy such that the scoring rule is a Bregman divergence. The modification would have been obvious because one of ordinary skill in the art would be motivated to make better predictions in prediction markets, as suggested by Abernethy at [Abstract]: "A key conclusion is that any such scoring rule can be written in the form of a Bregman divergence for some convex function. We also apply our results to the design of prediction market mechanisms, showing a strong equivalence between scoring rules for linear properties and automated prediction market makers."
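For context on the cited scoring rules: a Bregman divergence generated by a convex function F is D_F(p, q) = F(p) − F(q) − ⟨∇F(q), p − q⟩. The sketch below (all helper names are hypothetical) checks two standard special cases, the squared Euclidean distance and the KL divergence:

```python
import numpy as np

def bregman(F, gradF, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    return F(p) - F(q) - np.dot(gradF(q), p - q)

# F(x) = ||x||^2 generates the squared Euclidean distance.
sq = lambda x: np.dot(x, x)
grad_sq = lambda x: 2.0 * x
p, q = np.array([1.0, 2.0]), np.array([0.0, 1.0])
d_euclid = bregman(sq, grad_sq, p, q)

# F(x) = sum x log x (negative entropy) generates the KL divergence
# when p and q are probability vectors.
negent = lambda x: np.sum(x * np.log(x))
grad_negent = lambda x: np.log(x) + 1.0
p2, q2 = np.array([0.3, 0.7]), np.array([0.5, 0.5])
d_kl = bregman(negent, grad_negent, p2, q2)
```

The cross-entropy-style losses discussed for Maystre's Algorithm 1 correspond to the negative-entropy generator, which is one reason a Bregman divergence is a natural drop-in scoring rule.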
Referring to dependent Claims 11 and 18, these claims are rejected on the same basis as dependent claim 4, mutatis mutandis, since they are analogous claims.
Claims 6, 7, 13, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Maystre in view of Cao (CN Pub. No. CN 114757266 A; hereinafter Cao).
Referring to Claim 6, Maystre teaches the computer-implemented method of claim 1. However, it fails to teach:
the loss is determined to be minimized using a neural network.
Cao teaches, in an analogous system:
The loss is determined to be minimized using a neural network. See Cao at [0016]: "In network training, the loss value of the prediction model is calculated using an objective function. By continuously updating the parameters in the neural network model, the loss of the prediction model on the training dataset is minimized." Examiner interprets minimizing the loss of the prediction model by updating the neural network parameters as equivalent to the loss being determined to be minimized using a neural network.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Maystre with the above teachings of Cao such that the loss is determined to be minimized using a neural network. The modification would have been obvious because one of ordinary skill in the art would be motivated to improve the accuracy of predicting the propagation source, as suggested by Cao at [0016]: "By adjusting the learning weights of high-energy events, the prediction model can be more effective and more accurate, thereby reducing the false negative rate of high-energy events, effectively improving the problem of data class imbalance, effectively accelerating the model convergence speed, and improving the model prediction accuracy."
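The mechanism quoted from Cao, continuously updating network parameters so that the training loss is minimized, is ordinary gradient descent. A minimal sketch with a one-layer model and hypothetical synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: 64 samples, 3 features, binary labels.
X = rng.normal(size=(64, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

w = np.zeros(3)  # parameters of a minimal one-layer "network"
eps = 1e-12
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                      # forward pass
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)                            # gradient of the loss
    w -= 0.5 * grad                                          # continuously update parameters
```

Each iteration computes the objective-function loss and adjusts the parameters against its gradient, which is the "continuously updating the parameters ... so the loss is minimized" behavior the citation describes.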
Referring to Claim 7, Maystre teaches the computer-implemented method of claim 1. However, it fails to teach:
The prediction model is a neural network model of the neural network, and the neural network model determines the individual estimated probability distributions.
Cao teaches, in an analogous system:
The prediction model is a neural network model of the neural network, and the neural network model determines the individual estimated probability distributions. See Cao at [0016]: "In network training, the loss value of the prediction model is calculated using an objective function. By continuously updating the parameters in the neural network model, the loss of the prediction model on the training dataset is minimized." Examiner interprets this network training as teaching that the prediction model is a neural network model.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Maystre with the above teachings of Cao such that the prediction model is a neural network model of the neural network. The modification would have been obvious because one of ordinary skill in the art would be motivated to improve the accuracy of predicting the propagation source, as suggested by Cao at [0016]: "By adjusting the learning weights of high-energy events, the prediction model can be more effective and more accurate, thereby reducing the false negative rate of high-energy events, effectively improving the problem of data class imbalance, effectively accelerating the model convergence speed, and improving the model prediction accuracy."
Referring to dependent Claim 13, the claim is rejected on the same basis as claim 6, mutatis mutandis, since they are analogous claims.
Referring to dependent Claim 14, the claim is rejected on the same basis as claim 7, mutatis mutandis, since they are analogous claims.
Referring to Claim 20, the claim is rejected on the same basis as the combination of claims 6 and 7, mutatis mutandis, since they are analogous claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIAYUE MA whose telephone number is (571)272-9658. The examiner can normally be reached between 9 am and 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Jiayue Ma/
Examiner, Art Unit 2126
/DAVID YI/Supervisory Patent Examiner, Art Unit 2126