DETAILED ACTION
This Action is responsive to claims filed 11/03/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-3, 6-7, 10-12, 15-17, and 19 have been amended. Claims 1-20 are pending.
Response to Arguments
Applicant's arguments, see Pages 11-15, filed 11/03/2025, regarding the 35 U.S.C. 101 Rejection of Claims 1-20 have been fully considered but they are not persuasive.
The Applicant argues the independent claims integrate the abstract idea into a practical application by way of recited improvements to the functioning of a computer or another technological field. The Examiner respectfully disagrees with the Applicant. The amount of data processed is irrelevant to the BRI of a limitation falling under the mental process category. As presently drafted, the limitations of “identifying…”, “generating…”, “transforming…”, “replacing…”, “generating...”, and “mapping…” do not recite sufficient structure or detailed implementation precluding them from being performed in the human mind or with the aid of pen and paper. The recitation of a computing environment is highly general: a generic computer and a generic neural network perform generic functions and generate generic output. As presently drafted, the specific improvement the Applicant alleges on Page 14 of the arguments is a direct result of applying the series of algorithmic data-manipulation steps represented by the aforementioned limitations interpretable as mental processes. Per MPEP 2106.05(a), the alleged improvement cannot come from the abstract idea itself. See the updated 35 U.S.C. 101 Rejection below.
Applicant's arguments, see Pages 15-17, filed 11/03/2025, with respect to the rejection(s) of claim(s) 1-2, 4, 7-11, 13, 15-16, and 18-20 under 35 U.S.C. 102(a)(2) have been fully considered but they are not persuasive.
The inclusion of the “scalar” limitation on the type of data operated on is broad and continues to be read on by Modarresi (see Figures 3, 5, and 9, at least), given that the BRI of “scalar tabular data” is merely a table with a single value in each data entry. Additionally, although the Applicant argues that Modarresi does not teach projecting the input data into a dimensionality higher than that of the input data, with which the Examiner agrees, the claims as presently drafted do not themselves require the projection to be higher than the input data. Therefore, Modarresi’s method of taking input data, encoding it into a lower-dimensional state, and decoding it into a higher-dimensional state (in this case, back to the original input dimensionality) continues to broadly read on the independent claims as presently drafted. See the updated 35 U.S.C. 102(a)(2) rejection below.
Applicant's arguments with respect to the remaining prior art rejections under 35 U.S.C. 103 have been fully considered but they are not persuasive.
See above how the Examiner contends the 35 U.S.C. 102(a)(2) rejection of the independent claims should be upheld in light of the present amendments. In the absence of further arguments regarding the combination of Modarresi and Boyle, the Examiner contends the 35 U.S.C. 103 rejection should be upheld as well. See the updated 35 U.S.C. 103 Rejection below.
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea. See Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. 208 (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidance. (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019.)
Step 1:
Claims 1-9 recite a computer-implemented method, which falls under the statutory category of a process. Claims 10-15 recite a non-transitory computer-readable medium, which falls under the statutory category of a manufacture. Claims 16-20 recite a system, which falls under the statutory category of a machine.
Step 2A – Prong 1:
Claim 1 recites an abstract idea, law of nature, or natural phenomenon. The limitations of “identifying a tabular data set for a neural network comprising measured scalar tabular data values and placeholder tabular data values;”, “generating a scalar transformed tabular data set in a scalar feature space having a single dimension…”, “transforming the measured scalar tabular data values to a neural network value range based on a distribution of the measured tabular data values;”, “and replacing the placeholder tabular data values with a constant scalar value within the neural network value range;”, and “generating a high-dimensionality tabular data set in a high-dimensional machine learning feature space by mapping…scalar numerical values from the scalar transformed tabular data set in the scalar feature space having the single dimension to high-dimensional feature vectors within the high-dimensional machine learning space;” under the broadest reasonable interpretation, cover a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper.
Identifying a dataset is practically performed within the human mind. Generating a transformed data set is practically performed within the human mind or with the aid of pen and paper. Transforming a dataset into a range of values is practically performed within the human mind or with the aid of pen and paper. Replacing values with constant values within a range is practically performed within the human mind or with the aid of pen and paper. Generating a dataset by mapping numerical values is practically performed within the human mind or with the aid of pencil and paper.
Step 2A – Prong 2:
The additional elements of claim 1 do not integrate the abstract idea into a practical application. The claim recites the additional elements “A computer-implemented method”, “a tabular data set”, and “data values”, which are recognized as generic computer components recited at a high level of generality. Although these components execute instructions to perform the abstract idea itself, this does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it" (see MPEP 2106.04(d)(2), indicating mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application).
The additional elements of “a neural network”, “a neural network value range”, “a neural network projection layer”, and “a prediction” are recognized as non-generic computer components, but are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
The additional elements recited in the limitations “utilizing a neural network projection layer,” and “generating a prediction from the high-dimensionality tabular data set within the high-dimensional machine learning feature space utilizing the neural network.” are found to be mere instructions to apply the abstract idea of generating and transforming the dataset(s) (see MPEP 2106.05(f) indicating mere instructions to apply an abstract idea does not amount to integrating the abstract idea into a practical application).
Step 2B:
The only limitations on the performance of the described method are those reciting “A computer-implemented method”, “a tabular data set”, and “data values”. These elements are insufficient to transform the judicial exception into a patent-eligible invention because the recited elements are considered insignificant extra-solution activity (a generic computer system and processing resources that link the judicial exception to a particular technological environment). The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components; mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (see MPEP 2106.05(f)).
The additional elements of “a neural network”, “a neural network value range”, “a neural network projection layer”, and “a prediction” are recognized as non-generic computer components, but are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
The additional elements recited in the limitations “utilizing a neural network projection layer,” and “generating a prediction from the high-dimensionality tabular data set within the high-dimensional machine learning feature space utilizing the neural network.” are found to be mere instructions to apply the abstract idea (See MPEP 2106.05(f) indicating mere instructions to apply an abstract idea does not recite significantly more).
Taken alone or as an ordered combination, these additional elements do not amount to significantly more than the above-identified abstract idea. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology; their collective functions merely provide conventional computer implementation.
For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to independent claims 10 and 16.
Claim 10 recites similar limitations to claim 1, with the exception of “A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause a computer system to:” (generic computer components), therefore both claims are similarly rejected.
Claim 16 recites similar limitations to claim 1, with the exception of “A system comprising: at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor, cause the system to:” (generic computer components), therefore both claims are similarly rejected.
Dependent Claims:
Claim 2 (claim 11) recites abstract idea mental process steps (“…generate normalized feature vectors from the high-dimensional feature vectors within the high-dimensional machine learning feature space.”) and instructions to apply the abstract idea (“utilizing a normalization layer…”) (See MPEP 2106.05(f)).
Claim 3 (claims 12 and 17) recites abstract idea mental process steps (“dividing the high-dimensional feature vectors within the high-dimensional machine learning feature space by L2 norms of the high-dimensional feature vectors.”)
Claim 4 (claim 13 and 18) recites instructions to apply the abstract idea of claim 1 (“training the neural network by: determining a measure of loss by comparing the prediction to a training ground truth utilizing a loss function; and modifying parameters of the neural network utilizing the measure of loss.”)
Claim 5 (claim 14) recites instructions to apply the abstract idea of claims 1 and 4 (“training the neural network utilizing dropout regularization and L2 regularization”).
Claim 6 recites abstract idea mental process steps (“determining a mean of the measured scalar tabular data values and a deviation metric of the measured scalar tabular data values;” and “and transforming the measured scalar tabular data values to the neural network value range based on the mean and the deviation metric.”)
Claim 7 (claims 15 and 19) recites abstract idea mental process steps (“replacing the placeholder tabular data values with a transformed mean metric for the measured scalar tabular data values within the neural network value range.”).
Claim 8 recites abstract idea mental process steps (“determining the placeholder tabular data values by identifying unmeasured, proxy tabular data values within the tabular data set.”)
Claim 9 (claim 20) recites instructions to apply the abstract idea of claim 1 (“utilizing the neural network to generate a predicted client disposition from client features corresponding to a client device participating in an automated client interaction, and further comprising, utilizing the predicted client disposition to generate an automated interaction response for the client device.”)
The additional elements of claim 9 are recognized as non-generic computer components, but are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
Claim Rejections - 35 USC § 102
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claim(s) 1-2, 4, 7-11, 13, 15-16, 18-20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Modarresi et al. (US 11,770,571 B2), hereinafter Modarresi.
In regards to claim 1: The present invention claims: “A computer-implemented method comprising: identifying a tabular data set for a neural network comprising measured scalar tabular data values and placeholder tabular data values;” Modarresi teaches taking an incomplete matrix dataset (Fig. 3, for example) to be completed for tasks requiring complete data (Column 3, Lines 1-5). The BRI of “scalar tabular data” is merely a table with single values in each data entry, which Modarresi teaches (Figures 3, 5, and 9, at least).
“generating a scalar transformed tabular data set in a scalar feature space having a single dimension by: transforming the measured tabular data values to a neural network value range based on a distribution of the measured scalar tabular data values;” Modarresi teaches “The matrix manager system encodes the incomplete matrix and thus generates an encoded incomplete matrix. As part of this encoding, the matrix manager system determines whether each attribute represented by the incomplete matrix corresponds to a numerical or a categorical attribute. This is because the matrix manager system encodes numerical attributes differently from categorical attributes. Broadly speaking, the matrix manager system normalizes the known values of a numerical attribute, e.g., using a normalization technique such as Min Max Scaling. In contrast, the matrix manager system categorically encodes the known values of a categorical attribute, so that the categorically encoded attributes have a value ( e.g., '0' or ' 1 ') capable of serving as input to the machine-learning model. In one or more implementations, the matrix manager system categorically encodes the categorical attributes using an encoding technique, such as One-Hot encoding.” (Column 3, Lines 47-63, mapping the use of Min-Max Scaling of the numerical values to “based on the distribution” since Min-Max Scaling maintains the distribution).
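(Examiner’s Note: For illustration only, the Min-Max Scaling technique the Examiner maps to this limitation may be sketched as below. The sample values and the target range [0, 1] are hypothetical and are not drawn from Modarresi’s disclosure or the claims.)

```python
# Min-Max Scaling: linearly rescale known numerical values into a
# target range (here the assumed range [0, 1]) using the observed
# minimum and maximum of the measured values.
def min_max_scale(values, lo=0.0, hi=1.0):
    v_min, v_max = min(values), max(values)
    span = v_max - v_min
    if span == 0:  # all values identical: map every entry to the lower bound
        return [lo for _ in values]
    return [lo + (hi - lo) * (v - v_min) / span for v in values]

scaled = min_max_scale([10.0, 20.0, 40.0])  # hypothetical measured values
```

Because the transform is linear, the relative spacing of the measured values is preserved, which is the basis for the Examiner’s mapping of Min-Max Scaling to “based on the distribution.”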
“and replacing the placeholder tabular data values with a constant scalar value within the neural network value range;” Modarresi teaches “Once encoded, the encoded incomplete matrix is used as input to the machine-learning model, along with masks indicative of the known and unknown values encoded in the encoded incomplete matrix, the encoded attributes corresponding to numerical attributes, and the encoded attributes corresponding to categorical attributes. This information is used to train the machine-learning model to impute values for the unknown values.” (Columns 3-4, Lines 64-67 and 1-4, respectively, mapping imputing the data to replacing placeholder values). See Figures 6A and 6B, and Figures 11 and 12 for transforming an incomplete matrix into a complete matrix with machine learning.
“generating a high-dimensionality tabular data set in a high-dimensional machine learning feature space by mapping, utilizing a neural network projection layer, scalar numerical values from the scalar transformed tabular data set in the scalar feature space having the single dimension to high-dimensional feature vectors within the high-dimensional machine learning feature space;” Modarresi teaches “In general, the autoencoder manager module 404 configures the hidden layers 706, 708, 710 to have fewer numbers of nodes than the input and output layers 702, 704 to project the data encoded in the cells of the encoded basis matrix 412 to a lower dimension. By projecting this data to a lower dimension, the autoencoder manager module 404 can identify the latent factors of the dataset, e.g., the values entered into the basis matrix.” (Column 17, Lines 29-39) and “Here, the term W nz represents the weights of the decoder 2 718 learned from the network and the term B nz represents the bias of the decoder 2 718 learned from the network. Further, the term X' represents the output of the autoencoder 420. In one or more implementations, X' represents a vector having a same dimensionality as the input to the input layer 702, e.g., a value for each colunm of the encoded basis matrix 412. Moreover, the autoencoder manager module 404 uses this output to train the autoencoder 420, e.g., by comparing the output values to the input values using one or more cost functions.” (Column 18, Lines 53-63, mapping the encoder layers reducing the dimensionality to a first dimensionality, which the decoder then projects back into a higher dimensionality of the original input)
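(Examiner’s Note: For illustration only, a projection layer mapping scalar values to higher-dimensional feature vectors, as recited in the limitation above, may be sketched as below. The dimensionality and the randomly initialized weights are hypothetical stand-ins for learned parameters and are not drawn from Modarresi’s disclosure.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A projection layer in the abstract: a weight matrix maps each scalar
# input (dimension 1) to a feature vector of dimension d.
d = 8                              # assumed embedding dimensionality
W = rng.normal(size=(1, d))        # stand-in for learned weights
b = np.zeros(d)                    # stand-in for learned bias

scalars = np.array([[0.2], [0.5], [0.9]])  # scalar feature space: shape (n, 1)
vectors = scalars @ W + b                  # high-dimensional space: shape (n, d)
```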
“and generating a prediction from the high-dimensionality tabular data set within the high-dimensional machine learning feature space utilizing the neural network.” Modarresi teaches “Moreover, the autoencoder manager module 404 uses this output to train the autoencoder 420, e.g., by comparing the output values to the input values using one or more cost functions.” (Column 18, Lines 60-63). See also Figure 9 for a completed matrix with predicted values imputed into the matrix.
In regards to claim 2: The present invention claims: “wherein generating the scalar transformed tabular data set further comprises utilizing a normalization layer to generate normalized feature vectors from the high-dimensional feature vectors within the high-dimensional machine learning feature space.” See above where the autoencoder of Modarresi takes the input data of one dimensionality (the higher, in this case) and normalizes it when making the transformed dataset before predictions are made for the incomplete data. (Columns 3 and 17).
In regards to claim 4: The present invention claims: “training the neural network by: determining a measure of loss by comparing the prediction to a training ground truth utilizing a loss function; and modifying parameters of the neural network utilizing the measure of loss.” Modarresi teaches “The recommendation provision system 106 can use any type of machine-learning techniques capable of generating recommendations or predicting analytics based on completed matrices. According to various implementations, such a machine-learning model uses supervised learning, unsupervised learning, or reinforcement learning. For example, the machine-learning model can include, but is not limited to, decision trees, support vector machines, linear regression, logistic regression, Bayesian networks, random forest learning, dimensionality reduction algorithms, boosting algorithms, artificial neural networks (e.g., fully-connected neural networks, deep convolutional neural networks, or recurrent neural networks), deep learning, etc. In any case, the recommendation provision system 106 may use machine-learning techniques to continually train and update the machine-learning model (or, in other words, to update a trained machine-learning model) to accurately determine recommendations and analytic predictions.” (Column 6, Lines 46-63) (Examiner’s Note: Given the generality of claim 4, the Examiner finds the cited passage of Modarresi sufficiently reads on utilization of ground truth data, loss functions, and backpropagation for updating model parameters as inherent to machine learning techniques. One of ordinary skill in the art at the time of the Applicant’s filing would have known of these techniques, and it would have been reasonable to use them in a system such as Modarresi’s).
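(Examiner’s Note: For illustration only, the generic train-by-loss pattern discussed above, in which a prediction is compared to a ground truth with a loss function and the resulting measure of loss is used to modify parameters, may be sketched with a single hypothetical parameter as below. All values are hypothetical and not drawn from the record.)

```python
# Gradient-descent training in miniature: compare a prediction to a
# ground truth with a loss function, then use the gradient of that loss
# to modify the model's (single, stand-in) parameter.
def mse_loss(pred, truth):
    return (pred - truth) ** 2

w = 0.5               # stand-in model parameter
x, truth = 2.0, 3.0   # hypothetical training example
lr = 0.1              # assumed learning rate

pred = w * x
loss = mse_loss(pred, truth)      # measure of loss
grad = 2.0 * (pred - truth) * x   # d(loss)/dw, computed by hand
w = w - lr * grad                 # modify parameters utilizing the loss
```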
In regards to claim 7: The present invention claims: “wherein replacing the placeholder tabular data values with the constant scalar value within the neural network value range comprises replacing the placeholder tabular data values with a transformed mean metric for the measured scalar tabular data values within the neural network value range.” While Modarresi does not explicitly use this method in their matrix completion, Modarresi does teach replacing empty or placeholder values with an attribute mean or similar metric (Column 3, Lines 12-20). This demonstrates that such a method was known in the art at the time of Modarresi’s filing, although Modarresi does highlight deficiencies with such a method.
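(Examiner’s Note: For illustration only, attribute-mean replacement of placeholder values, of the kind referenced in Modarresi at Column 3, may be sketched as below. The column values are hypothetical, and None stands in for a placeholder entry.)

```python
# Mean imputation: replace placeholder entries (represented here by
# None) in a column with the mean of that column's measured values.
def impute_mean(column):
    measured = [v for v in column if v is not None]
    mean = sum(measured) / len(measured)
    return [mean if v is None else v for v in column]

filled = impute_mean([1.0, None, 3.0, None])  # mean of measured values is 2.0
```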
In regards to claim 8: The present invention claims: “determining the placeholder tabular data values by identifying unmeasured, proxy tabular data values within the tabular data set.” Modarresi’s method is designed to impute predicted data into incomplete matrices with missing (unmeasured) values (Summary, Figure 3).
In regards to claim 9: The present invention claims: “wherein utilizing the neural network to generate the prediction comprises, utilizing the neural network to generate a predicted client disposition from client features corresponding to a client device participating in an automated client interaction, and further comprising, utilizing the predicted client disposition to generate an automated interaction response for the client device.” Modarresi teaches “By way of example, a video recommendation system may recommend digital videos to client device users, in part, by leveraging a data matrix. Such a matrix can represent the client device users and a catalog of digital videos and also include values indicative of user ratings of the digital videos. In this scenario it is unlikely, however, that each client device user will have viewed (and also rated) each digital video of the catalog. Based on this, at least some of the user ratings for the digital videos will be unknown.” (Column 1, Lines 12-21). Modarresi is generally directed towards making content recommendations based on incomplete data (mapping generally to using “a predicted client disposition” to “generate an automated interaction response”).
In regards to claims 10-11, 13, and 15: Claims 10-11, 13, and 15 recite similar limitations to claims 1-2, 4, and 7, with the exception of “A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause a computer system to:” of claim 10; therefore, both sets of claims are similarly rejected.
In regards to claims 16 and 18-20: Claims 16 and 18-20 recite similar limitations to claims 1, 4, 7, and 9, with the exception of “A system comprising: at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor, cause the system to:” of claim 16; therefore, both sets of claims are similarly rejected.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 3, 5, 12, 14, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Modarresi et al. (US 11,770,571 B2), hereinafter Modarresi; and Boyle (Searching For Phenotypes Of Sepsis: An Application Of Machine Learning To Electronic Health Records, 2019), hereinafter Boyle.
In regards to claim 3: While Modarresi teaches “The recommendation provision system 106 can use any type of machine-learning techniques capable of generating recommendations or predicting analytics based on completed matrices. According to various implementations, such a machine-learning model uses supervised learning, unsupervised learning, or reinforcement learning. For example, the machine-learning model can include, but is not limited to, decision trees, support vector machines, linear regression, logistic regression, Bayesian networks, random forest learning, dimensionality reduction algorithms, boosting algorithms, artificial neural networks (e.g., fully-connected neural networks, deep convolutional neural networks, or recurrent neural networks), deep learning, etc. In any case, the recommendation provision system 106 may use machine-learning techniques to continually train and update the machine-learning model (or, in other words, to update a trained machine-learning model) to accurately determine recommendations and analytic predictions.” (Column 6, Lines 46-63), which may reasonably include L2 Normalization, the Examiner finds the specificity of claim 3, “wherein utilizing the normalization layer to generate the normalized feature vectors comprises dividing the high-dimensional feature vectors within the high-dimensional machine learning feature space by L2 norms of the feature vectors.”, is beyond the scope of Modarresi’s generic use of machine learning techniques. However, Boyle, in a similar field of endeavor of data imputation, references L2 Normalization directly on Page 29 in finding optimal parameters and techniques for their own data imputation autoencoder.
Boyle demonstrates that, among the other known machine learning techniques listed by Modarresi, the use of L2 Normalization to normalize high-dimensional data in a system like Modarresi’s would have been a combination of known techniques reasonable to one of ordinary skill in the art at the time of the Applicant’s filing.
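(Examiner’s Note: For illustration only, dividing feature vectors by their L2 norms, as recited in claim 3 and referenced in Boyle, may be sketched as below. The feature vectors are hypothetical.)

```python
import numpy as np

# L2 normalization: divide each feature vector (row) by its L2 norm so
# that every vector has unit length.
def l2_normalize(X, eps=1e-12):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, eps)  # eps guards against zero vectors

X = np.array([[3.0, 4.0], [0.0, 2.0]])  # hypothetical feature vectors
U = l2_normalize(X)                     # each row now has norm 1
```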
In regards to claim 5: While Modarresi teaches “The recommendation provision system 106 can use any type of machine-learning techniques capable of generating recommendations or predicting analytics based on completed matrices. According to various implementations, such a machine-learning model uses supervised learning, unsupervised learning, or reinforcement learning. For example, the machine-learning model can include, but is not limited to, decision trees, support vector machines, linear regression, logistic regression, Bayesian networks, random forest learning, dimensionality reduction algorithms, boosting algorithms, artificial neural networks (e.g., fully-connected neural networks, deep convolutional neural networks, or recurrent neural networks), deep learning, etc. In any case, the recommendation provision system 106 may use machine-learning techniques to continually train and update the machine-learning model (or, in other words, to update a trained machine-learning model) to accurately determine recommendations and analytic predictions.” (Column 6, Lines 46-63), which may reasonably include dropout and L2 regularization, the Examiner finds the specificity of claim 5, “training the neural network utilizing dropout regularization and L2 regularization.”, is beyond the scope of Modarresi’s generic use of machine learning techniques. However, Boyle, in a similar field of endeavor of data imputation, references dropout and L2 regularization directly on Page 29 in finding optimal parameters and techniques for their own data imputation autoencoder.
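(Examiner’s Note: For illustration only, dropout regularization and an L2 weight-decay penalty, as recited in claim 5 and referenced in Boyle, may be sketched as below. The dropout rate, penalty coefficient, and sample values are hypothetical.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Dropout regularization: randomly zero a fraction p of activations
# during training; inverted dropout rescales the survivors by 1/(1-p)
# so the expected activation is unchanged.
def dropout(activations, p=0.5):
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

# L2 regularization: a weight-decay penalty added to the training loss.
def l2_penalty(weights, lam=0.01):
    return lam * np.sum(weights ** 2)

dropped = dropout(np.ones(10), p=0.5)                  # entries are 0.0 or 2.0
penalty = l2_penalty(np.array([1.0, -2.0]), lam=0.01)  # 0.01 * (1 + 4)
```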
In regards to claim 6: The present invention claims: “wherein transforming the measured scalar tabular data values to the neural network value range based on the distribution of the measured scalar tabular data values comprises: determining a mean of the measured scalar tabular data values and a deviation metric of the measured tabular data values; and transforming the measured scalar tabular data values to the neural network value range based on the mean and the deviation metric.” Boyle teaches “The data was then randomly split into a training (90%) and validation set (10%). One of the risks of training a machine learning model is overfitting the training data so that the model “memorizes” the training data but generalizes to new data poorly. To evaluate the model’s generalizability, which is also a proxy for the degree to which it is learning a meaningful latent representation of the input data, the model is trained on one set of data but evaluated on another (48).
After splitting, each variable was zero-centered and scaled to unit variance by subtracting the mean and dividing by the standard deviation. This is common practice because many machine learning estimators behave badly if individual features do not resemble normally distributed data.” (Page 27).
It would have been reasonable and obvious to one of ordinary skill in the art at the time of the Applicant’s filing to utilize the mean and standard deviation of the data, as Boyle suggests is beneficial and commonplace, in a system such as Modarresi’s.
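(Examiner’s Note: For illustration only, the zero-centering and unit-variance scaling Boyle describes may be sketched as below. The sample values are hypothetical, and the population standard deviation is assumed.)

```python
# Standardization: subtract the mean and divide by the standard
# deviation so the result is zero-centered with unit variance.
def standardize(values):
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n  # population variance
    std = variance ** 0.5
    return [(v - mean) / std for v in values]

z = standardize([2.0, 4.0, 6.0])  # hypothetical measured values
```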
In regards to claims 12 and 14: Claims 12 and 14 recite similar limitations to claims 3 and 5, with the exception of “A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause a computer system to:” inherited from claim 10; therefore, both sets of claims are similarly rejected.
In regards to claim 17: Claim 17 recites similar limitations to claim 3 with the exception of “A system comprising: at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor, cause the system to:” inherited from claim 16; therefore, both sets of claims are similarly rejected.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRIFFIN T BEAN whose telephone number is (703)756-1473. The examiner can normally be reached M - F 7:30 - 4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GRIFFIN TANNER BEAN/Examiner, Art Unit 2121
/Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121