DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1 and 9 are objected to because of the following informalities: a minor typographical error.
Claims 1 and 9 recite “rule-”; it appears “rule-based” is intended.
Claims 1-15 are objected to because of the following informalities: minor typographical errors.
Claim 1 recites “computer-implemented method”; it appears “a computer-implemented method” is intended.
Claims 2-8 recite “Method”; it appears “the computer-implemented method” is intended.
Claim 9 recites “System”; it appears “a system” is intended.
Claims 10-15 recite “System”; it appears “the system” is intended.
Claim 3 is objected to because of the following informality: an indefinite alternative dependency.
Claim 3 recites “method according to claim 1 or 2”. Please clarify whether the claim depends from claim 1 or from claim 2.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
When considering subject matter eligibility under 35 U.S.C. 101, it must be
determined whether the claim is directed to one of the four statutory categories of
invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the
claim does fall within one of the statutory categories, the second step in the analysis is
to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A
analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined
whether or not the claims recite a judicial exception (e.g., mathematical concepts,
mental processes, certain methods of organizing human activity). If it is determined in
Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the
second prong (Step 2A, Prong 2), where it is determined whether or not the claims
integrate the judicial exception into a practical application. If it is determined at step 2A,
Prong 2 that the claims do not integrate the judicial exception into a practical
application, the analysis proceeds to determining whether the claim is a patent-eligible
application of the exception (Step 2B). If an abstract idea is present in the claim, any
element or combination of elements in the claim must be sufficient to ensure that the
claim integrates the judicial exception into a practical application, or else amounts to
significantly more than the abstract idea itself. Applicant is advised to consult the 2019
PEG for more details of the analysis.
Step 1
According to the first part of the analysis, in the instant case, claims 1-8 are directed to a method of trajectory prediction and claims 9-15 are directed to a system for trajectory prediction. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Step 2A, Prong 1
Following the determination of whether or not the claims fall within one of the four
categories (Step 1), it must be determined if the claims recite a judicial exception (e.g.
mathematical concepts, mental processes, certain methods of organizing human
activity) (Step 2A, Prong 1). In this case, the claims are determined to recite a judicial
exception as explained below.
Regarding claims 1 and 9, these claims recite:
- receiving first input information, the first input information being time-dependent numerical information;
- receiving second input information, the second input information being rule- or knowledge-based information including one or more trajectory prediction information;
- processing the second input information by using an auto-encoder, the auto-encoder being configured to encode the second input information by extracting features from the second input information, thereby obtaining encoded second input information;
- providing the encoded second input information to a fusion network, the fusion network providing transformed information which is obtained by transforming encoded second input information according to properties of the main neural network;
- providing the first input information and the transformed information to the main neural network, the main neural network fusing the first input information and the transformed information in order to provide a trajectory prediction based on the first input information and the transformed information; and
- outputting the trajectory prediction.
The claims recite a mental process. As set forth in MPEP 2106.04(a)(2)(III)(C), “Claims can recite a mental process even if they are claimed as being performed on a computer”. These steps are recited at a high level of generality and are disclosed as a human user performing these functions, simply using a computer as a tool; see specification, page 12, line 20 to page 16, line 5, and Figs. 1A and 1B. Thus, the claims recite an abstract idea.
Step 2A, Prong 2
Following the determination that the claims recite a judicial exception, it must be
determined if the claims recite additional elements that integrate the exception into a
practical application of the exception (Step 2A, Prong 2). In this case, after considering
all claim elements individually and as an ordered combination, it is determined that the
claims do not include additional elements that integrate the exception into a practical
application of the exception as explained below.
In Prong Two, a claim is evaluated as a whole to determine whether the recited judicial exception is integrated into a practical application of that exception. A claim is not “directed to” a judicial exception, and thus is patent eligible, if the claim as a whole integrates the recited judicial exception into a practical application of that exception. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. MPEP 2106.04(d). Here, the claims recite an abstract idea, and the claims as a whole do not integrate the recited judicial exception into a practical application of the exception.
Regarding claims 1 and 9, these claims recite using one or more neural networks as a tool to perform an abstract idea, which is not indicative of integration into a practical application. MPEP 2106.05(f).
Step 2B
Based on the determination in Step 2A of the analysis that the claims are
directed to a judicial exception, it must be determined if the claims contain any element
or combination of elements sufficient to ensure that the claim amounts to significantly
more than the judicial exception (Step 2B). In this case, after considering all claim
elements individually and as an ordered combination, it is determined that the claims do
not include additional elements that are sufficient to amount to significantly more than
the judicial exception for the same reasons given above in the Step 2A, Prong 2
analysis. Furthermore, each additional element identified above as insignificant
extra-solution activity is also well-understood, routine, and conventional, as described below.
Claims 1 and 9: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components and a field of use/technological environment, which do not amount to significantly more than the abstract idea. The underlying concept merely receives information, analyzes it, and stores the results of the analysis; this concept is not meaningfully different from concepts found by the courts to be abstract (see Electric Power Group, collecting information, analyzing it, and displaying certain results of the collection and analysis; see CyberSource, obtaining and comparing intangible data; see Digitech, organizing information through mathematical correlations; see Grams, diagnosing an abnormal condition by performing clinical tests and thinking about the results; see Cyberfone, using categories to organize, store, and transmit information; see SmartGene, comparing new and stored information and using rules to identify options). For example, claim 1 recites the additional elements of “receiving…” first input information, “receiving…” second input information, “processing…” input information, “providing…” information to a fusion network, “providing…” information to the main neural network, and “outputting…” the prediction. These elements are recited at a high level of generality and are well-understood, routine, and conventional activities in the computer art. Generic computers performing generic computer functions, without an inventive concept, do not amount to significantly more than the abstract idea.
Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims do not amount to significantly more than the abstract idea itself.
Step 2A/2B Prong 2 Dependent Claims
Regarding claim 2
Claim 2 merely recites additional elements that define the auto-encoder, which performs generic functions. Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, this claim also does not amount to significantly more than the abstract idea itself. This claim is not patent eligible.
Regarding claims 3 and 10
Claims 3 and 10 merely recite additional elements that define the fusion network, which performs generic functions. Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 4 and 11
Claims 4 and 11 merely recite additional elements that define the fusion network adapting dimensionality, which performs generic functions. Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 5 and 12
Claims 5 and 12 merely recite additional elements that further define the fusion network, which performs generic functions. Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 6 and 13
Claims 6 and 13 merely recite additional elements that define the transformed information, which involves generic functions. Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 7 and 14
Claims 7 and 14 merely recite additional elements that define concatenating the transformed information with features of the hidden layer, which is a generic function. Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding claims 8 and 15
Claims 8 and 15 merely recite additional elements that define increasing the dimension of the hidden layer, which is a generic function. Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 9, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Shun-Feng Su et al. (Su), “Neural Network Based Fusion of Global and Local Information in Predicting Time Series,” published in SMC'03 Conference Proceedings, 2003 IEEE International Conference on Systems, Man and Cybernetics, Conference Theme - System Security and Assurance (Cat. No. 03CH37483), vol. 5, pages 4445-4450, Oct. 2003, DOI: 10.1109/ICSMC.2003.1245684, ISBN 0-7803-7952-7, in view of Choi, US 2021/0035310.
In regard to claim 1, Su discloses a computer-implemented method for trajectory prediction based on a main neural network, the method comprising: (page 4445, Abstract; pages 4448-4449, section 3.3, “Neural network fusion”: a trajectory prediction method based on a neural network)
- receiving first input information, the first input information being time-dependent numerical information; (pages 4447-4449, section 3, “Study of Fusing Approaches,” and section 3.3, “Neural network fusion”: obtaining “global information,” which is time-series data)
- receiving second input information, the second input information being rule- or knowledge-based information including one or more trajectory prediction information; (pages 4447-4449, section 3, “Study of Fusing Approaches,” sections 3.3, “Neural network fusion,” and 3.4, “SONFIN Fusion”: obtaining “local information”; the local information is obtained from the “local prediction,” which is knowledge-based information and includes the prediction)
- providing the first input information and the transformed information to the main neural network, the main neural network fusing the first input information and the transformed information in order to provide a trajectory prediction based on the first input information and the transformed information; (pages 4447-4449, section 3.3: “In this approach, we simply include a FGM result [i.e., the second input information] as another input in the neural network. There are several questions arising in performing such a fusion approach in neural networks. First, there are 8 input points for global data [i.e., the first input information] but only one input point data for local information [i.e., the second input information].” The two inputs are provided to the neural network, and the neural network fuses them to provide the prediction) and
- outputting the trajectory prediction. (pages 4447-4449, section 3.3, “Neural network fusion”: outputting the prediction results)
But Su fails to explicitly disclose “processing the second input information by using an auto-encoder, the auto-encoder being configured to encode the second input information by extracting features from the second input information, thereby obtaining encoded second input information; providing the encoded second input information to a fusion network, the fusion network providing transformed information which is obtained by transforming encoded second input information according to properties of the main neural network.”
Choi discloses processing the second input information by using an auto-encoder, the auto-encoder being configured to encode the second input information by extracting features from the second input information, thereby obtaining encoded second input information; (Figs. 3-5, [0031]-[0047]: processing the input information by an encoder to extract features and obtain encoded information. Note: please further define the first and second input information to help move the prosecution forward.)
- providing the encoded second input information to a fusion network, the fusion network providing transformed information which is obtained by transforming encoded second input information according to properties of the main neural network. (Figs. 3-5, [0039]-[0047]: the encoded input is provided to 136 (relation encoder), which provides transformed information to 138 based on “(i) relational inference to encode relational interactions of vehicles using a relational graph, (ii) intention estimation to compute the probability distribution of intentional goals based on the inferred relations from the perceptual context”)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Choi’s method of trajectory prediction into Su’s invention, as they are related to the same field of endeavor of prediction methods. The motivation to combine these references is, at least, that Choi’s use of an encoder to extract features would provide a feature extraction method for Su’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing feature extraction for the input would help to improve the accuracy of trajectory prediction.
In regard to claim 6, Su and Choi disclose the method according to claim 1.
But Su fails to explicitly disclose “wherein the transformed information is concatenated with features of a certain hidden layer of main neural network.”
Choi discloses wherein the transformed information is concatenated with features of a certain hidden layer of the main neural network. (Figs. 3-5, [0039]-[0054], [0077]: the transformed information is concatenated with features of a layer of the neural network)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Choi’s method of trajectory prediction into Su’s invention, as they are related to the same field of endeavor of prediction methods. The motivation to combine these references is, at least, that Choi’s concatenation of features would provide a feature identification method for Su’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing feature identification by concatenating features would help to improve the accuracy of trajectory prediction.
In regard to claims 9 and 13: claims 9 and 13 are system claims corresponding to method claims 1 and 6 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 1 and 6.
Claims 2-5 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Shun-Feng Su et al. (Su), “Neural Network Based Fusion of Global and Local Information in Predicting Time Series,” published in SMC'03 Conference Proceedings, 2003 IEEE International Conference on Systems, Man and Cybernetics, Conference Theme - System Security and Assurance (Cat. No. 03CH37483), vol. 5, pages 4445-4450, Oct. 2003, DOI: 10.1109/ICSMC.2003.1245684, ISBN 0-7803-7952-7, and Choi, US 2021/0035310, as applied to claim 1 above, and further in view of Bhattacharyya et al. (Bhattacharyya), US 2021/0019619.
In regard to claim 2, Su and Choi disclose the method according to claim 1.
But Su and Choi fail to explicitly disclose “wherein the auto-encoder comprises an encoder portion which maps the second input information to a latent feature space comprising lower dimensionality than the second input information.”
Bhattacharyya discloses wherein the auto-encoder comprises an encoder portion which maps the second input information to a latent feature space comprising lower dimensionality than the second input information. ([0008]-[0009], [0094]-[0110], [0132]-[0140]: the encoder maps a prediction target in a target space to a latent representation in a latent space, and the latent space has a lower dimension than the target space)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bhattacharyya’s method of trajectory prediction into Choi and Su’s invention, as they are related to the same field of endeavor of prediction methods. The motivation to combine these references is, at least, that Bhattacharyya’s conversion of the input to a latent space would help to simplify the input in Choi and Su’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that simplifying the input by converting it to a latent space would help to improve the performance of trajectory prediction.
In regard to claim 3, Su, Choi, and Bhattacharyya disclose the method according to claim 1 or 2.
But Su and Choi fail to explicitly disclose “wherein the fusion network adapts a dimensionality of a feature vector provided by the auto-encoder to a dimensionality of a certain hidden layer of the main neural network.”
Bhattacharyya discloses wherein the fusion network adapts a dimensionality of a feature vector provided by the auto-encoder to a dimensionality of a certain hidden layer of the main neural network. (Figs. 3-5, [0031]-[0047], [0099]-[0116], [0127]-[0134]: the neural network adjusts the dimension of the feature vector provided by the encoder to the dimension of the hidden layer of the neural network)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bhattacharyya’s method of trajectory prediction into Choi and Su’s invention, as they are related to the same field of endeavor of prediction methods. The motivation to combine these references is, at least, that Bhattacharyya’s conversion of the input to a latent space would help to simplify the input in Choi and Su’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that simplifying the input by converting it to a latent space would help to improve the performance of trajectory prediction.
In regard to claim 4, Su, Choi, and Bhattacharyya disclose the method according to claim 3.
But Su and Choi fail to explicitly disclose “wherein adapting the dimensionality comprises transforming at least one dimension of feature vectors provided by the auto-encoder to at least one dimension of a vector space of the certain hidden layer such that at least one dimension of transformed information is equal to at least one dimension of the vector space of the certain hidden layer.”
Bhattacharyya discloses wherein adapting the dimensionality comprises transforming at least one dimension of feature vectors provided by the auto-encoder to at least one dimension of a vector space of the certain hidden layer such that at least one dimension of transformed information is equal to at least one dimension of the vector space of the certain hidden layer. (Figs. 3-5, [0031]-[0047], [0099]-[0116], [0126]-[0134]: normalizing the flow by converting the dimension of the latent vector provided by the encoder to the dimension of the base space of the hidden layer such that the dimensions are the same)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bhattacharyya’s method of trajectory prediction into Choi and Su’s invention, as they are related to the same field of endeavor of prediction methods. The motivation to combine these references is, at least, that Bhattacharyya’s conversion of the input to a vector space matching the vector space of the hidden layer of the neural network would help to normalize the input in Choi and Su’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that normalizing the input would help to improve the performance of trajectory prediction.
In regard to claim 5, Su and Choi disclose the method according to claim 1.
But Su and Choi fail to explicitly disclose “wherein the fusion network projects the encoded second input information provided by the auto-encoder into a latent subspace of the main neural network.”
Bhattacharyya discloses wherein the fusion network projects the encoded second input information provided by the auto-encoder into a latent subspace of the main neural network. (Figs. 3-5, [0031]-[0047], [0099]-[0116], [0126]-[0134], [0184], [0191]-[0197]: encoded information is projected to 2D using t-SNE (t-distributed stochastic neighbor embedding), followed by an estimation of a latent representation)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bhattacharyya’s method of trajectory prediction into Choi and Su’s invention, as they are related to the same field of endeavor of prediction methods. The motivation to combine these references is, at least, that Bhattacharyya’s estimation of the latent space would help to normalize the input in Choi and Su’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that normalizing the input by estimating the latent space would help to improve the performance of trajectory prediction.
In regard to claims 10-12, claims 10-12 are system claims corresponding to the method claims 3-5 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 3-5.
Claims 7-8 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Shun-Feng Su et al. (Su), “Neural Network Based Fusion of Global and Local Information in Predicting Time Series,” published in SMC'03 Conference Proceedings, 2003 IEEE International Conference on Systems, Man and Cybernetics, Conference Theme - System Security and Assurance (Cat. No. 03CH37483), vol. 5, pages 4445-4450, Oct. 2003, DOI: 10.1109/ICSMC.2003.1245684, ISBN 0-7803-7952-7, and Choi, US 2021/0035310, as applied to claim 1 above, and further in view of Andrews et al. (Andrews), US 2022/0014398.
In regard to claim 7, Su and Choi disclose the method according to claim 6.
But Su and Choi fail to explicitly disclose “wherein concatenating the transformed information with the features of the certain hidden layer comprises increasing a dimensionality of vector space of a hidden layer.”
Andrews discloses wherein concatenating the transformed information with the features of the certain hidden layer comprises increasing a dimensionality of vector space of a hidden layer. ([0029]-[0030], [0034]-[0038], [0062]-[0063]: concatenating the information with the features includes increasing the dimension of a vector space of a hidden layer)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Andrews’s method of information reception via deep learning into Choi and Su’s invention, as they are related to the same field of endeavor of data detection methods. The motivation to combine these references is, at least, that Andrews’s data detection with an increased dimension of the hidden layer would help to reduce the bit error rate in Choi and Su’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that reducing the bit error rate by increasing the dimension of the hidden layer would improve the performance of data detection.
In regard to claim 8, Su, Choi, and Andrews disclose the method according to claim 7.
But Su and Andrews fail to explicitly disclose “wherein the dimensionality is increased such that the vector space of the hidden layer in which the transformed information is projected is a sum of dimensionality of features resulting from the first input information and resulting from the transformed information.”
Choi discloses wherein the dimensionality is increased such that the vector space of the hidden layer in which the transformed information is projected is a sum of dimensionality of features resulting from the first input information and resulting from the transformed information. (Figs. 3-5, [0003]-[0007], [0034]-[0054], [0077]: an element-wise sum is performed to produce the representation, which is the vector space of the hidden layer into which the transformed information is projected; the estimation result is generated from the feature extractor result and the interaction result)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Choi’s method of trajectory prediction into Andrews and Su’s invention, as they are related to the same field of endeavor of prediction methods. The motivation to combine these references is, at least, that Choi’s summation of feature vectors would help to provide a feature vector space in Andrews and Su’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing a feature vector space by adding feature vectors would help to improve the accuracy of trajectory prediction.
In regard to claims 14-15, claims 14-15 are system claims corresponding to the method claims 7-8 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 7-8.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.
U.S. Patent Documents:
US 2021/0097266 A1, published 2021-04-01, Mangalam et al., “Disentangling Human Dynamics for Pedestrian Locomotion Forecasting with Noisy Supervision.”
Mangalam et al. disclose a method for predicting spatial positions of several key points on a human body in the near future in an egocentric setting. The method includes generating a frame-level supervision for human poses. The method also includes suppressing noise and filling missing joints of the human body using a pose completion module. The method further includes splitting the poses into a global stream and a local stream. Furthermore, the method includes combining the global stream and the local stream to forecast future human locomotion… (see Abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XUYANG XIA whose telephone number is (571)270-3045. The examiner can normally be reached Monday-Friday 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
XUYANG XIA
Primary Examiner
Art Unit 2143
/XUYANG XIA/Primary Examiner, Art Unit 2143