Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the Amendment filed on 7/14/2025.
Claims 1-23 are pending in the case.
Drawings
The drawings received on 7/14/2025 are accepted.
Response to Arguments
Applicant's arguments and amendments with regard to the 35 U.S.C. § 101 rejection of claim(s) 9-16 and 21-23 have been fully considered and are persuasive. The 35 U.S.C. § 101 rejection of claim(s) 9-16 and 21-23 is respectfully withdrawn.
Applicant's arguments and amendments with regard to the 35 U.S.C. § 102 and 103 rejections of claim(s) 1-16 have been considered, but are not persuasive. Applicant argues that the cited references fail to teach the new limitations in the currently amended claims.
Applicant's arguments with respect to claim(s) 1-16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's arguments and amendments with regard to the 35 U.S.C. § 102 and 103 rejections of claim(s) 17-23 have been considered, but are not persuasive. Applicant argues that the amended claims are allowable due to the following:
Regarding claim(s) 17-23, applicant argues:
[Applicant's argument reproduced as a greyscale image, media_image1.png (395 x 627), in the Remarks.]
Examiner respectfully disagrees. Examiner asserts that:
Bianchi Sections 2.1 and 2.2 disclose that hidden states (intermediate) are calculated in successive layers, and
Bianchi Sections 2.1 and 2.2 disclose that an input feature matrix is determined (F-dimensional feature vectors for each vertex, row-wise grouping of feature vectors may be performed), and
Bianchi Section 2.2 discloses using forward propagations between layers and using feature vectors, an adjacency matrix and an edge matrix; intermediate states of nodes (initialized to zero) are iterated until convergence is achieved.
Therefore, Bianchi sufficiently teaches: determine a first intermediate state of a node associated with the deep equilibrium GNN based on an input state of the node, an adjacency matrix and an edge feature matrix; determine a second intermediate state of the node based on the first intermediate state and initial features associated with the node; determine a third intermediate state of the node based on the second intermediate state, the adjacency matrix and the edge feature matrix; and determine an equilibrium state of the node based on the third intermediate state and the first intermediate state.
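For illustration only, the claimed sequence of determinations follows the general pattern of a fixed-point (deep equilibrium) iteration. The following minimal numpy sketch is the examiner's hypothetical rendering of that pattern, not code from Bianchi; the tanh updates, the weight matrices W1-W3, and the elementwise folding of edge features into the adjacency matrix are illustrative assumptions.

    import numpy as np

    def equilibrium_state(A, E, X0, W1, W2, W3, tol=1e-6, max_iter=200):
        # A: adjacency matrix (N x N); E: edge feature matrix, folded here
        # elementwise into A as edge weights; X0: initial node features (N x F).
        Z = np.zeros_like(X0)                    # input state initialized to zeros (cf. claim 18)
        for _ in range(max_iter):
            Z1 = np.tanh((A * E) @ Z @ W1)       # first intermediate state: input state, A, E
            Z2 = np.tanh(Z1 @ W2 + X0)           # second intermediate state: Z1 plus initial features
            Z3 = np.tanh((A * E) @ Z2 @ W3)      # third intermediate state: Z2, A, E
            Z_new = 0.5 * (Z3 + Z1)              # equilibrium state from third and first states
            if np.linalg.norm(Z_new - Z) < tol:  # iterate until convergence (Bianchi Sec. 2.2)
                break
            Z = Z_new
        return Z

On this reading, the mapped portions of Bianchi Sections 2.1 and 2.2 (zero-initialized states iterated to convergence using the feature, adjacency and edge matrices) line up with the four "determine" steps recited in claim 17.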
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 9 each recite “conduct a reconstruction of the input vertex feature matrix during one or more backward propagations using only the outputs of the last block of the reversible GNN; and exclude the adjacency matrix and the edge feature matrix from the reconstruction”.
It is unclear what is excluded from the conducting of the reconstruction itself by "using only the outputs of the last block of the reversible GNN", rendering the claims indefinite.
Examiner further notes that the "exclude the adjacency matrix and the edge feature matrix from the reconstruction" limitation would be interpreted as excluding "the adjacency matrix and the edge feature matrix" from the reconstructed input vertex feature matrix, rather than from the process of reconstructing the input vertex feature matrix.
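The adopted reading can be made concrete with a hypothetical sketch (the toy linear blocks, function names and signatures are the examiner's illustrative assumptions, not from the claims or the cited art): the adjacency matrix and edge feature matrix are excluded from the reconstructed result, while the process consumes only the last block's outputs.

    import numpy as np

    def invert_block(y, W):
        # toy invertible block: the forward pass was x @ W, so the inverse is y @ W^{-1}
        return y @ np.linalg.inv(W)

    def reconstruct(last_block_outputs, weights):
        # Uses only the outputs of the last block (plus block weights); the
        # adjacency matrix and edge feature matrix never appear in the returned
        # value -- the "excluded from the reconstruction" reading adopted above.
        x = last_block_outputs
        for W in reversed(weights):
            x = invert_block(x, W)
        return x                                 # reconstructed input vertex feature matrix only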
Claim(s) 2-8 and 10-16 do not contain claim limitations that cure the indefiniteness of claim(s) 1 and 9 respectively, and therefore are also indefinite under 35 U.S.C. 112(b).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 17-23 are rejected under 35 U.S.C. 103 as being unpatentable over Bianchi et al., "Pyramidal Reservoir Graph Neural Network", dated 10 April 2021 and retrieved from https://arxiv.org/pdf/2104.04710, in view of Lee (US 20200285944 A1) and Ren (US 20210158127 A1).
Regarding claim 17, Bianchi teaches an ...apparatus... to train a ... deep equilibrium graph neural network (GNN) (Bianchi Abstract, Section 4- last paragraph, Section 6- experiment, GNN may be trained, experiments performed on hardware infrastructure of processor(s) and graphics processing unit(s) (GPU), Bianchi Note 1 on Page 3, GNN may be deep equilibrium GNN):
determine a first intermediate state of a node associated with the deep equilibrium GNN based on an input state of the node, an adjacency matrix and an edge feature matrix; determine a second intermediate state of the node based on the first intermediate state and initial features associated with the node; determine a third intermediate state of the node based on the second intermediate state, the adjacency matrix and the edge feature matrix; and determine an equilibrium state of the node based on the third intermediate state and the first intermediate state (Bianchi Sections 2.1 and 2.2, input feature matrix is determined (F-dimensional feature vectors for each vertex, row-wise grouping of feature vectors may be performed), Bianchi Section 2.2, using forward propagations between layers and using feature vectors, adjacency matrix and edge matrix- intermediate states of nodes (initialized to zero) are iterated until convergence is achieved, hidden states (intermediate) are calculated in successive layers).
Bianchi does not specifically teach a semiconductor apparatus to train a deep equilibrium graph neural network (GNN), the semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates.
However Lee teaches an ...apparatus to train a ... graph neural network (GNN), the ... apparatus comprising: one or more ... integrated circuits...; and logic coupled to the one or more integrated circuits, wherein the logic is implemented at least partly in ...configurable logic or ..., the logic coupled to the one or more integrated circuits to (Lee [32, 41, 146, 147] apparatus to train a GNN, implemented using instructions executed using integrated circuit(s)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have
substituted the generic apparatus... to train a ... deep equilibrium graph neural network of Bianchi,
with the apparatus to train a ... graph neural network (GNN), the ... apparatus comprising: one or more ... integrated circuits...; and logic coupled to the one or more integrated circuits, wherein the logic is implemented at least partly in ...configurable logic or ..., the logic coupled to the one or more integrated circuits taught by Lee,
to achieve the predictable result of an ...apparatus... to train a ... deep equilibrium graph neural network (GNN), the ... apparatus comprising: one or more ... integrated circuits...; and logic coupled to the one or more integrated circuits, wherein the logic is implemented at least partly in ...configurable logic or ..., the logic coupled to the one or more integrated circuits (Lee [32, 41, 146, 147]).
Bianchi and Lee do not specifically teach a semiconductor apparatus to train a deep equilibrium graph neural network (GNN), the semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates.
However Ren teaches a semiconductor apparatus to train a ... graph neural network (GNN), the semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates (Ren [7, 77, 78, 116, 117, 121] apparatus to train a GNN using processor(s) and GPU(s) which may be implemented as logic executed by modules implemented using substrate).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention,
to have substituted the apparatus to train a deep equilibrium graph neural network (GNN) comprising generic integrated circuit of Bianchi and Lee,
with the semiconductor apparatus to train a ... graph neural network (GNN), the semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates taught by Ren,
to achieve the predictable result of a semiconductor apparatus to train a deep equilibrium graph neural network (GNN), the semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates (Ren [7, 77, 78, 116, 117, 121]).
Regarding claim 18, Bianchi, Lee and Ren teach the invention as claimed in claim 17 above. Bianchi further teaches wherein the logic ... is to initialize the input state to zeroes for an initial iteration (Bianchi Section 2.2, using forward propagations between layers and using feature vectors, adjacency matrix and edge matrix- intermediate state of nodes (initialized to zero) are iterated until convergence is achieved).
Regarding claim 19, Bianchi, Lee and Ren teach the invention as claimed in claim 17 above. Claim 17 teaches a deep equilibrium GNN.
Bianchi further teaches wherein the logic ... is to share weights across two or more layers of a block of the deep equilibrium GNN (Bianchi Pg 4, second-to-last paragraph, coefficients (weights) may be shared across layers).
Regarding claim 20, Bianchi, Lee and Ren teach the invention as claimed in claim 17 above. Bianchi does not specifically teach wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.
However Ren teaches wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates (Ren [40] transistor channel regions may be used).
Claim 21 is directed towards a medium comprising instructions similar in scope to the functions performed by the apparatus of claim 17 and is rejected under the same rationale. Lee further teaches at least one non-transitory computer readable storage medium comprising a set of instructions to train a ... graph neural network (GNN), wherein when executed by a computing system, the set of instructions cause the computing system to (Lee [150]).
Claim(s) 22, 23 is/are dependent on claim 21 above, is/are directed towards a medium comprising instructions similar in scope to the functions performed by the apparatus of claim(s) 18, 19 respectively, and is/are rejected under the same rationale.
Claims 1-7 and 9-15 are rejected under 35 U.S.C. 103 as being unpatentable over Bianchi et al., "Pyramidal Reservoir Graph Neural Network", dated 10 April 2021 and retrieved from https://arxiv.org/pdf/2104.04710, in view of Lee (US 20200285944 A1), Ren (US 20210158127 A1) and Liu et al., "Graph Normalizing Flows", dated 30 May 2019 and retrieved from https://arxiv.org/pdf/1905.13177.
Liu was cited in the PTO-892 form dated 2/13/2025.
Regarding claim 1, Bianchi teaches an ...apparatus... to train a ... graph neural network (GNN) (Bianchi Abstract, Section 4- last paragraph, Section 6- experiment, GNN may be trained, experiments performed on hardware infrastructure of processor(s) and graphics processing unit(s) (GPU)):
partition an input vertex feature matrix into a plurality of groups (Bianchi Sections 2.1 and 2.2, input feature matrix is determined (F-dimensional feature vectors for each vertex, row-wise grouping of feature vectors may be performed));
generate, via a ... block of the ... GNN, outputs for the plurality of groups based on an adjacency matrix and an edge feature matrix, wherein the outputs are generated during one or more forward propagations (Bianchi Section 2.2, using forward propagations between layers and using feature vectors, adjacency matrix and edge matrix; graph embeddings may be computed).
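As a generic illustration of the mapped limitations (a minimal numpy sketch under the examiner's assumptions, not Bianchi's implementation; all shapes and the tanh propagation are placeholders), the partition and forward-propagation steps can be rendered as:

    import numpy as np

    N, F = 8, 4
    X = np.random.rand(N, F)                   # input vertex feature matrix
    A = (np.random.rand(N, N) > 0.5) * 1.0     # adjacency matrix
    E = np.random.rand(N, N)                   # edge feature matrix (as edge weights)
    W = np.random.rand(F, F)

    groups = np.array_split(np.arange(N), 2)   # partition vertices into a plurality of groups
    H = np.tanh((A * E) @ X @ W)               # one forward propagation over A, E and the features
    outputs = [H[g] for g in groups]           # block outputs, reported group-wise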
Bianchi does not specifically teach a semiconductor apparatus to train a reversible graph neural network (GNN), the semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to: ... generate, via a last block of the reversible GNN, outputs ...during one or more forward propagations; conduct a reconstruction of the input vertex feature matrix during one or more backward propagations using only the outputs of the last block of the reversible GNN.
However Lee teaches an apparatus to train a ... graph neural network (GNN), the ... apparatus comprising: one or more ... integrated circuits...; and logic coupled to the one or more integrated circuits, wherein the logic is implemented at least partly in ...configurable logic or ..., the logic coupled to the one or more integrated circuits to (Lee [32, 41, 146, 147] apparatus to train ... GNN, implemented using instructions executed using integrated circuit(s)):
conduct a reconstruction of the input vertex feature matrix during one or more backward propagations (Lee [33, 49, 143, 144] based on adjacency matrix- input vertex feature matrix (feature matrix) may be constructed); and
exclude the adjacency matrix and the edge feature matrix from the reconstruction (Lee [143, 144] constructed input vertex feature matrix is separate from adjacency matrix and the edge feature matrix).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have incorporated the concept taught by Lee of an apparatus to train a ... graph neural network (GNN), the ... apparatus comprising: one or more ... integrated circuits...; and logic coupled to the one or more integrated circuits, wherein the logic is implemented at least partly in ...configurable logic or ..., the logic coupled to the one or more integrated circuits to: conduct a reconstruction of the input vertex feature matrix during one or more backward propagations; and exclude the adjacency matrix and the edge feature matrix from the reconstruction, into the invention suggested by Bianchi; since both inventions are directed towards using GNNs to analyze graphs using a feature matrix, an adjacency matrix and an edge matrix, and incorporating the teaching of Lee into the invention suggested by Bianchi would provide the added advantage of allowing features to be added for nodes in the feature matrix, and the combination would perform with a reasonable expectation of success (Lee [32, 41, 143, 144, 146, 147]).
Bianchi and Lee do not specifically teach a semiconductor apparatus to train a ... graph neural network (GNN), the semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is ... coupled to the one or more substrates to: generate, via a last block of the reversible GNN, outputs ...during one or more forward propagations; conduct a reconstruction of the input vertex feature matrix during one or more backward propagations using only the outputs of the last block of the reversible GNN.
However Ren teaches a semiconductor apparatus to train a ... graph neural network (GNN), the semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in ...configurable logic ..., the logic coupled to the one or more substrates to perform functions (Ren [7, 77, 78, 116, 117, 121] apparatus to train a GNN using processor(s) and GPU(s) which may be implemented as logic executed by modules implemented using substrate).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention,
to have substituted the apparatus to train a ... graph neural network (GNN) comprising generic integrated circuit of Bianchi and Lee,
with the semiconductor apparatus to train a ... graph neural network (GNN), the semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to processor(s) embodied on one or more substrates taught by Ren,
to achieve the predictable result of a semiconductor apparatus to train a ... graph neural network (GNN), the semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates (Ren [7, 77, 78, 116, 117, 121]).
Bianchi, Lee and Ren do not specifically teach generate, via a last block of the reversible GNN, outputs ...during one or more forward propagations; conduct a reconstruction of the input vertex feature matrix during one or more backward propagations using only the outputs of the last block of the reversible GNN.
However Liu teaches generate, via a last block of the reversible GNN, outputs for the plurality of groups based on an adjacency matrix and an edge feature matrix, wherein the outputs are generated during one or more forward propagations; conduct a reconstruction of the input vertex ...feature information... during one or more backward propagations using only the outputs of the last block of the reversible GNN (Liu Intro, Secs 3.1-3.3, Fig. 1a, reversible GNN higher layers (last block) generate outputs based on adjacency matrix, edge feature matrix, and neighboring nodes (groups); using back propagation and higher layers' outputs (last block), input vertex ...feature information... may be reconstructed, and only O(#nodes) states need to be stored, thereby saving memory).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have incorporated the concept taught by Liu of: generate, via a last block of the reversible GNN, outputs for the plurality of groups based on an adjacency matrix and an edge feature matrix, wherein the outputs are generated during one or more forward propagations; conduct a reconstruction of the input vertex ...feature information... during one or more backward propagations using only the outputs of the last block of the reversible GNN, into the invention suggested by Bianchi, Lee and Ren; since both inventions are directed towards using GNNs to analyze graphs using a feature matrix, an adjacency matrix and an edge matrix, and incorporating the teaching of Liu into the invention suggested by Bianchi, Lee and Ren would provide the added advantage of saving memory by needing to store only O(#nodes) states, and the combination would perform with a reasonable expectation of success (Liu Intro, Secs 3.1-3.3, Fig. 1a).
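For context on the mapped memory advantage, reversibility of the kind Liu describes can be illustrated with a minimal additive-coupling sketch (the examiner's generic illustration, not Liu's code; the update F and all shapes are assumptions):

    import numpy as np

    def F(h, A, E, W):
        # hypothetical message-passing update over adjacency and edge features
        return np.tanh((A * E) @ h @ W)

    def block_forward(x1, x2, A, E, W):
        y1 = x1 + F(x2, A, E, W)               # additive coupling: update half the features at a time
        y2 = x2 + F(y1, A, E, W)
        return y1, y2                          # only these outputs need to be stored

    def block_inverse(y1, y2, A, E, W):
        x2 = y2 - F(y1, A, E, W)               # inputs recovered exactly from the outputs,
        x1 = y1 - F(x2, A, E, W)               # so per-layer activations need not be cached:
        return x1, x2                          # memory stays O(#nodes), independent of depth

Note that inverting such a coupling still consults A and E during the process; this is consistent with the interpretation set forth in the § 112 discussion above, i.e., that the "exclude" limitation is directed to the reconstructed matrix rather than to the reconstruction process.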
Regarding claim 2, Bianchi, Lee, Ren and Liu teach the invention as claimed in claim 1 above. Claim 1 teaches a reversible GNN. Bianchi further teaches wherein the logic coupled to the one or more substrates is to share weights across two or more layers of the block of the ... GNN (Bianchi Pg 4, second-to-last paragraph, coefficients (weights) may be shared across layers).
Regarding claim 3, Bianchi, Lee, Ren and Liu teach the invention as claimed in claim 2 above. Bianchi further teaches wherein the weights are shared in a group-wise manner (Bianchi Section 2.2, coefficients (weights) may be shared across layers, coefficients may be based on grouping).
Regarding claim 4, Bianchi, Lee, Ren and Liu teach the invention as claimed in claim 1 above. Claim 1 teaches a reversible GNN. Bianchi does not specifically teach embed one or more normalized layers in the block of the reversible GNN; and embed one or more drop out layers in the block of the reversible GNN.
However,
Ren teaches embed one or more normalized layers in the block of the ... GNN (Ren [7, 33, 34] layers in the GNN may be normalized); and
Lee teaches embed one or more drop out layers in the block of the ... GNN (Lee [46] drop-out layers may be embedded in GNN).
Regarding claim 5, Bianchi, Lee, Ren and Liu teach the invention as claimed in claim 4 above. Bianchi does not specifically teach wherein the logic coupled to the one or more substrates is to share a drop out pattern across two or more of the drop out layers.
However Lee teaches wherein the logic coupled to the one or more substrates is to share a drop out pattern across two or more of the drop out layers (Lee [116, 136] drop-out layers may be used for multiple layers, drop-out values may be based on optimization results and used for multiple layers).
Regarding claim 6, Bianchi, Lee, Ren and Liu teach the invention as claimed in claim 1 above. Bianchi does not specifically teach wherein the outputs are computed for the plurality of groups in parallel.
However Lee teaches wherein the outputs are computed for the plurality of groups in parallel (Lee [158] processes may be performed in parallel).
Regarding claim 7, Bianchi, Lee, Ren and Liu teach the invention as claimed in claim 1 above. Bianchi does not specifically teach wherein a memory complexity of the forward propagation and the backward propagation is independent of a number of layers in the block of the reversible GNN.
However Liu teaches wherein a memory complexity of the forward propagation and the backward propagation is independent of a number of layers in the block of the reversible GNN (Liu Intro memory complexity is O(#nodes), since only O(#nodes) states need to be stored).
Regarding claim 8, Bianchi, Lee, Ren and Liu teach the invention as claimed in claim 1 above. Bianchi does not specifically teach wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.
However Ren teaches wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates (Ren [40] transistor channel regions may be used).
Claim 9 is directed towards a medium comprising instructions similar in scope to the functions performed by the apparatus of claim 1, and is rejected under the same rationale. Lee further teaches at least one non-transitory computer readable storage medium comprising a set of instructions to train a ... graph neural network (GNN), wherein when executed by a computing system, the set of instructions cause the computing system to (Lee [150]).
Claim(s) 10, 11, 12, 13, 14 and 15 is/are dependent on claim 9 above, is/are directed towards a medium comprising instructions similar in scope to the functions performed by the apparatus of claim(s) 2, 3, 4, 5, 6 and 7 respectively, and is/are rejected under the same rationale.
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bianchi in view of Lee (US 20200285944 A1) and Ren (US 20210158127 A1), and further in view of Bai (US 20210042606 A1).
Regarding claim 8, Bianchi, Lee and Ren teach the invention as claimed in claim 1 above. Claim 1 teaches a reversible GNN.
Bianchi does not specifically teach wherein a memory complexity of the forward propagation and the backward propagation is independent of a number of layers in the block of the reversible GNN.
However Bai teaches wherein a memory complexity of the forward propagation and the backward propagation is independent of a number of layers in the block of the ...neural network (Bai [3, 9, 14, 139, 173] using the invention makes the memory complexity independent of the number of layers).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have incorporated the concept taught by Bai of wherein a memory complexity of the forward propagation and the backward propagation is independent of a number of layers in the block of the ...neural network..., into the invention suggested by Bianchi, Lee and Ren; since both inventions are directed towards training neural networks with multiple layers, and incorporating the teaching of Bai into the invention suggested by Bianchi, Lee and Ren would provide the added advantage of allowing a neural network to be trained with multiple layers while using limited memory, and the combination would perform with a reasonable expectation of success (Bai [3, 9, 14, 139, 173]).
Claim(s) 16 is/are dependent on claim 9 above, is/are directed towards a medium comprising instructions similar in scope to the functions performed by the apparatus of claim(s) 8, and is/are rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gu et al., "Implicit Graph Neural Networks", dated 1 June 2021 and retrieved from https://arxiv.org/pdf/2009.06211, discloses capturing dependencies in GNNs based on the solution of a fixed-point equilibrium.
Kipf et al., "Semi-Supervised Classification with Graph Convolutional Networks", retrieved from https://openreview.net/pdf?id=SJU4ayYgl and published as a conference paper at ICLR 2017, discloses using a localized first-order approximation of spectral graph convolutions.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANCHITA ROY whose telephone number is (571)272-5310. The examiner can normally be reached Monday-Friday 12-8.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SANCHITA ROY/Primary Examiner, Art Unit 2146