DETAILED ACTION
This Office Action is in response to the communication filed on 2/25/2026.
Claim 5 has been cancelled.
Claims 1-4 and 6-18 are pending.
Claims 1-4 and 6-18 are rejected.
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/09/2026 has been entered.
The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner’s Notes
The last limitation of claims 1 and 11 recites “and providing, by the central aggregator, the encrypted global model.” This is broadly interpreted as “providing … a model,” which is very broad. For example, no transmission of the model is required for the central aggregator to provide the encrypted global model, because the central aggregator generates the encrypted global model. Examiner recommends listing the intended recipient.
Additionally, “the encrypted global model” is first introduced in the preamble as “an encrypted global model,” which Examiner interprets as intent to rely on the preamble for antecedent basis purposes. This has the effect of making the preamble limiting; if this was not Applicant’s intent, please make appropriate corrections.
Response to Arguments
Applicant's arguments filed 2/25/2026 have been fully considered but they are not persuasive. On pages 9-10 of the Remarks, Applicant argues that neither Zheng nor Sav discloses aggregating encrypted client data. Examiner disagrees. Zheng discloses aggregating model parameters; that the model parameters correspond to the data party; and that the model parameters can be encrypted. Therefore Zheng discloses aggregating (“aggregate model parameters by using a secure and trusted method”) encrypted (“data uploaded by the data party to the serving party can be further encrypted in a pre-agreed method, such as homomorphic encryption”) client data (“local data”).
In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Sav provides a motivation to use matrix multiplication for the purpose of optimization and to allow the alternate packing (AP) approach.
On pages 10-11 of the Remarks, Applicant asserts that in the Advisory Action mailed 2/2/2026 Examiner admitted that the features of claims 6 and 7 are not disclosed in the prior art. The statements in the Advisory Action indicated that the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).
Applicant's arguments on page 11 of the Remarks with respect to claim 6 have been considered and are now moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's arguments on page 11 of the Remarks with respect to claim 7 have been considered and are now moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Specifically, Sav is cited as disclosing the claimed features.
Applicant argues that the dependent claims are allowable based on their dependency. Examiner does not find the dependent claims to be allowable because the independent claims are not allowable.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6 and 8-17 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng (U.S. 20240037252, based on priority from PCT/CN2022/085876 and/or CN202110390904.x), in view of Sav (U.S. 20230325529).
Regarding claim 1,
Zheng discloses: A computer-implemented method for providing an (Zheng [0005-0024] methods for jointly updating a service model)
providing, by a central aggregator, an initial global model to a first client; (Zheng [Fig. 2-202, 0005-0024, 0032-0036, 0050-0052] teaches providing, by a serving party (central aggregator), a current (initial) global model to multiple data parties (clients)).
training, by the first client, the initial global model based on first training data, to generate a first client trained model; (Zheng [Fig. 2-203, 0005-0024, 0032-0036, 0053-0060] teaches each data party using the global model parameters to generate a new local service model; [0054] For a single data party, a full update phase can be a phase of fully updating model parameters in a process of training a local service model by using local service data; [0070] update an updated local service model based on local service data to obtain a new local service model).
determining, by the first client, first client data based on the first client trained model, wherein (Zheng [Fig. 2-203, 0005-0024, 0032-0036, 0053-0060] teaches each data party using the global model parameters to generate a new local service model; [0061-0064] teaches parameter groups/sets which are being determined by the data party and sent to the serving party to update the global model; [0070] and further update an updated local service model based on local service data to obtain a new local service model, to upload model parameters in a parameter group corresponding to the data party to the serving party).
homomorphically encrypting, by the first client, the first client data based on a first encryption key, homomorphically encrypting of the first client data is based on (Zheng [0061] the data uploaded by the data party to the serving party can be further encrypted in a pre-agreed method, such as homomorphic encryption or secret sharing, to further protect data privacy)
sending, by the first client, the encrypted first client data to the central aggregator; (Zheng [Fig. 2-203, 0005-0024, 0032-0036, 0053-0060, 0070] to upload model parameters in a parameter group corresponding to the data party to the serving party; [0061] the data uploaded by the data party to the serving party can be further encrypted in a pre-agreed method, such as homomorphic encryption or secret sharing, to further protect data privacy)
determining, by the central aggregator, the (Zheng [Fig. 2-203, 0005-0024, 0032-0036, 0053-0060]; [0070] the serving party is further configured to fuse, for each parameter group, the received model parameters to update the global model parameters; [0061] the data uploaded by the data party to the serving party can be further encrypted in a pre-agreed method, such as homomorphic encryption or secret sharing, to further protect data privacy)
providing, by the central aggregator, the (Zheng [Fig. 2-203, 0005-0024, 0032-0036, 0053-0060, 0070] update the global model parameters; [0061] the data uploaded by the data party to the serving party can be further encrypted in a pre-agreed method, such as homomorphic encryption or secret sharing, to further protect data privacy)
Zheng does not explicitly disclose:
wherein the first client data comprises a first matrix of numbers
wherein the first encryption key comprises a second matrix of numbers, wherein the homomorphical encryption of the first client data is based on a matrix multiplication of the first client data and the first encryption key, to generate encrypted first client data;
encrypted global model
by performing a matrix multiplication of the encrypted first client data with the encrypted second client data
However, in the same field of endeavor Sav discloses: wherein the first encryption key comprises a second matrix of numbers, wherein the homomorphical encryption of the first client data is based on a matrix multiplication of the first client data and the first encryption key, to generate encrypted first client data; (Sav [0014-0016, 0022, 0087-0099] teaches that homomorphic encryption uses matrix multiplication (as keys) to perform the encryption operation using matrix multiplication)
encrypted global model (Sav [0043-0044, 0049, 0149, 0152] teaches encrypted global models)
While Zheng specifically discloses performing operations on first and second client data to determine the global model, Zheng does not explicitly disclose that the operation is a multiplication, i.e., determining the encrypted global model by performing a matrix multiplication of the encrypted first client data with the encrypted second client data.
However, in the same field of endeavor Sav discloses determining the encrypted global model by performing a matrix multiplication of the encrypted first client data with the encrypted second client data (Sav [0015-0019, 0029, 0085-0096, 0125-0141, 150, Fig. 6A-Pi] teaches performing operations including forward and backward pass, which include multiplication; Existing packing strategies that are commonly used for machine learning operations on encrypted data (e.g., the row-based or diagonal packing), require a high number of rotations for the execution of the matrix-matrix multiplications and matrix transpose operations, performed during the forward and backward pass of the local gradient descent computation (see Protocol 2)).
Zheng and Sav are analogous art because they are from the same field of endeavor Multi-Party federated machine learning using homomorphic encryption.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Zheng and Sav before him or her, to modify the method of Zheng to include the encryption of the model and the matrix multiplications of Sav, because doing so allows for more secure training of ML models.
The motivation for doing so would be: “Packing enables coding a vector of values in a ciphertext and to parallelize the computations across its different slots, thus significantly improving the overall performance; When relying on packing capabilities, computation of the inner-sum of vector-matrix multiplications and transpose operation implies a restructuring of the vectors.” (Sav, paras. 0087-0096, 0098-0122).
Therefore, it would have been obvious to combine Zheng and Sav to obtain the invention as specified in the instant claim.
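For illustration only, the matrix-multiplication encryption recited in the claim can be sketched in a few lines. This is a toy conjugation scheme, not the CKKS-based construction Sav actually describes; the key, matrices, and helper names below are hypothetical. The point of the sketch is the homomorphic property: both ciphertext addition and ciphertext matrix multiplication commute with the plaintext operations, so an aggregator could combine encrypted client updates without holding the key.

```python
# Toy "matrix-multiplication" homomorphic scheme (illustrative only, NOT Sav's
# CKKS-based construction): encrypt a client update C as K @ C @ K_inv with an
# invertible integer (unimodular) key K, so sums and products of ciphertexts
# decrypt to sums and products of the underlying plaintext updates.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

K     = [[2, 1], [1, 1]]    # unimodular: det = 1, so K_inv is an integer matrix
K_inv = [[1, -1], [-1, 2]]

def encrypt(C):   # client-side: conjugate the update by the key
    return matmul(matmul(K, C), K_inv)

def decrypt(E):   # inverse conjugation recovers the plaintext
    return matmul(matmul(K_inv, E), K)

C1 = [[1, 2], [3, 4]]       # first client's (plaintext) model update
C2 = [[5, 6], [7, 8]]       # second client's update

# Aggregation over ciphertexts decrypts to aggregation over plaintexts:
assert decrypt(matadd(encrypt(C1), encrypt(C2))) == matadd(C1, C2)
# The same holds for the matrix product of the two encrypted updates:
assert decrypt(matmul(encrypt(C1), encrypt(C2))) == matmul(C1, C2)
```

Because K C1 K⁻¹ · K C2 K⁻¹ = K C1 C2 K⁻¹, multiplying the two encrypted updates yields the encryption of the product, which is the sense in which matrix multiplication of encrypted client data can determine an encrypted global model.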
Regarding claim 11,
Zheng discloses: A providing system for providing an encrypted global model trained by federated learning, comprising: (Zheng [0069] According to one or more embodiments of another aspect, a system for jointly updating a service model is provided)
Claim 11 recites limitations substantially similar to those recited in claim 1 and is therefore rejected under the same rationale as claim 1.
Regarding claim 2,
Zheng in view of Sav discloses: The computer-implemented method of claim 1, further comprising:
providing, by the central aggregator, the initial global model to a second client; (Zheng [Fig. 2-202, 0050-0052] teaches providing, by a serving party (central aggregator), a current (initial) global model to multiple data parties (clients)).
training, by the second client, the provided initial global model based on second training data, to generate a second client trained model; (Zheng [Fig. 2-203, 0053-0060] teaches each data party using the global model parameters to generate a new local service model; [0054] For a single data party, a full update phase can be a phase of fully updating model parameters in a process of training a local service model by using local service data; [0070] update an updated local service model based on local service data to obtain a new local service model).
determining, by the second client, second client data based on the second client trained model, (Zheng [Fig. 2-203, 0053-0060] teaches each data party using the global model parameters to generate a new local service model; [0061-0064] teaches parameter groups/sets which are being determined by the data party and sent to the serving party to update the global model; [0070] and further update an updated local service model based on local service data to obtain a new local service model, to upload model parameters in a parameter group corresponding to the data party to the serving party).
homomorphically encrypting, by the second client, the second client data based on a second encryption key, (Zheng [0061] the data uploaded by the data party to the serving party can be further encrypted in a pre-agreed method, such as homomorphic encryption or secret sharing, to further protect data privacy)
sending, by the second client, the encrypted second client data to the central aggregator to determine the encrypted global model. (Zheng [Fig. 2-203, 0053-0060, 0070] to upload model parameters in a parameter group corresponding to the data party to the serving party; [0061] the data uploaded by the data party to the serving party can be further encrypted in a pre-agreed method, such as homomorphic encryption or secret sharing, to further protect data privacy)
Zheng does not explicitly disclose:
wherein the second encryption key comprises a fourth matrix of numbers, wherein homomorphical encryption of the second client data is based on a matrix multiplication of the second client data and the second encryption key, to generate the encrypted second client data;
encrypted global model
However, in the same field of endeavor Sav discloses: wherein the second encryption key comprises a fourth matrix of numbers, wherein homomorphical encryption of the second client data is based on a matrix multiplication of the second client data and the second encryption key, to generate the encrypted second client data; (Sav [0014-0016, 0022, 0087-0099] teaches that homomorphic encryption uses matrix multiplication (as keys) to perform the encryption operation using matrix multiplication)
encrypted global model (Sav [0043-0044, 0049, 0149, 0152] teaches encrypted global models)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Sav for similar reasons as cited in claim 1.
Regarding claims 3 and 14,
Zheng in view of Sav discloses: The computer-implemented method of claim 1, wherein the determining the first client data comprises: determining a matrix of model weights as the first matrix of numbers, wherein the model weights are weights of the first client trained model. (Zheng [Fig. 5a-5e]; [0014-0017, 0022-0038] teaches determining model weights using local (client) data)
Regarding claims 4 and 15,
Zheng in view of Sav discloses: The computer-implemented method of claim 1, wherein the determining the first client data comprises:
Zheng does not explicitly disclose: determining a matrix of metadata as the first matrix of numbers, wherein the metadata comprise a population size, number of data samples or cohort on which the initial global model has been trained.
However, in the same field of endeavor Sav discloses: determining a matrix of metadata as the first matrix of numbers, wherein the metadata comprise a population size, number of data samples or cohort on which the initial global model has been trained. (Sav [Fig. 5a-5e]; [0014-0017, 0022-0038] teaches determining model weights using local (client) data; [0034] data providers use the same training parameters (e.g., the number of hidden layers, the number of neurons in each layer, the learning rate, the number of global iterations, the activation functions to be used in each layer and their approximations, and the local batch size wherein the local batch size is the number of data samples processed at each iteration; [0035] The predefined maximum number of global iterations is the agreed limit for performing global iterations)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Sav for similar reasons as cited in claim 1.
Regarding claims 6 and 17,
Zheng in view of Sav discloses: The computer-implemented method of claim 1, further comprising:
receiving, by a client, the encrypted global model from the central aggregator; (Zheng [Fig. 2-203, 0053-0060, 0070] to upload model parameters in a parameter group corresponding to the data party to the serving party; [0061] the data uploaded by the data party to the serving party can be further encrypted in a pre-agreed method, such as homomorphic encryption or secret sharing, to further protect data privacy)
providing, by the client, the global model. (Zheng [Fig. 2-203, 0053-0060, 0070] update the global model parameters; [0061] the data uploaded by the data party to the serving party can be further encrypted in a pre-agreed method, such as homomorphic encryption or secret sharing, to further protect data privacy)
Zheng does not explicitly disclose: decrypting, by the client, the encrypted global model based on a matrix multiplication of the encrypted global model and an inverse of the first encryption key, to generate a global model; and
However, in the same field of endeavor Sav discloses: decrypting, by the client, the encrypted global model based on a matrix multiplication of the encrypted global model and an inverse of the first encryption key, to generate a global model; and (Sav [0043-0044, 0049, 0149, 0152] Each destination entity can decrypt the received global model with the secret-key related to its own destination public key and thus obtain the corresponding decrypted global model; [0014-0016, 0022, 0087-0099] teaches that homomorphic encryption uses matrix multiplication (as keys) to perform the encryption operation using matrix multiplication)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Sav for similar reasons as cited in claim 1.
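The decryption limitation of claims 6 and 17 can be illustrated with a minimal sketch, assuming a hypothetical left-multiplication key (this is not the multiparty secret-key decryption Sav actually describes): a model matrix encrypted by one matrix multiplication with a key is recovered by one matrix multiplication with that key's inverse.

```python
# Toy sketch (not Sav's actual multiparty decryption): a global model encrypted
# by left-multiplying with a key matrix K is decrypted by a single matrix
# multiplication with the inverse of that key, as recited in claim 6.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

K     = [[3, 1], [2, 1]]    # det = 1, so the inverse is also an integer matrix
K_inv = [[1, -1], [-2, 3]]

model           = [[4, 7], [1, 9]]                # plaintext global model weights
encrypted_model = matmul(K, model)                # as held by the central aggregator
decrypted_model = matmul(K_inv, encrypted_model)  # client-side decryption

assert decrypted_model == model
```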
Regarding claim 8,
Zheng in view of Sav discloses: The computer-implemented method of claim 2,
Zheng does not explicitly disclose: wherein at least one of the first client data or the second client data are integer matrices.
However, in the same field of endeavor Sav discloses: wherein at least one of the first client data or the second client data are integer matrices. (Sav [0014] At iteration k, the weights between layers j and j+1, are denoted by a matrix W.sub.j.sup.k, whereas the matrix L.sub.j represents the activation of the neurons in the j.sup.th layer)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Sav for similar reasons as cited in claim 1.
Regarding claim 9,
Zheng in view of Sav discloses: The computer-implemented method of claim 1,
Zheng does not explicitly disclose: wherein the first client data and the first encryption key are matrices over a finite field.
However, in the same field of endeavor Sav discloses: wherein the first client data and the first encryption key are matrices over a finite field. (Sav [0122] Instead of taking the transpose of the weight matrix, one replicates the values in the vector that will be multiplied with the transposed matrix (for the operation in Line-11, Protocol 2), leveraging the gaps between slots with the AP approach. That is, for a vector v of size k and the column-packed matrix W of size g k)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Sav for similar reasons as cited in claim 1.
Regarding claim 10,
Zheng in view of Sav discloses: The computer-implemented method of claim 1, further comprising:
receiving, by the first client, the encrypted global model from the central aggregator; (Zheng [Fig. 2-202, 0050-0052] teaches providing, by a serving party (central aggregator), a current (initial) global model to multiple data parties (clients)).
decrypting, by the first client, the encrypted global model, to generate a global model; and (Sav [0044] Each destination entity can decrypt the received global model with the secret-key related to its own destination public key and thus obtain the corresponding decrypted global model)
verifying, by the first client, the global model. (Zheng [0058] After updating the local service model by using the current global model parameters provided by the serving party, the single data party processes a local verification set by using the updated local service model to obtain an accuracy rate)
Regarding claim 12,
Zheng in view of Sav discloses: A non-transitory computer-readable medium comprising instructions which, when executed by a providing system, cause the providing system to carry out the method of claim 1. (Zheng [0023] According to an eighth aspect, a computing device is provided, including a memory and a processor. The memory stores executable code, and when executing the executable code, the processor implements the method according to the second aspect or the third aspect)
Regarding claim 13,
Zheng in view of Sav discloses: A non-transitory computer-readable medium comprising instructions which, when executed by a providing system, cause the providing system to carry out the method of claim 2. (Zheng [0023] According to an eighth aspect, a computing device is provided, including a memory and a processor. The memory stores executable code, and when executing the executable code, the processor implements the method according to the second aspect or the third aspect)
Regarding claim 16,
Zheng in view of Sav discloses: The computer-implemented method of claim 1,
Zheng does not explicitly disclose: wherein the determining the encrypted global model comprises: a matrix multiplication of the encrypted first client data with the encrypted second client data.
However, in the same field of endeavor Sav discloses: wherein the determining the encrypted global model comprises: a matrix multiplication of the encrypted first client data with the encrypted second client data. (Sav [0014-0016, 0022, 0087-0099] teaches that homomorphic encryption uses matrix multiplication (as keys) to perform the encryption operation using matrix multiplication; [0040] The at least one data provider (i.e. the data provider(s) of the combiner-subset who performed the homomorphic combination) then updates the weights of the current global model based on the combined aggregated gradients. In other words, the global model is updated from its previous state by using averaged combined aggregated gradients wherein averaging is performed with respect to the global batch size being the dot product between the local batch size and the number of data providers; [0033-0044] teaches the aggregation of client data to create a global model, which is generated by performing matrix multiplication on clients' individual inputs)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Sav for similar reasons as cited in claim 1.
Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng (U.S. 20240037252, based on priority from PCT/CN2022/085876 and/or CN202110390904.x), in view of Sav (U.S. 20230325529), and in further view of Kawano (U.S. 20190197426).
Regarding claims 7 and 18,
Zheng in view of Sav discloses: The computer-implemented method of claim 1, further comprising:
Zheng does not explicitly disclose: generating a random integer matrix;
providing the first encryption key to at least one of the first client, a second client, or the central aggregator.
However, in the same field of endeavor Sav discloses: generating a random integer matrix; (Sav [0023-0024] W.sub.j=r×h.sub.j−1, where r is a random number sampled from a uniform distribution in the range [−1,1])
providing the first encryption key to at least one of the first client, a second client, or the central aggregator. (Sav [0033] each of the data providers has a portion of a cryptographic distributed secret key)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Sav for similar reasons as cited in claim 1.
Zheng in view of Sav does not explicitly disclose: determining a unimodular integer matrix, wherein a matrix product of the unimodular integer matrix and the random integer matrix equals a hermite normal form of the random integer matrix;
determining the first encryption key, wherein the first encryption key comprises the matrix product of the unimodular integer matrix, of an exchange matrix and of an inverse of the unimodular integer matrix; and
However, in the same field of endeavor Kawano teaches: determining a unimodular integer matrix, wherein a matrix product of the unimodular integer matrix and the random integer matrix equals a hermite normal form of the random integer matrix; (Kawano [0003-0012, 0046, 0077-0098] teaches learning with errors for next-generation cryptography by determining a hermite normal form of unimodular matrices and random matrices)
determining the first encryption key, wherein the first encryption key comprises the matrix product of the unimodular integer matrix, of an exchange matrix and of an inverse of the unimodular integer matrix; and (Kawano [0003-0012, 0046, 0077-0098] teaches that learning with errors for next-generation cryptography includes performing mathematical operations on unimodular, exchange and inverse unimodular matrices to achieve computational results, which can be used for key generation)
Zheng in view of Sav and Kawano are analogous art because they are from the same field of endeavor of cryptography.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Zheng in view of Sav and Kawano before him or her, to modify the method of Zheng in view of Sav to include the unimodular integer matrix, exchange matrix, and inverse unimodular integer matrix of Kawano, because doing so allows for improved encryption schemes.
The motivation for doing so would be: “The dual basis of modulo N can be efficiently computed using the Hermite normal form.” (Kawano, paras. 0081-0087).
Therefore, it would have been obvious to combine Zheng in view of Sav and Kawano to obtain the invention as specified in the instant claim.
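The Kawano combination turns on the relationship U · A = H, where U is a unimodular integer matrix (determinant ±1) and H is the Hermite normal form of the random integer matrix A. For a 2×2 case such a U can be built from the extended Euclidean algorithm; the sketch below is illustrative only (the function names are hypothetical and no particular cryptosystem is implied).

```python
# Illustrative 2x2 sketch (no particular cryptosystem implied): construct a
# unimodular integer matrix U whose product with a random integer matrix A is
# upper triangular, i.e. a Hermite-normal-form-style reduction U @ A = H.

def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def hermite_reduce(A):
    """Unimodular U (det = 1) with U @ A upper triangular."""
    (a, _), (c, _) = A
    g, s, t = ext_gcd(a, c)
    # det(U) = s*(a//g) + t*(c//g) = (s*a + t*c)//g = g//g = 1, so U is unimodular
    U = [[s, t], [-(c // g), a // g]]
    return U, matmul(U, A)

A = [[6, 4], [10, 7]]       # stands in for the "random integer matrix"
U, H = hermite_reduce(A)

assert H[1][0] == 0                              # H is upper triangular
assert U[0][0]*U[1][1] - U[0][1]*U[1][0] == 1    # U is unimodular
```

Because U is invertible over the integers, A and H carry the same lattice information, which is the property that makes the Hermite normal form useful in the lattice-based key-generation context Kawano is cited for.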
Note: The specific language of claim 7 is non-functional design choice language. Design choice applies when old elements in the prior art perform the same function as the now-claimed structures. “Design choice” is appropriate where the applicant fails to set forth any reason why the differences between the claimed invention and the prior art would result in a different function. Applicant's specific terms (the first encryption key comprises the matrix product of the unimodular integer matrix, of an exchange matrix, and of an inverse of the unimodular integer matrix) do not set forth a resulting “different function.”
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gentry 2009-11-10 (US 20110110525) teaches: Fully Homomorphic Encryption Method Based On A Bootstrappable Encryption Scheme, Computer Program And Apparatus.
Wang 2022-8-15 (US 20230088897) teaches: A heterogeneous processing system for federated learning and privacy-preserving computation, including: a serial subsystem configured for distributing processing tasks and configuration information of processing tasks, the processing task indicating performing an operation corresponding to computing mode on one or more operands; and a parallel subsystem configured for, based on the configuration information, selectively obtaining at least one operand of the one or more operands from an intermediate result section on the parallel subsystem while obtaining remaining operand(s) of the one or more operands with respect to the at least one operand from the serial subsystem, and performing the operation on the operands obtained based on the configuration information.
Highly relevant prior art: Manamohan 2020-01-27 (U.S. 20210234668) teaches performing a matrix multiplication of the encrypted first client data with the encrypted second client data ([0028] “The merge leader (6) downloads the local model parameters and performs a merge operation (homomorphic summation and scalar multiplication)”), which teaches the amended limitations.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS A CARNES whose telephone number is (571)272-4378. The examiner can normally be reached Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shewaye Gelagay can be reached on (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
THOMAS A. CARNES
Examiner
Art Unit 2436
/THOMAS A CARNES/Examiner, Art Unit 2436
/SHEWAYE GELAGAY/Supervisory Patent Examiner, Art Unit 2436