Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are canceled and claims 21-40 are pending in the application.
Priority
Applicant's claim for the benefit of provisional application No. 63/326,751 filed on 04/01/2022 is acknowledged.
Information Disclosure Statement
The information disclosure statement submitted on 11/30/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claim 24 is objected to because of the following informalities:
Claim 24, line 7 recites the phrase “beginning from the other start index,” which should read “beginning from the another start index” for consistent antecedent basis.
For the informalities above, and wherever else they may occur, appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21, 24-29, 32-37 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Yu (US 20230229905 A1) in view of Pani et al. (US 11803448 B1, hereinafter Pani).
As to independent claim 21, Yu teaches a computer-implemented method comprising:
training a machine learning model over multiple training steps on a deterministic training dataset that comprises a plurality of training examples, wherein each training example is associated with a unique index; [training data with shards (index) ¶18 "Training data set 102 may be divided into shards, e.g., shard 1 105, shard 2 106, shard 3 107, and shard N 108. The shards, in turn are forwarded to different nodes for processing"]
determining, from monitoring of the training, that an error has occurred after a first training step and before a second training step during the training of the machine learning model; [monitors training of workers for faults across interactions (steps) ¶55 "Node 412 and agent 415 may more closely monitor the health of worker 420 and any other associated workers in order to recognize an associated worker processing unit (e.g., worker processing unit 430) failing. For example, at every training iteration, a minibatch is processed. If one worker processing unit does not complete the iteration, the all reduce step cannot be performed, and a fault is indicated. Agent 415 may initialize and restart the failed worker processing unit based on a checkpoint state 437 stored in local memory."]
in response, resuming the training beginning from the second training step, comprising: [checkpoints allow starting not at the beginning (resuming) ¶29 "storing a checkpoint state (e.g., current parameters) so that any failure does not require the training to start at the very beginning, only at the most recent checkpoint."]
Yu does not specifically teach computing, based on the unique indices associated with the plurality of training examples, a start index of a training example that will be processed by the machine learning model at a beginning of the second training step; and providing training examples that are associated with unique indices beginning from the start index for processing by the machine learning model.
However, Pani teaches computing, based on the unique indices associated with the plurality of training examples, a start index of a training example that will be processed by the machine learning model at a beginning of the second training step; and [checkpoints include indicators of file and line from read sources Col. 5 ln. 30-45 "the number 10 might be written to the common checkpoint data structure (along with an indicator of the particular file) to indicate that 10 lines have been read from that particular file"]
providing training examples that are associated with unique indices beginning from the start index for processing by the machine learning model. [resume from line/file Col. 5 ln. 30-45 "When the system (or a portion thereof) is restarted, then the system (and/or the portion of the system) can analyze the common checkpoint data structure in order to resume from line number 10 (or the line after line number 10 as the case may be)"]
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the training pipeline disclosed by Yu by incorporating the computing, based on the unique indices associated with the plurality of training examples, a start index of a training example that will be processed by the machine learning model at a beginning of the second training step, and the providing of training examples that are associated with unique indices beginning from the start index for processing by the machine learning model, as disclosed by Pani, because both techniques address the same field of data analysis and because incorporating Pani into Yu reduces delay and performance problems with accessing resources [Pani Col. 1 ln. 46 - Col. 2 ln. 4].
As to dependent claim 24, Yu and Pani teach the method of claim 21 as set forth above, which is incorporated herein. Yu and Pani further teach determining that the training should be restarted from a third training step during the training of the machine learning model; [Yu beginning plus two checkpoints and iterations (third) ¶52-54 "agent 415 may maintain two checkpoint states 437 for worker 420, correlating to the checkpoint states of the last two iterations completed by worker 420."]
in response, restarting the training from the third training step, comprising: [Yu revert (restart) ¶52-54 "revert to the previous checkpoint."]
restoring parameter values of the machine learning model that have been checkpointed at the third training step; [Yu stores parameters for restarting with them ¶29 "storing a checkpoint state (e.g., current parameters) so that any failure does not require the training to start at the very beginning, only at the most recent checkpoint."]
computing, based on the unique indices associated with the plurality of training examples, another start index of a training example that was processed by the machine learning model at a beginning of the third training step; and [Pani checkpoints set indicators of file and line from read sources Col. 5 ln. 30-45 "the number 10 might be written to the common checkpoint data structure (along with an indicator of the particular file) to indicate that 10 lines have been read from that particular file"]
providing training examples that are associated with unique indices beginning from the other start index for processing by the machine learning model in accordance with the restored parameter values. [Pani resume from line/file Col. 5 ln. 30-45 "When the system (or a portion thereof) is restarted, then the system (and/or the portion of the system) can analyze the common checkpoint data structure in order to resume from line number 10 (or the line after line number 10 as the case may be)"]
As to dependent claim 25, Yu and Pani teach the method of claim 24 as set forth above, which is incorporated herein. Yu and Pani further teach wherein determining that the training should be restarted from a third training step comprises: receiving a user request to restart the training from the third training step. [Pani user option for checkpoints Claim 14 "receiving user input specifying a time period through a user-configurable option or through an application programming interface; using the received time period as the time interval for checkpointing, by the respective task node, the one or more data source progress points into the common checkpoint data structure."]
As to dependent claim 26, Yu and Pani teach the method of claim 21 as set forth above, which is incorporated herein. Yu and Pani further teach wherein the unique indices associated with the plurality of training examples comprise monotonically increasing integer values. [Pani lines increase and are tied to source data Col. 5 ln. 30-45 "When the system (or a portion thereof) is restarted, then the system (and/or the portion of the system) can analyze the common checkpoint data structure in order to resume from line number 10 (or the line after line number 10 as the case may be)"]
As to dependent claim 27, Yu and Pani teach the method of claim 26 as set forth above, which is incorporated herein. Yu and Pani further teach wherein providing training examples that are associated with unique indices beginning from the start index for processing by the machine learning model comprises: providing the training examples in an order of the unique indices associated with the training examples for use in training the machine learning model. [Pani provides data in order Col. 18 ln. 5-18 "a time ordered stack or log, for example, then the restarted system (or portion of the system) might have to start analyzing the end of the log (or stack), and analyze every entry from the end, until at least 1 entry for every corresponding data source is found in the common checkpoint data structure, in some embodiments."]
As to dependent claim 28, Yu and Pani teach the method of claim 26 as set forth above, which is incorporated herein. Yu and Pani further teach wherein providing the training examples that are associated with the unique indices beginning from the start index for processing by the machine learning model comprises: skipping training examples included in the deterministic training dataset that are associated with unique indices smaller than the start index. [Pani skip first 10 lines Col. 5 ln. 30-45 "When the system (or a portion thereof) is restarted, then the system (and/or the portion of the system) can analyze the common checkpoint data structure in order to resume from line number 10 (or the line after line number 10 as the case may be)"]
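For clarity of the record, the index-based resumption recited in claims 21 and 24-28 can be sketched as follows. This is an illustrative sketch only; the function, dataset contents, and parameter names are the examiner's hypotheticals and are not drawn from Yu or Pani.

```python
# Illustrative sketch of index-based training resumption (hypothetical names;
# not drawn from the cited references). Each example in the deterministic
# dataset carries a unique, monotonically increasing integer index.
def resume_batches(dataset, batch_size, resume_step):
    """Yield batches of (index, example) pairs beginning from `resume_step`.

    With a fixed batch size, the start index of the training example that
    begins step N can be computed directly from the unique indices as
    N * batch_size.
    """
    start_index = resume_step * batch_size
    # Skip examples whose unique index is smaller than the start index.
    remaining = [(i, x) for (i, x) in dataset if i >= start_index]
    # Provide the remaining examples in the order of their unique indices.
    remaining.sort(key=lambda pair: pair[0])
    for pos in range(0, len(remaining), batch_size):
        yield remaining[pos:pos + batch_size]

dataset = list(enumerate(["a", "b", "c", "d", "e", "f", "g", "h"]))
batches = list(resume_batches(dataset, batch_size=2, resume_step=2))
# Resuming at step 2 with batch size 2 skips the examples at indices 0-3
# and provides the examples at indices 4-7 in index order.
```

Because the dataset is deterministic and the indices are monotonically increasing, no per-example bookkeeping is needed at resume time; the start index alone identifies exactly which examples were already consumed.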
As to independent claim 29, Yu teaches a system comprising one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: [system with cpu, storage and instructions ¶92-94]
training a machine learning model over multiple training steps on a deterministic training dataset that comprises a plurality of training examples, wherein each training example is associated with a unique index; [training data with shards (index) ¶18 "Training data set 102 may be divided into shards, e.g., shard 1 105, shard 2 106, shard 3 107, and shard N 108. The shards, in turn are forwarded to different nodes for processing"]
determining, from monitoring of the training, that an error has occurred after a first training step and before a second training step during the training of the machine learning model; [monitors training of workers for faults across interactions (steps) ¶55 "Node 412 and agent 415 may more closely monitor the health of worker 420 and any other associated workers in order to recognize an associated worker processing unit (e.g., worker processing unit 430) failing. For example, at every training iteration, a minibatch is processed. If one worker processing unit does not complete the iteration, the all reduce step cannot be performed, and a fault is indicated. Agent 415 may initialize and restart the failed worker processing unit based on a checkpoint state 437 stored in local memory."]
in response, resuming the training beginning from the second training step, comprising: [checkpoints allow starting not at the beginning (resuming) ¶29 "storing a checkpoint state (e.g., current parameters) so that any failure does not require the training to start at the very beginning, only at the most recent checkpoint."]
Yu does not specifically teach computing, based on the unique indices associated with the plurality of training examples, a start index of a training example that will be processed by the machine learning model at a beginning of the second training step; and providing training examples that are associated with unique indices beginning from the start index for processing by the machine learning model.
However, Pani teaches computing, based on the unique indices associated with the plurality of training examples, a start index of a training example that will be processed by the machine learning model at a beginning of the second training step; and [checkpoints include indicators of file and line from read sources Col. 5 ln. 30-45 "the number 10 might be written to the common checkpoint data structure (along with an indicator of the particular file) to indicate that 10 lines have been read from that particular file"]
providing training examples that are associated with unique indices beginning from the start index for processing by the machine learning model. [resume from line/file Col. 5 ln. 30-45 "When the system (or a portion thereof) is restarted, then the system (and/or the portion of the system) can analyze the common checkpoint data structure in order to resume from line number 10 (or the line after line number 10 as the case may be)"]
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the training pipeline disclosed by Yu by incorporating the computing, based on the unique indices associated with the plurality of training examples, a start index of a training example that will be processed by the machine learning model at a beginning of the second training step, and the providing of training examples that are associated with unique indices beginning from the start index for processing by the machine learning model, as disclosed by Pani, because both techniques address the same field of data analysis and because incorporating Pani into Yu reduces delay and performance problems with accessing resources [Pani Col. 1 ln. 46 - Col. 2 ln. 4].
As to dependent claim 32, Yu and Pani teach the system of claim 29 as set forth above, which is incorporated herein. Yu and Pani further teach determining that the training should be restarted from a third training step during the training of the machine learning model; [Yu beginning plus two checkpoints and iterations (third) ¶52-54 "agent 415 may maintain two checkpoint states 437 for worker 420, correlating to the checkpoint states of the last two iterations completed by worker 420."]
in response, restarting the training from the third training step, comprising: [Yu revert (restart) ¶52-54 "revert to the previous checkpoint."]
restoring parameter values of the machine learning model that have been checkpointed at the third training step; [Yu stores parameters for restarting with them ¶29 "storing a checkpoint state (e.g., current parameters) so that any failure does not require the training to start at the very beginning, only at the most recent checkpoint."]
computing, based on the unique indices associated with the plurality of training examples, another start index of a training example that was processed by the machine learning model at a beginning of the third training step; and [Pani checkpoints set indicators of file and line from read sources Col. 5 ln. 30-45 "the number 10 might be written to the common checkpoint data structure (along with an indicator of the particular file) to indicate that 10 lines have been read from that particular file"]
providing training examples that are associated with unique indices beginning from the other start index for processing by the machine learning model in accordance with the restored parameter values. [Pani resume from line/file Col. 5 ln. 30-45 "When the system (or a portion thereof) is restarted, then the system (and/or the portion of the system) can analyze the common checkpoint data structure in order to resume from line number 10 (or the line after line number 10 as the case may be)"]
As to dependent claim 33, Yu and Pani teach the system of claim 32 as set forth above, which is incorporated herein. Yu and Pani further teach wherein determining that the training should be restarted from a third training step comprises: receiving a user request to restart the training from the third training step. [Pani user option for checkpoints Claim 14 "receiving user input specifying a time period through a user-configurable option or through an application programming interface; using the received time period as the time interval for checkpointing, by the respective task node, the one or more data source progress points into the common checkpoint data structure."]
As to dependent claim 34, Yu and Pani teach the system of claim 29 as set forth above, which is incorporated herein. Yu and Pani further teach wherein the unique indices associated with the plurality of training examples comprise monotonically increasing integer values. [Pani lines increase and are tied to source data Col. 5 ln. 30-45 "When the system (or a portion thereof) is restarted, then the system (and/or the portion of the system) can analyze the common checkpoint data structure in order to resume from line number 10 (or the line after line number 10 as the case may be)"]
As to dependent claim 35, Yu and Pani teach the system of claim 34 as set forth above, which is incorporated herein. Yu and Pani further teach wherein providing training examples that are associated with unique indices beginning from the start index for processing by the machine learning model comprises: providing the training examples in an order of the unique indices associated with the training examples for use in training the machine learning model. [Pani provides data in order Col. 18 ln. 5-18 "a time ordered stack or log, for example, then the restarted system (or portion of the system) might have to start analyzing the end of the log (or stack), and analyze every entry from the end, until at least 1 entry for every corresponding data source is found in the common checkpoint data structure, in some embodiments."]
As to dependent claim 36, Yu and Pani teach the system of claim 34 as set forth above, which is incorporated herein. Yu and Pani further teach wherein providing the training examples that are associated with the unique indices beginning from the start index for processing by the machine learning model comprises: skipping training examples included in the deterministic training dataset that are associated with unique indices smaller than the start index. [Pani skip first 10 lines Col. 5 ln. 30-45 "When the system (or a portion thereof) is restarted, then the system (and/or the portion of the system) can analyze the common checkpoint data structure in order to resume from line number 10 (or the line after line number 10 as the case may be)"]
As to independent claim 37, Yu teaches a computer storage medium encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform operations comprising: [cpu, storage and instructions ¶92-94]
training a machine learning model over multiple training steps on a deterministic training dataset that comprises a plurality of training examples, wherein each training example is associated with a unique index; [training data with shards (index) ¶18 "Training data set 102 may be divided into shards, e.g., shard 1 105, shard 2 106, shard 3 107, and shard N 108. The shards, in turn are forwarded to different nodes for processing"]
determining, from monitoring of the training, that an error has occurred after a first training step and before a second training step during the training of the machine learning model; [monitors training of workers for faults across interactions (steps) ¶55 "Node 412 and agent 415 may more closely monitor the health of worker 420 and any other associated workers in order to recognize an associated worker processing unit (e.g., worker processing unit 430) failing. For example, at every training iteration, a minibatch is processed. If one worker processing unit does not complete the iteration, the all reduce step cannot be performed, and a fault is indicated. Agent 415 may initialize and restart the failed worker processing unit based on a checkpoint state 437 stored in local memory."]
in response, resuming the training beginning from the second training step, comprising: [checkpoints allow starting not at the beginning (resuming) ¶29 "storing a checkpoint state (e.g., current parameters) so that any failure does not require the training to start at the very beginning, only at the most recent checkpoint."]
Yu does not specifically teach computing, based on the unique indices associated with the plurality of training examples, a start index of a training example that will be processed by the machine learning model at a beginning of the second training step; and providing training examples that are associated with unique indices beginning from the start index for processing by the machine learning model.
However, Pani teaches computing, based on the unique indices associated with the plurality of training examples, a start index of a training example that will be processed by the machine learning model at a beginning of the second training step; and [checkpoints include indicators of file and line from read sources Col. 5 ln. 30-45 "the number 10 might be written to the common checkpoint data structure (along with an indicator of the particular file) to indicate that 10 lines have been read from that particular file"]
providing training examples that are associated with unique indices beginning from the start index for processing by the machine learning model. [resume from line/file Col. 5 ln. 30-45 "When the system (or a portion thereof) is restarted, then the system (and/or the portion of the system) can analyze the common checkpoint data structure in order to resume from line number 10 (or the line after line number 10 as the case may be)"]
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the training pipeline disclosed by Yu by incorporating these features, as disclosed by Pani, for the same reasons set forth in the rejection of claim 21 above.
As to dependent claim 40, Yu and Pani teach the computer storage medium of claim 37 as set forth above, which is incorporated herein. Yu and Pani further teach determining that the training should be restarted from a third training step during the training of the machine learning model; [Yu beginning plus two checkpoints and iterations (third) ¶52-54 "agent 415 may maintain two checkpoint states 437 for worker 420, correlating to the checkpoint states of the last two iterations completed by worker 420."]
in response, restarting the training from the third training step, comprising: [Yu revert (restart) ¶52-54 "revert to the previous checkpoint."]
restoring parameter values of the machine learning model that have been checkpointed at the third training step; [Yu stores parameters for restarting with them ¶29 "storing a checkpoint state (e.g., current parameters) so that any failure does not require the training to start at the very beginning, only at the most recent checkpoint."]
computing, based on the unique indices associated with the plurality of training examples, another start index of a training example that was processed by the machine learning model at a beginning of the third training step; and [Pani checkpoints set indicators of file and line from read sources Col. 5 ln. 30-45 "the number 10 might be written to the common checkpoint data structure (along with an indicator of the particular file) to indicate that 10 lines have been read from that particular file"]
providing training examples that are associated with unique indices beginning from the other start index for processing by the machine learning model in accordance with the restored parameter values. [Pani resume from line/file Col. 5 ln. 30-45 "When the system (or a portion thereof) is restarted, then the system (and/or the portion of the system) can analyze the common checkpoint data structure in order to resume from line number 10 (or the line after line number 10 as the case may be)"]
Claims 22-23, 30-31 and 38-39 are rejected under 35 U.S.C. 103 as being unpatentable over Yu in view of Pani, as applied in the rejections of claims 21, 29 and 37 above, and further in view of Phan et al. (US 20220058515 A1, hereinafter Phan).
As to dependent claim 22, Yu and Pani teach the method of claim 21 as set forth above, which is incorporated herein.
Yu and Pani do not specifically teach wherein the monitoring of the training comprises monitoring of a value of a loss computed using an objective function for the training.
However, Phan teaches wherein the monitoring of the training comprises monitoring of a value of a loss computed using an objective function for the training. [Phan monitors training loss and outlier loss in order to minimize it ¶9 "minimizing training loss and outlier loss includes determining a linear loss model for each leaf of the ODT"]
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitoring disclosed by Yu and Pani by incorporating the monitoring of a value of a loss computed using an objective function for the training, as disclosed by Phan, because all techniques address the same field of data analysis and because incorporating Phan into Yu and Pani provides data cleanup for improved predictions and efficiency [Phan ¶14].
As to dependent claim 23, Yu and Pani teach the method of claim 21 as set forth above, which is incorporated herein.
Yu and Pani do not specifically teach wherein the error comprises an error resulted from a defective training example that was processed by the machine learning model after the first training step.
However, Phan teaches wherein the error comprises an error resulted from a defective training example that was processed by the machine learning model after the first training step. [Phan removes outliers (defective) ¶6 "During the training process one or more outliers are filtered out by a linear loss model that minimizes training loss and outlier loss."]
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitoring disclosed by Yu and Pani by incorporating the teaching that the error comprises an error resulted from a defective training example that was processed by the machine learning model after the first training step, as disclosed by Phan, because all techniques address the same field of data analysis and because incorporating Phan into Yu and Pani provides data cleanup for improved predictions and efficiency [Phan ¶14].
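For clarity of the record, the loss-based monitoring at issue in claims 22-23 (and their parallels) can be sketched as follows. This is an illustrative sketch only; the function name, threshold, and loss values are the examiner's hypotheticals and are not drawn from the cited references.

```python
# Illustrative sketch of monitoring a training loss value to detect an error
# (hypothetical names and values; not drawn from the cited references). A
# sudden loss spike after a step is treated as an error, e.g. one resulting
# from a defective training example processed after the preceding step.
def detect_loss_error(loss_history, spike_factor=3.0):
    """Return the first step whose loss exceeds `spike_factor` times the
    running mean of all preceding losses, or None if no spike occurs."""
    for step in range(1, len(loss_history)):
        running_mean = sum(loss_history[:step]) / step
        if loss_history[step] > spike_factor * running_mean:
            return step
    return None

# A defective example processed between steps 3 and 4 produces a spike at
# step 4, which the monitor flags so training can resume from a checkpoint.
losses = [0.92, 0.85, 0.80, 0.78, 9.4, 0.75]
```

A monitor of this kind would pair naturally with the checkpoint/resume mechanism discussed for claim 21: once the spike is detected, training restarts from the last checkpoint and the start index identifies the examples to replay or skip.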
As to dependent claim 30, Yu and Pani teach the system of claim 29 as set forth above, which is incorporated herein.
Yu and Pani do not specifically teach wherein the monitoring of the training comprises monitoring of a value of a loss computed using an objective function for the training.
However, Phan teaches wherein the monitoring of the training comprises monitoring of a value of a loss computed using an objective function for the training. [Phan monitors training loss and outlier loss in order to minimize it ¶9 "minimizing training loss and outlier loss includes determining a linear loss model for each leaf of the ODT"]
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitoring disclosed by Yu and Pani by incorporating the monitoring of a value of a loss computed using an objective function for the training, as disclosed by Phan, because all techniques address the same field of data analysis and because incorporating Phan into Yu and Pani provides data cleanup for improved predictions and efficiency [Phan ¶14].
As to dependent claim 31, Yu and Pani teach the system of claim 29 as set forth above, which is incorporated herein.
Yu and Pani do not specifically teach wherein the error comprises an error resulted from a defective training example that was processed by the machine learning model after the first training step.
However, Phan teaches wherein the error comprises an error resulted from a defective training example that was processed by the machine learning model after the first training step. [Phan removes outliers (defective) ¶6 "During the training process one or more outliers are filtered out by a linear loss model that minimizes training loss and outlier loss."]
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitoring disclosed by Yu and Pani by incorporating the teaching that the error comprises an error resulted from a defective training example that was processed by the machine learning model after the first training step, as disclosed by Phan, because all techniques address the same field of data analysis and because incorporating Phan into Yu and Pani provides data cleanup for improved predictions and efficiency [Phan ¶14].
As to dependent claim 38, Yu and Pani teach the computer storage medium of claim 37 as set forth above, which is incorporated herein.
Yu and Pani do not specifically teach wherein the monitoring of the training comprises monitoring of a value of a loss computed using an objective function for the training.
However, Phan teaches wherein the monitoring of the training comprises monitoring of a value of a loss computed using an objective function for the training. [Phan monitors training loss and outlier loss in order to minimize it ¶9 "minimizing training loss and outlier loss includes determining a linear loss model for each leaf of the ODT"]
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitoring disclosed by Yu and Pani by incorporating the monitoring of a value of a loss computed using an objective function for the training, as disclosed by Phan, because all techniques address the same field of data analysis and because incorporating Phan into Yu and Pani provides data cleanup for improved predictions and efficiency [Phan ¶14].
As to dependent claim 39, Yu and Pani teach the computer storage medium of claim 37 as set forth above, which is incorporated herein.
Yu and Pani do not specifically teach wherein the error comprises an error resulted from a defective training example that was processed by the machine learning model after the first training step.
However, Phan teaches wherein the error comprises an error resulted from a defective training example that was processed by the machine learning model after the first training step. [Phan removes outliers (defective) ¶6 "During the training process one or more outliers are filtered out by a linear loss model that minimizes training loss and outlier loss."]
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitoring disclosed by Yu and Pani by incorporating the teaching that the error comprises an error resulted from a defective training example that was processed by the machine learning model after the first training step, as disclosed by Phan, because all techniques address the same field of data analysis and because incorporating Phan into Yu and Pani provides data cleanup for improved predictions and efficiency [Phan ¶14].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 CFR 1.111(c) to consider these references fully when responding to this action.
DIRAC et al. (US 20150379428 A1) teaches model training plans and delimited training data (see ¶94 and ¶159).
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEAU SPRATT whose telephone number is (571)272-9919. The examiner can normally be reached M-F 8:30-5 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-212-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BEAU D SPRATT/Primary Examiner, Art Unit 2143