DETAILED ACTION
This action is in response to Applicant's Amendment ("Response") received on July 28, 2025, in response to the Office Action dated April 28, 2025. This action is made Final.
Claims 1-5, 7-11, 13, and 14 are pending.
Claims 1, 7, 13, and 14 are independent claims.
Claims 1-5, 7-11, 13, and 14 are rejected.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s Response
In Applicant’s Response, Applicant amended claims 1, 7, 13, and 14, and submitted arguments against the prior art in the Office Action dated April 28, 2025.
Based on Applicant's amendments and remarks, the Examiner withdraws the rejection of claims 1-14 under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5 and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ho, Man M., Jinjia Zhou, and Gang He, "RR-DnCNN v2.0: enhanced restoration-reconstruction deep neural network for down-sampling-based video coding," IEEE Transactions on Image Processing 30 (2021): 1702-1715 ("Ho").
Claim 1:
Ho discloses a computing device provided with one or more processors and a memory storing one or more programs executed by the one or more processors, the computing device comprising:
a machine learning model (see Fig. 2, 3; §I, C - Being powered by deep learning, CNN-based SISR has generated surprisingly well-restored results; §I, E - we enhance the learning capability of our prior work RR-DnCNN; §II, A - pass the captured features from restoration into reconstruction for the learning capability robustness. Our novel network architecture is called the restoration-reconstruction u-shaped deep neural network.);
wherein the machine learning model is trained to perform a task of receiving data in which a part of the original data is damaged or removed, and restoring and outputting the damaged or removed data part as a main task, and is trained to perform a task of receiving and reconstructing and outputting the received original data as an auxiliary task (see Fig. 2, 3; §II, A - the degradation-aware technique breaks the stage to DLR → LR → HR, where LR is treated as a transitional ground-truth. As an advantage, our up-sampled low-resolution and features inside the network are enhanced for reconstruction. Our network is thus more robust than other works directly synthesizing HR from DLR; §II, B - the restoration compensates for the lost information by video compression at low-resolution and then provides the feature-based information for reconstruction at high resolution. The restoration removes the compression artifacts from DLR by learning the residual between DLR and LR, which results in two directions: up-sampling features for reconstruction using deconvolutions and synthesizing a residual map to restore DLR to have LˆR. Subsequently, the reconstruction leverages up-sampled features from restoration to synthesize the residual between HR and up-sampled LˆR, then reconstruct up-sampled LˆR to have HˆR, as illustrated in Figure 2.),
wherein the machine learning model adjusts a ratio of the number of learning times of the main task and the auxiliary task so that the sum of an objective function of the main task and an objective function of the auxiliary task is minimized (see Fig. 1-3; §II, A - residual learning is applied to speed up the network convergence, defined as: LˆR, Rres, Rrec = h(DLR) (1), where Rres represents the inferred residual between LR and DLR for restoration; meanwhile, Rrec represents the inferred residual between up-sampled LˆR and HR. Inside our network, DLR is restored to have LˆR as: LˆR = DLR + Rres (2), then up-sampled by deconvolution and combined with the reconstruction residual Rrec to obtain the final HˆR as: HˆR = Deconvolution(LˆR) + Rrec (3); §II, C - we add loss weights of λ and μ to balance optimizing errors between restoration and reconstruction. The total loss function is defined as: L = λ · Lrestoration + μ · Lreconstruction (5), where Lrestoration minimizes the error between (LR − DLR) and Rres, while Lreconstruction minimizes the error between (HR − Deconvolution(LˆR)) and Rrec.).
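For clarity of the record, the claimed main-task/auxiliary-task training arrangement may be sketched as follows. The sketch is illustrative only and is not asserted to be Ho's implementation; the encoder-decoder layers, the mean-squared-error objectives, and the identifiers (EncoderDecoder, training_step, main_steps, aux_steps, k) are hypothetical choices for illustration.

    # Illustrative sketch only: a generic encoder-decoder trained on a main
    # task (restore damaged input toward the original) and an auxiliary task
    # (reconstruct the original from itself), with the number of learning
    # steps per task set by an adjustable ratio.
    import torch.nn as nn
    import torch.nn.functional as F

    class EncoderDecoder(nn.Module):
        def __init__(self, dim=64, latent=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())  # E(.; alpha)
            self.decoder = nn.Linear(latent, dim)                            # D(.; beta)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def training_step(model, opt, x_original, x_damaged, k=1.0,
                      main_steps=2, aux_steps=1):
        # The main_steps:aux_steps ratio is the adjustable quantity; each
        # update descends on L_restoration or on k * L_reconstruction, so
        # their weighted sum is driven down over training.
        for _ in range(main_steps):                                   # main task
            loss = F.mse_loss(model(x_damaged), x_original)           # L_restoration
            opt.zero_grad(); loss.backward(); opt.step()
        for _ in range(aux_steps):                                    # auxiliary task
            loss = k * F.mse_loss(model(x_original), x_original)      # k * L_reconstruction
            opt.zero_grad(); loss.backward(); opt.step()

With main_steps = 2 and aux_steps = 1, for example, the model takes two restoration updates for every reconstruction update; tuning this ratio (or k) is the adjustment the claim recites.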
Claim 13:
Claim 13 corresponds to claim 1, and thus Ho discloses the limitations of claim 13 as well.
Claim 2:
Ho further discloses wherein the machine learning model includes: an encoder configured to: extract a first feature vector by using data in which part of the original data is damaged or removed as input when learning the main task; and extract a second feature vector by using the original data as input when learning the auxiliary task; and a decoder configured to: output restored data based on the first feature vector input from the encoder when learning the main task; and output reconstructed data based on the second feature vector input from the encoder when learning the auxiliary task (see Fig. 1-3; §I, B - down-sample the source video before encoding, then restore, up-sample, and reconstruct images/videos after decoding; §II, A - after decoding the bitstream, the super resolution network removes compression artifacts and maps the decoded low-resolution (DLR) to its original HR at the decoding end. Our up-sampled low-resolution and features inside the network are enhanced for reconstruction; §II, B - restoration compensates for the lost information by video compression at low-resolution and then provides the feature-based information for reconstruction at high-resolution. We pass captured features from restoration to reconstruction and improve RR-DnCNN to a restoration-reconstruction u-shaped deep neural network. The restoration removes the compression artifacts from DLR by learning the residual between DLR and LR, which results in two directions: up-sampling features for reconstruction using deconvolutions and synthesizing a residual map to restore DLR to have LˆR. Subsequently, the reconstruction leverages up-sampled features from restoration to synthesize the residual between HR and up-sampled LˆR, then reconstruct up-sampled LˆR to have HˆR, as illustrated in Figure 2. The network transfers the captured features of each layer from restoration to reconstruction.).
Claim 3:
Ho further discloses wherein the machine learning model for the main task is expressed by Equation 1 below: X̂_Y = D(E(Y; α); β) [Equation 1]; and an objective function Lrestoration for performing the main task may be expressed by Equation 2 below: Lrestoration = ‖X − X̂_Y‖ [Equation 2], where X: original data; Y: data in which part of original data has been damaged or removed; X̂_Y: restored data; E: neural network constituting encoder; α: weight of the neural network constituting encoder; D: neural network constituting decoder; and β: weight of neural network constituting decoder (see Fig. 1-3; §II, A - residual learning is applied to speed up the network convergence, defined as: LˆR, Rres, Rrec = h(DLR) (1), where Rres represents the inferred residual between LR and DLR for restoration; meanwhile, Rrec represents the inferred residual between up-sampled LˆR and HR. Inside our network, DLR is restored to have LˆR as: LˆR = DLR + Rres (2), then up-sampled by deconvolution and combined with the reconstruction residual Rrec to obtain the final HˆR as: HˆR = Deconvolution(LˆR) + Rrec (3).).
Claim 4:
Ho further discloses wherein the machine learning model for the auxiliary task is expressed by Equation 3 below: X̂_X = D(E(X; α); β) [Equation 3]; and an objective function Lreconstruction for performing the auxiliary task is expressed by Equation 4 below: Lreconstruction = ‖X − X̂_X‖ [Equation 4], where X̂_X: reconstructed data (see Fig. 1-3; §II, A - residual learning is applied to speed up the network convergence, defined as: LˆR, Rres, Rrec = h(DLR) (1), where Rres represents the inferred residual between LR and DLR for restoration; meanwhile, Rrec represents the inferred residual between up-sampled LˆR and HR. Inside our network, DLR is restored to have LˆR as: LˆR = DLR + Rres (2), then up-sampled by deconvolution and combined with the reconstruction residual Rrec to obtain the final HˆR as: HˆR = Deconvolution(LˆR) + Rrec (3).).
Claim 5:
Ho further discloses wherein optimized weights (α*, β*) of the machine learning model for performing both the main task and the auxiliary task are expressed through Equation 5 below: (α*, β*) = argmin_(α, β)(Lrestoration + k · Lreconstruction) [Equation 5], where k is a weight for importance between the objective function of the main task and the objective function of the auxiliary task (see §II, C - we add loss weights of λ and μ to balance optimizing errors between restoration and reconstruction. The total loss function is defined as: L = λ · Lrestoration + μ · Lreconstruction (5), where Lrestoration minimizes the error between (LR − DLR) and Rres, while Lreconstruction minimizes the error between (HR − Deconvolution(LˆR)) and Rrec.).
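The Examiner additionally notes that the claimed single importance weight k and Ho's pair of loss weights differ only by a positive rescaling, which does not change the minimizing network weights (restated in LaTeX notation for clarity, assuming λ, μ > 0):

    L = \lambda\, L_{\mathrm{restoration}} + \mu\, L_{\mathrm{reconstruction}}
      = \lambda \bigl( L_{\mathrm{restoration}} + k\, L_{\mathrm{reconstruction}} \bigr),
    \qquad k = \mu / \lambda

Minimizing Ho's total loss over the network weights is therefore equivalent to minimizing the claimed sum of Equation 5 with k = μ/λ.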
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7-11 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ho, and further in view of Li, Jiguo, et al., "Direct speech-to-image translation," IEEE Journal of Selected Topics in Signal Processing 14.3 (2020): 517-529 ("Li").
Claim 7:
Ho teaches or suggests a computing device provided with one or more processors and memory storing one or more programs executed by one or more processors, the computing device comprising:
a machine learning model (see Fig. 2, 3; §I, C - Being powered by deep learning, CNN-based SISR has generated surprisingly well-restored results; §I, E - we enhance the learning capability of our prior work RR-DnCNN; §II, A - pass the captured features from restoration into reconstruction for the learning capability robustness. Our novel network architecture is called the restoration-reconstruction u-shaped deep neural network.);
wherein the machine learning model is trained to perform a task of receiving a second type of data as a main task, and is trained to perform a task of receiving a second type of data and reconstructing and outputting the received second type of data as an auxiliary task (see Fig. 2, 3; §II, A - the degradation-aware technique breaks the stage to DLR → LR → HR, where LR is treated as a transitional ground-truth. As an advantage, our up-sampled low-resolution and features inside the network are enhanced for reconstruction. Our network is thus more robust than other works directly synthesizing HR from DLR; §II, B - the restoration compensates for the lost information by video compression at low-resolution and then provides the feature-based information for reconstruction at high resolution. The restoration removes the compression artifacts from DLR by learning the residual between DLR and LR, which results in two directions: up-sampling features for reconstruction using deconvolutions and synthesizing a residual map to restore DLR to have LˆR. Subsequently, the reconstruction leverages up-sampled features from restoration to synthesize the residual between HR and up-sampled LˆR, then reconstruct up-sampled LˆR to have HˆR, as illustrated in Figure 2.),
wherein the machine learning model adjusts a ratio of the number of learning times of the main task and the auxiliary task so that a sum of an objective function of the main task and an objective function of the auxiliary task is minimized (see Fig. 1-3; §II, A - residual learning is applied to speed up the network convergence, defined as: LˆR, Rres, Rrec = h(DLR) (1), where Rres represents the inferred residual between LR and DLR for restoration; meanwhile, Rrec represents the inferred residual between up-sampled LˆR and HR. Inside our network, DLR is restored to have LˆR as: LˆR = DLR + Rres (2), then up-sampled by deconvolution and combined with the reconstruction residual Rrec to obtain the final HˆR as: HˆR = Deconvolution(LˆR) + Rrec (3); §II, C - we add loss weights of λ and μ to balance optimizing errors between restoration and reconstruction. The total loss function is defined as: L = λ · Lrestoration + μ · Lreconstruction (5), where Lrestoration minimizes the error between (LR − DLR) and Rres, while Lreconstruction minimizes the error between (HR − Deconvolution(LˆR)) and Rrec.).
Ho does not explicitly disclose receiving a first type of data, and transforming and outputting the first type of data into a second type of data that is different from the first type of data as a main task, the second type of data received for the auxiliary task being the same type as that output from the main task.
Li teaches or suggests receiving a first type of data, and transforming and outputting the first type of data into a second type of data that is different from the first type of data as a main task, the second type being the same type as that output from the main task (see Fig. 1, 2; §I - as illustrated in Fig. 1, given the raw speech description "this bird has a red head and a white tail", the corresponding images can be synthesized, which means that the machine has understood the speech signal to some extent and been able to translate the semantic information in the speech signal into the image; §I, C - based on the correlation between audio and images, such as music and instruments, human voices and face appearances, audio-to-image generation aims to generate the images paired with the input audio signals. Speech-to-image translation aims to capture the linguistic information in the speech signals and generate images semantically consistent with the input speech descriptions; §III - the embedding feature is used to synthesize the corresponding images with semantic consistency; the diagram of the proposed algorithm is illustrated in Fig. 2.).
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Ho to include receiving a first type of data and transforming and outputting the first type of data into a second type of data that is different from the first type, which is the same type as that output from the main task, for the purpose of efficiently converting speech or text into images using machine learning models and encoder-decoder frameworks with embeddings, thereby improving image generation models, as taught by Li (Fig. 1, 2).
Claim 14:
Claim 14 corresponds to claim 7, and thus Ho and Li teach or suggest the limitations of claim 14 as well.
Claim 8:
Ho further teaches or suggests wherein the machine learning model includes: a first encoder that extracts a first feature vector; a second encoder that extracts a second feature vector by using a second type of data as input when learning the auxiliary task; and a decoder that outputs transformed data, and outputs reconstructed data based on the second feature vector input from the second encoder when learning the auxiliary task (see Fig. 1-3; §I, B - down-sample the source video before encoding, then restore, up-sample, and reconstruct images/videos after decoding; §II, A - after decoding the bitstream, the super resolution network removes compression artifacts and maps the decoded low-resolution (DLR) to its original HR at the decoding end. Our up-sampled low-resolution and features inside the network are enhanced for reconstruction; §II, B - restoration compensates for the lost information by video compression at low-resolution and then provides the feature-based information for reconstruction at high-resolution. We pass captured features from restoration to reconstruction and improve RR-DnCNN to a restoration-reconstruction u-shaped deep neural network. The restoration removes the compression artifacts from DLR by learning the residual between DLR and LR, which results in two directions: up-sampling features for reconstruction using deconvolutions and synthesizing a residual map to restore DLR to have LˆR. Subsequently, the reconstruction leverages up-sampled features from restoration to synthesize the residual between HR and up-sampled LˆR, then reconstruct up-sampled LˆR to have HˆR, as illustrated in Figure 2. The network transfers the captured features of each layer from restoration to reconstruction.).
Li teaches or suggests the remaining limitations: by using the first type of data as input when learning the main task; and based on the first feature vector input from the first encoder when learning the main task (see Fig. 1, 2; §I - as illustrated in Fig. 1, given the raw speech description "this bird has a red head and a white tail", the corresponding images can be synthesized, which means that the machine has understood the speech signal to some extent and been able to translate the semantic information in the speech signal into the image; §I, C - based on the correlation between audio and images, such as music and instruments, human voices and face appearances, audio-to-image generation aims to generate the images paired with the input audio signals. Speech-to-image translation aims to capture the linguistic information in the speech signals and generate images semantically consistent with the input speech descriptions; §III - the embedding feature is used to synthesize the corresponding images with semantic consistency; the diagram of the proposed algorithm is illustrated in Fig. 2.).
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Ho to include using the first type of data as input when learning the main task, and outputting transformed data based on the first feature vector input from the first encoder when learning the main task, for the purpose of efficiently converting speech or text into images using machine learning models and encoder-decoder frameworks with embeddings, thereby improving image generation models, as taught by Li (Fig. 1, 2).
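To make the claimed two-encoder arrangement concrete, the following is a minimal sketch under the combination as mapped above, assuming speech features as the first type of data and images as the second type per Li. The class, layer choices, and dimensions (TwoEncoderModel, speech_dim, image_dim, latent) are hypothetical and are not taken from either reference.

    # Illustrative sketch only: two encoders feeding one shared decoder.
    import torch.nn as nn

    class TwoEncoderModel(nn.Module):
        def __init__(self, speech_dim=128, image_dim=1024, latent=64):
            super().__init__()
            self.encoder_1 = nn.Linear(speech_dim, latent)  # E1(.; alpha): first type in
            self.encoder_2 = nn.Linear(image_dim, latent)   # E2(.; gamma): second type in
            self.decoder = nn.Linear(latent, image_dim)     # D(.; beta): shared decoder

        def transform(self, y):
            # Main task: first type (e.g., speech) -> second type (e.g., image).
            return self.decoder(self.encoder_1(y))

        def reconstruct(self, x):
            # Auxiliary task: second type -> second type (reconstruction).
            return self.decoder(self.encoder_2(x))

Because the decoder is shared, the auxiliary reconstruction task trains the same decoder weights β used by the main transformation task, consistent with the mapping of claims 8-11 above.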
Claim 9:
Ho further teaches or suggests wherein the machine learning model for the main task is expressed by Equation 6 below: X̂_Y = D(E1(Y; α); β) [Equation 6]; and an objective function L is expressed by Equation 7 below: L = ‖X − X̂_Y‖ [Equation 7], where X: second type of data; E1: neural network constituting first encoder; α: weight of neural network constituting first encoder; D: neural network constituting decoder; and β: weight of neural network constituting decoder (see Fig. 1-3; §II, A - residual learning is applied to speed up the network convergence, defined as: LˆR, Rres, Rrec = h(DLR) (1), where Rres represents the inferred residual between LR and DLR for restoration; meanwhile, Rrec represents the inferred residual between up-sampled LˆR and HR. Inside our network, DLR is restored to have LˆR as: LˆR = DLR + Rres (2), then up-sampled by deconvolution and combined with the reconstruction residual Rrec to obtain the final HˆR as: HˆR = Deconvolution(LˆR) + Rrec (3).).
Li further teaches or suggests the transformation for performing the main task, Ltransformation, with Y: first type of data; and X̂_Y: transformed data (see Fig. 1, 2; §I - as illustrated in Fig. 1, given the raw speech description "this bird has a red head and a white tail", the corresponding images can be synthesized, which means that the machine has understood the speech signal to some extent and been able to translate the semantic information in the speech signal into the image; §I, C - based on the correlation between audio and images, such as music and instruments, human voices and face appearances, audio-to-image generation aims to generate the images paired with the input audio signals. Speech-to-image translation aims to capture the linguistic information in the speech signals and generate images semantically consistent with the input speech descriptions; §III - the embedding feature is used to synthesize the corresponding images with semantic consistency; the diagram of the proposed algorithm is illustrated in Fig. 2.).
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Ho to include the transformation for performing the main task, Ltransformation, with Y: first type of data and X̂_Y: transformed data, for the purpose of efficiently converting speech or text into images using machine learning models and encoder-decoder frameworks with embeddings, thereby improving image generation models, as taught by Li (Fig. 1, 2).
Claim 10:
Ho further teaches or suggests wherein the machine learning model for the auxiliary task is expressed by Equation 8 below: X̂_X = D(E2(X; γ); β) [Equation 8]; and an objective function Lreconstruction for performing the auxiliary task is expressed by Equation 9 below: Lreconstruction = ‖X − X̂_X‖ [Equation 9], where E2: neural network constituting second encoder; γ: weight of neural network constituting second encoder; and X̂_X: reconstructed data (see Fig. 1-3; §II, A - residual learning is applied to speed up the network convergence, defined as: LˆR, Rres, Rrec = h(DLR) (1), where Rres represents the inferred residual between LR and DLR for restoration; meanwhile, Rrec represents the inferred residual between up-sampled LˆR and HR. Inside our network, DLR is restored to have LˆR as: LˆR = DLR + Rres (2), then up-sampled by deconvolution and combined with the reconstruction residual Rrec to obtain the final HˆR as: HˆR = Deconvolution(LˆR) + Rrec (3).).
Claim 11:
Ho further teaches or suggests wherein optimized weights (α*, β*, γ*) of the machine learning model for performing both the main task and the auxiliary task are expressed through Equation 10 below: (α*, β*, γ*) = argmin_(α, β, γ)(Ltransformation + k · Lreconstruction) [Equation 10], where k: weight for importance between objective function of main task and objective function of auxiliary task (see §II, C - we add loss weights of λ and μ to balance optimizing errors between restoration and reconstruction. The total loss function is defined as: L = λ · Lrestoration + μ · Lreconstruction (5), where Lrestoration minimizes the error between (LR − DLR) and Rres, while Lreconstruction minimizes the error between (HR − Deconvolution(LˆR)) and Rrec.).
Li further teaches or suggests performing the main task, Ltransformation, as the objective function of the main task (see Fig. 1, 2; Equations 6-9; §I - as illustrated in Fig. 1, given the raw speech description "this bird has a red head and a white tail", the corresponding images can be synthesized, which means that the machine has understood the speech signal to some extent and been able to translate the semantic information in the speech signal into the image; §I, C - based on the correlation between audio and images, such as music and instruments, human voices and face appearances, audio-to-image generation aims to generate the images paired with the input audio signals. Speech-to-image translation aims to capture the linguistic information in the speech signals and generate images semantically consistent with the input speech descriptions; §III - the embedding feature is used to synthesize the corresponding images with semantic consistency; the diagram of the proposed algorithm is illustrated in Fig. 2; §III, D - the objective function between image and speech encoder is defined as set forth therein.).
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method taught in Ho to include performing the main task with Ltransformation as the objective function of the main task, for the purpose of efficiently converting speech or text into images using machine learning models and encoder-decoder frameworks with embeddings, thereby improving image generation models, as taught by Li (Fig. 1, 2).
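For clarity of the record, Equations 6-10 as best understood are restated together below in LaTeX notation; the symbol assignments follow the variable definitions recited in claims 9-11 and reflect the Examiner's best reading of the garbled claim text:

    \hat{X}_Y = D(E_1(Y;\alpha);\beta), \qquad
    L_{\mathrm{transformation}} = \lVert X - \hat{X}_Y \rVert

    \hat{X}_X = D(E_2(X;\gamma);\beta), \qquad
    L_{\mathrm{reconstruction}} = \lVert X - \hat{X}_X \rVert

    (\alpha^*, \beta^*, \gamma^*) = \operatorname*{arg\,min}_{\alpha,\beta,\gamma}
      \bigl( L_{\mathrm{transformation}} + k\, L_{\mathrm{reconstruction}} \bigr)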
Response to Arguments
Rejections under 35 U.S.C. §§ 102 and 103:
Applicant argues Ho fails to disclose “wherein the machine learning model is trained to perform a task of receiving data in which a part of the original data is damaged or removed, and restoring and outputting the damaged or removed data part as a main task, and is trained to perform a task of receiving original data and reconstructing and outputting the received original data as an auxiliary task.”
The Examiner respectfully disagrees.
Ho teaches that the degradation-aware technique breaks the stage to DLR → LR → HR, where LR is treated as a transitional ground-truth. §II, A. As an advantage, our up-sampled low-resolution and features inside the network are enhanced for reconstruction. Id. Our network is thus more robust than other works directly synthesizing HR from DLR. Id. Further, the restoration compensates for the lost information by video compression at low-resolution and then provides the feature-based information for reconstruction at high resolution. §II, B. The restoration removes the compression artifacts from DLR by learning the residual between DLR and LR, which results in two directions: up-sampling features for reconstruction using deconvolutions and synthesizing a residual map to restore DLR to have LˆR. Id. Subsequently, the reconstruction leverages up-sampled features from restoration to synthesize the residual between HR and up-sampled LˆR, then reconstruct up-sampled LˆR to have HˆR, as illustrated in Figure 2. Id. The Examiner notes Ho discloses a trained model for performing a main function and outputting data as well as a trained model for performing a secondary function and outputting data.
Applicant argues neither Ho nor Li, alone or in combination, teaches or suggests "wherein the machine learning model is trained to perform a task of receiving a first type of data, and transforming and outputting the first type of data into a second type of data that is different from the first type of data, which is the same type as that output from the main task, and reconstructing and outputting the received second type of data as an auxiliary task."
The Examiner respectfully disagrees.
Ho teaches that the degradation-aware technique breaks the stage to DLR → LR → HR, where LR is treated as a transitional ground-truth. §II, A. As an advantage, our up-sampled low-resolution and features inside the network are enhanced for reconstruction. Id. Our network is thus more robust than other works directly synthesizing HR from DLR. Id. Further, the restoration compensates for the lost information by video compression at low-resolution and then provides the feature-based information for reconstruction at high resolution. §II, B. The restoration removes the compression artifacts from DLR by learning the residual between DLR and LR, which results in two directions: up-sampling features for reconstruction using deconvolutions and synthesizing a residual map to restore DLR to have LˆR. Id. Subsequently, the reconstruction leverages up-sampled features from restoration to synthesize the residual between HR and up-sampled LˆR, then reconstruct up-sampled LˆR to have HˆR, as illustrated in Figure 2. Id. The Examiner notes Ho teaches receiving a type of data as a main function and receiving a type of data and reconstructing it as a secondary function. Li teaches that, as illustrated in Fig. 1, given the raw speech description "this bird has a red head and a white tail", the corresponding images can be synthesized, which means that the machine has understood the speech signal to some extent and been able to translate the semantic information in the speech signal into the image. §I. Further, based on the correlation between audio and images, such as music and instruments, human voices and face appearances, audio-to-image generation aims to generate the images paired with the input audio signals. §I, C. Further, speech-to-image translation aims to capture the linguistic information in the speech signals and generate images semantically consistent with the input speech descriptions. Id. Further, the embedding feature is used to synthesize the corresponding images with semantic consistency; the diagram of the proposed algorithm is illustrated in Fig. 2. §III. Li thus teaches transforming one type of data into another type of data as a main function, the latter type being the output of that main function.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew T McIntosh whose telephone number is (571)270-7790. The examiner can normally be reached M-Th 8:00am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW T MCINTOSH/Primary Examiner, Art Unit 2144