DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The Amendment filed 2 October 2025 (hereinafter “the Amendment”) has been entered and considered. In the Amendment, claims 21, 24-25, 35, 38-39, 41, and 44-46 were amended. Claims 21-27 and 35-47, all the claims pending in the application, are rejected. All new grounds of rejection set forth in the present action were necessitated by Applicant’s claim amendments; accordingly, this action is made final.
Response to Amendment
Claim Objections
In view of the amendments to claims 35 and 46, the claim objections are withdrawn.
Claim Rejections - 35 USC § 112
In view of the amendments to claims 24, 38, and 44, the claim rejections under 35 U.S.C. § 112 are withdrawn.
Prior Art Rejections
Independent claims 21, 35, and 41 have been amended to recite, in some variation, “applying, by the computing system, a clustering model to the plurality of embedding representations, wherein the clustering model comprises a feature space defining a plurality of centroids, wherein each centroid corresponds to one condition of the plurality of conditions”, and to specify that the classifying is performed based on proximity of the plurality of embedding representations to the plurality of centroids within the feature space. On pages 11-13 of the Amendment, Applicant contends that the applied references do not teach or suggest the newly added features of the independent claims. In support of this assertion, Applicant cites page 4 of Mao and argues that Mao teaches classifying lung nodule images based on mapping to a histogram and not “based on proximity of the plurality of embedding representations to the plurality of centroids within the feature space”, as claimed. The Examiner respectfully submits that Mao does indeed teach the newly added features of the independent claims.
As Applicant acknowledges, Mao discloses classifying lung nodule images based on mapping to a histogram. In the portions cited by the Applicant on page 12 of the Amendment, Mao additionally discloses that “to get the histogram representation h(x) of an image x, all local patch feature vectors of x are mapped onto the cluster center of the visual vocabulary, and each local feature is assigned with the label of its closest cluster center using Euclidean distance in feature space” (Section 5; emphasis added; see also Figure 5).
Here, Mao discloses that the clustering model comprises a “feature space” within which feature vectors (corresponding to the claimed embedding representations) are mapped to a “closest cluster center” based on a “distance in feature space” (Section 5; emphasis added; see also Figure 5). While the classifying is based on the histogram, the histogram is created based on this cluster center mapping process.
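For illustration only, the cluster-center mapping process described above (nearest-centroid assignment by Euclidean distance in feature space, followed by construction of a histogram of visual words) can be sketched as follows. This is a minimal hypothetical example prepared by the Examiner; the feature vectors and centroid values are invented and are not taken from any applied reference.

```python
import numpy as np

def assign_to_centroids(features, centroids):
    """Assign each feature vector to its closest centroid (Euclidean distance)."""
    # distances[i, j] = Euclidean distance from feature i to centroid j
    distances = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(distances, axis=1)

def histogram_representation(features, centroids):
    """Build a global histogram h(x): counts of local features per visual word."""
    labels = assign_to_centroids(features, centroids)
    return np.bincount(labels, minlength=len(centroids))

# Invented example: 4 local patch feature vectors, 2 cluster centers
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
features = np.array([[0.1, 0.2], [9.8, 10.1], [0.3, -0.1], [10.2, 9.9]])
print(histogram_representation(features, centroids))  # -> [2 2]
```

The histogram (here, two features per visual word) would then serve as the global image representation fed to a downstream classifier, consistent with the process Mao describes.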
In view of the foregoing, the Examiner maintains that Mao teaches the newly added features of the independent claims. Accordingly, the prior art rejections are maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 21-22, 25-27, 35-36, 39-42, and 45-47 are rejected under 35 U.S.C. 103 as being unpatentable over “Feature Representation Using Deep Autoencoder for Lung Nodule Image Classification” by Mao et al. (cited in parent U.S. Patent Application No. 16/810,513 and in IDS filed 3/27/23 in the present application; hereinafter “Mao”) in view of U.S. Patent Application Publication No. 2020/0242756 to Madabhushi et al. (hereinafter “Madabhushi”) and further in view of “Unsupervised Clustering of Quantitative Image Phenotypes Reveals Breast Cancer Subtypes with Distinct Prognoses and Molecular Pathways” by Wu et al. (hereinafter “Wu”).
As to independent claim 21, Mao discloses a method of classifying biomedical images (Abstract discloses that Mao is directed to “lung nodule image classification”) comprising: identifying, by a computing system, a first biomedical image of a first tissue section having a first region of interest; generating, by the computing system, a first plurality of tiles from a portion of the first biomedical image (Sections 3-4 disclose that “lung nodule image samples are used as input” and, more particularly, “decomposing a lung image into a plurality of small patches”, wherein the lung nodule image comprises lung tissues (first tissue section) having at least one nodule (first region of interest); see Fig. 3, which shows that the patches are tiles); feeding, by the computing system, the first plurality of tiles to an image reconstruction system (Sections 3-4 disclose that the “local patch set” is input to a deep autoencoder which extracts “local features”; see Fig. 4) comprising: an encoder block having a first set of weights to output a plurality of embedding representations (Section 4 and Fig. 4 show that the autoencoder comprises an “encoder” comprising input layer L1 and a hidden layer L2 “which converts an input image x into feature vector a” corresponding to one of the claimed embedding representations; by processing each of the patches, a plurality of embedding representations/feature vectors are obtained from hidden layer L3; Section 4 further discloses weight matrix We of the encoder), wherein the encoder block is established using (i) a second biomedical image derived from a second tissue section having a second region of interest (Section 4 discloses that “the network can be trained in a fine-tuning stage by minimizing the equation (4)” using N training images, one of which corresponds to the claimed second biomedical image of lung tissue (second tissue section) having at least one nodule (second region of interest)); a decoder block having a second set of weights to generate a plurality of reconstructed tiles corresponding to the plurality of embedding representations (Section 4 and Fig. 4 disclose that the autoencoder also comprises a “decoder” comprising hidden layer L4 and output layer L5 which converts each feature vector a into a corresponding “reconstructed image patch”, wherein the decoder comprises weights in weight matrix Wd); applying, by the computing system, a clustering model to the plurality of embedding representations, wherein the clustering model comprises a feature space defining a plurality of centroids, wherein each centroid corresponds to one condition of a plurality of conditions (Section 5 and Figure 5 disclose a clustering model comprising a “feature space” within which the feature vectors are mapped to a “closest cluster center” based on a “distance in feature space”, each cluster center forming an entry in a “visual vocabulary” which is a visual condition indicative of the nodule identified in the subsequent classifying step); classifying, by the computing system, the first biomedical image based on proximity of the plurality of embedding representations to the plurality of centroids within the feature space (Sections 5-6 disclose that “each lung nodule image can be represented globally by a histogram of visual words” formed by the above-described cluster center mapping process based on a “distance in feature space” between the feature vectors and cluster centers, wherein the “global representation of lung nodule image” is used to classify the lung nodule image using a trained “nodule type classifier”; see Fig. 5).
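For illustration only, the encoder/decoder structure attributed to Mao above (an encoder with weight matrix We converting a patch x into a feature vector a, and a decoder with weight matrix Wd reconstructing the patch) can be sketched as follows. The layer sizes, random weights, and sigmoid activation in this sketch are invented by the Examiner for illustration and are not taken from the reference.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented dimensions: a 16-pixel patch mapped to a 4-dimensional feature vector a
W_e = rng.standard_normal((4, 16))   # encoder weight matrix (cf. Mao's We)
W_d = rng.standard_normal((16, 4))   # decoder weight matrix (cf. Mao's Wd)

def encode(x):
    """Encoder: converts an input patch x into feature vector a (the embedding)."""
    return sigmoid(W_e @ x)

def decode(a):
    """Decoder: converts feature vector a into a reconstructed image patch."""
    return sigmoid(W_d @ a)

patch = rng.random(16)                # one invented image patch
a = encode(patch)                     # embedding representation
reconstruction = decode(a)            # reconstructed image patch
print(a.shape, reconstruction.shape)  # -> (4,) (16,)
```

In training, the weights would be adjusted to minimize the difference between `patch` and `reconstruction`, consistent with the fine-tuning stage Mao describes; processing every patch yields the plurality of embedding representations that are then clustered.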
Mao discloses that the patches are extracted in a manner in which “important tissues can be picked up and unrelated ones can be get rid of” (Section 4.1). That is, Mao contemplates the need to extract only the areas of the image that are of interest, but does not expressly disclose satisfying that need by annotation. Accordingly, Mao does not expressly disclose that the encoder uses (ii) an annotation identifying the second region of interest as associated with one of a plurality of conditions or that the method includes a step of storing, by the computing system, an output identifying the classification of the first biomedical image. Additionally, Mao does not expressly disclose that the images are classified into subtypes of cancer, such that the first biomedical image is classified as a subtype of cancer which is output and stored.
Madabhushi, like Mao, is directed to classifying medical images of tumors (Abstract). Madabhushi discloses that the image patches 311 extracted from the medical image for input to the classifier are breast cancer tumor regions 310 which have been annotated by an expert breast pathologist ([0040-0042] and Fig. 3). That is, Madabhushi discloses the use of (ii) an annotation identifying the second region of interest as associated with one of a plurality of conditions (patches from training images are labeled as a cancer tumor or not). Madabhushi further discloses that the classification output is stored ([0017, 0047, 0059]). That is, Madabhushi further discloses storing, by the computing system, an output identifying the classification of the first biomedical image.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mao to have an expert pathologist annotate the cancer tumor regions for extraction and to store the classifier output, as taught by Madabhushi, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Mao’s training patch extraction as modified by Madabhushi’s annotation of cancer tumor regions to aid in patch extraction can yield a predictable result of extracting the patches of highest interest since both references seek to do the same (Section 4.1 of Mao and [0041] of Madabhushi). Indeed, if the image patches have already been annotated by an expert pathologist as containing a tumor/nodule or not, as taught by Madabhushi, then Mao’s system would not need to perform the Superpixel algorithm for patch generation, thus saving time and computational resources at run-time. Moreover, Mao’s classification output as modified by Madabhushi’s storing of a classification output can yield a predictable result of preserving the ability to recall the output at a later time.
Wu, like Mao, is directed to “unsupervised clustering of quantitative imaging features” for analyzing cancer images (Abstract and “Materials and Methods” section). Specifically, Wu discloses that the authors “aim to discover novel breast cancer subtypes” using the clustering method, wherein the “final clusters identified as such correspond to imaging subtypes of breast cancer” (“Introduction” and “Materials and Methods” sections and Fig. 2). That is, Wu discloses that the images are classified into subtypes of cancer using the clustering algorithm, such that the first biomedical image is classified as a subtype of cancer which is output.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the proposed combination of Mao and Madabhushi to use the results of the clustered image embeddings to classify the images into novel subtypes of cancer, as taught by Wu, rather than known types, as contemplated by Mao, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have “provide[d] complementary prognostic information” by virtue of identifying “imaging subtypes [that] were distinct from established…subtypes” (“Discussion” section of Wu).
As to claim 22, Mao as modified above does not expressly disclose generating, by the computing system, a survival function over time for a subject from which the first tissue section is obtained, based on the classification of the first biomedical image to the subtype of cancer.
Madabhushi, like Mao, is directed to classifying medical images of tumors (Abstract). Madabhushi discloses that image patches 311 extracted from the medical image for input to the classifier are breast cancer tumor regions 310 which have been annotated by an expert breast pathologist ([0041] and Fig. 3). Next, a convolutional neural network identifies epithelial pixels in the annotated tumor regions (analogous to Mao’s classification; see [0021, 0041] of Madabhushi and 340 of Fig. 3). An orientation co-occurrence matrix 350 is generated followed by extraction of the AFOD-TS feature 360 from the co-occurrence matrix. Based thereon, Madabhushi discloses that a survival function over time is calculated for the patient (Fig. 5 and [0043]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mao to generate a survival function over time for the patient from which the specimen was collected based on the classification of pixels, as taught by Madabhushi, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Mao’s classification as modified by Madabhushi’s generation of a survival function over time subsequent to pixel classification can yield a predictable result of accurate patient prognosis ([0040] of Madabhushi). This would provide the patient more information to make decisions regarding quality of life, for example. Thus, a person of ordinary skill would have appreciated including in Mao’s classification algorithm the ability to subsequently predict survival over time of the patient since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As to claim 25, Mao as modified above further teaches that each condition of the plurality of conditions specifies a classification for a region of interest (Section 5 of Mao discloses that the “visual vocabulary is first constructed based on clustering all local patch descriptors (local feature representation)”, wherein each local patch is a region of interest, and wherein the entries of the visual vocabulary are visual conditions indicative of the nodule identified in the subsequent classifying step).
As to claim 26, Mao as modified above further teaches wherein identifying the first biomedical image further comprises receiving, via an imaging device, the first biomedical image corresponding to at least one tile of the first tissue section (Sections 3-4 disclose that “lung nodule image samples are used as input” and, more particularly, “decomposing a lung image into a plurality of small patches”, wherein the lung nodule image comprises lung tissues (first tissue section) having at least one nodule (first region of interest); see Fig. 3, which shows that the patches are tiles; Section 1 and Fig. 1 further disclose that the images are CT images necessarily received from a CT imaging device). Mao as modified above does not expressly disclose that the image from which the patches are extracted is a whole slide image (WSI) or that the tissue sample is stained for histopathological analysis.
However, Madabhushi discloses that the images are stained whole slide images ([0016-0020]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mao such that the images are WSIs stained for histopathological analysis, as taught by Madabhushi, to arrive at the claimed invention discussed above. It is predictable that doing so would have allowed for the cancer classification of Mao to be applied to different types of cancer such as “breast cancer”, as taught by Madabhushi ([0016]).
As to claim 27, Mao as modified above further teaches that the clustering model comprises a feature space defining a plurality of regions (Fig. 5 of Mao shows different regions of a feature space that are separated into clusters), each of the plurality of regions corresponding to at least one of a cellular morphology or a structural morphology for a respective subtype (Section 1 of Mao discloses that the clustered features are reflective of the different nodule types W, V, J, and P; Fig. 1 shows the different structural shapes (structural morphology) of the different nodules which are reflected in the clustered feature space) of cancer (“Introduction” and “Materials and Methods” sections and Fig. 2 of Wu disclose that the “final clusters identified as such correspond to imaging subtypes of breast cancer”; the reasons for combining the references are the same as those discussed above in conjunction with claim 21).
Independent claim 35 recites a system for classifying biomedical images, comprising: one or more processors coupled with memory ([0047] of Madabhushi discloses processor 710 coupled with memory 720 for performing the algorithm), configured to perform the steps recited in the method of independent claim 21. Accordingly, claim 35 is rejected for the reasons discussed above in conjunction with claim 21 and for the additional reason that it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mao to perform the disclosed algorithm using a processor and memory, as taught by Madabhushi, to arrive at the claimed invention discussed above. It is predictable that the proposed modification would have allowed the algorithm to be replicated by multiple users by virtue of embodying the steps of the algorithm in software.
Claims 36, 39, and 40 recite features nearly identical to those recited in claims 22, 25, and 26, respectively. Accordingly, claims 36 and 39-40 are rejected for the same reasons as those discussed above in conjunction with claims 22 and 25-26, respectively.
Independent claim 41 recites a non-transitory computer readable medium storing instructions that, when executed by a processor, cause the processor to perform a method ([0037] of Madabhushi discloses that the “method may be implemented as computer executable instructions” stored on “a computer-readable device”) comprising the steps recited in the method of independent claim 21. Accordingly, claim 41 is rejected for the reasons discussed above in conjunction with claim 21 and for the additional reason that it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mao to store instructions for performing the disclosed algorithm on a computer-readable medium, as taught by Madabhushi, to arrive at the claimed invention discussed above. It is predictable that the proposed modification would have allowed the algorithm to be replicated by multiple users by virtue of embodying the steps of the algorithm in software.
Claims 42 and 45-47 recite features nearly identical to those recited in claims 22 and 25-27, respectively. Accordingly, claims 42 and 45-47 are rejected for the same reasons as those discussed above in conjunction with claims 22 and 25-27, respectively.
Claims 23-24, 37-38, and 43-44 are rejected under 35 U.S.C. 103 as being unpatentable over Mao in view of Madabhushi and Wu and further in view of U.S. Patent Application Publication No. 2019/0371450 to Lou et al. (hereinafter “Lou”).
As to claim 23, Mao does not expressly disclose determining, by the computing system, a survival probability for a subject from which the first tissue section is obtained based on the classification.
Madabhushi, like Mao, is directed to classifying medical images of tumors (Abstract). Madabhushi discloses that image patches 311 extracted from the medical image for input to the classifier are breast cancer tumor regions 310 which have been annotated by an expert breast pathologist ([0041] and Fig. 3). Next, a convolutional neural network identifies epithelial pixels in the annotated tumor regions (analogous to Mao’s classification; see [0021, 0041] of Madabhushi and 340 of Fig. 3). An orientation co-occurrence matrix 350 is generated followed by extraction of the AFOD-TS feature 360 from the co-occurrence matrix. Madabhushi discloses that a prognosis of “unlikely” to experience recurrence or “likely” to experience recurrence is determined for the patient based on classification of epithelial regions in a tumor and a survival function over time is further calculated (Fig. 5 and [0043-0044, 0072]). That is, Madabhushi discloses determining, by the computing system, a survival function for a subject from which the first tissue section is obtained based on the classification (Fig. 5 and [0043-0044, 0072]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mao to generate a survival function over time for the patient from which the specimen was collected based on the classification of pixels, as taught by Madabhushi, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Mao’s classification as modified by Madabhushi’s survival function over time generation subsequent to pixel classification can yield a predictable result of accurate patient prognosis ([0040] of Madabhushi). Thus, a person of ordinary skill would have appreciated including in Mao’s classification algorithm the ability to subsequently predict survival over time of the patient since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
The proposed combination of Mao, Madabhushi and Wu does not expressly disclose that the survival function is a survival probability. However, Lou discloses that rather than a binary prediction (such as Madabhushi’s “likely” or “unlikely” prediction), survival may be predicted as “probability of survival as a function of time” ([0150]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Madabhushi’s survival function to include a probability of survival, as taught by Lou, to arrive at the claimed invention discussed above. Such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Madabhushi’s survival function and Lou’s probability of survival as a function of time perform the same general and predictable function, the predictable function being survival prediction over time. Indeed, Lou’s probability value carries more useful information than Madabhushi’s binary decision. Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself - that is in the substitution of Madabhushi’s survival function by replacing it with Lou’s probability of survival as a function of time. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
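For illustration only, the distinction discussed above between a binary “likely”/“unlikely” prediction and a probability of survival as a function of time can be sketched as follows. The constant-hazard (exponential) model, the hazard rate, and the threshold in this sketch are hypothetical choices by the Examiner and are not drawn from Madabhushi or Lou.

```python
import math

def survival_probability(t, hazard_rate=0.1):
    """Hypothetical survival function: probability of surviving past time t
    under a constant-hazard (exponential) model, S(t) = exp(-lambda * t)."""
    return math.exp(-hazard_rate * t)

def binary_prediction(t, hazard_rate=0.1, threshold=0.5):
    """Collapsing the probability to a binary 'likely'/'unlikely' call."""
    return "likely" if survival_probability(t, hazard_rate) >= threshold else "unlikely"

for t in (1, 5, 10):
    print(t, round(survival_probability(t), 3), binary_prediction(t))
# -> 1 0.905 likely
#    5 0.607 likely
#    10 0.368 unlikely
```

As the sketch shows, the probability value carries graded information over time that the binary call discards, which is consistent with the rationale above for substituting Lou's probability of survival as a function of time.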
As to claim 24, Mao does not expressly disclose assigning, by the computing system, the subject from which the first tissue section is obtained into one of a plurality of risk stratification groups, based on the survival probability determined using the classification.
Madabhushi, like Mao, is directed to classifying medical images of tumors (Abstract). Madabhushi discloses that image patches 311 extracted from the medical image for input to the classifier are breast cancer tumor regions 310 which have been annotated by an expert breast pathologist ([0041] and Fig. 3). Next, a convolutional neural network identifies epithelial pixels in the annotated tumor regions (analogous to Mao’s classification; see [0021, 0041] of Madabhushi and 340 of Fig. 3). An orientation co-occurrence matrix 350 is generated followed by extraction of the AFOD-TS feature 360 from the co-occurrence matrix. Madabhushi discloses that a prognosis of “unlikely” to experience recurrence or “likely” to experience recurrence is determined for the patient based on classification of epithelial regions in a tumor and a survival function over time is further calculated, the survival prediction including “risk groups [that] stratified the patients into high-risk-recurrence and low-risk-recurrence groups” (Fig. 5 and [0043-0044, 0072]). That is, Madabhushi discloses assigning, by the computing system, a subject from which the first tissue section is obtained into one of a plurality of risk stratification groups, based on the survival function (Fig. 5 and [0043-0044, 0072]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mao to generate a survival function over time and an assignment of the patient into a high-risk-recurrence or low-risk-recurrence group based on the classification of pixels, as taught by Madabhushi, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Mao’s classification as modified by Madabhushi’s survival function over time generation and risk stratification subsequent to pixel classification can yield a predictable result of accurate patient prognosis ([0040] of Madabhushi). Thus, a person of ordinary skill would have appreciated including in Mao’s classification algorithm the ability to subsequently predict survival over time of the patient since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
The proposed combination of Mao, Madabhushi and Wu does not expressly disclose that the survival function is a survival probability. However, Lou discloses that rather than a binary prediction (such as Madabhushi’s “likely” or “unlikely” prediction), survival may be predicted as “probability of survival as a function of time” ([0150]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Madabhushi’s survival function to include a probability of survival, as taught by Lou, to arrive at the claimed invention discussed above. Such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Madabhushi’s survival function and Lou’s probability of survival as a function of time perform the same general and predictable function, the predictable function being survival prediction over time. Further, Lou’s probability value carries more useful information than Madabhushi’s binary decision. Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself - that is in the substitution of Madabhushi’s survival function by replacing it with Lou’s probability of survival as a function of time. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
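For illustration only, the risk stratification discussed above for claim 24 (assigning a subject to a high-risk-recurrence or low-risk-recurrence group based on a survival probability) can be sketched as follows. The threshold and the per-patient probabilities are invented by the Examiner and are not taken from the references.

```python
def risk_group(survival_prob, threshold=0.5):
    """Hypothetical stratification: low survival probability -> high risk."""
    return "high-risk-recurrence" if survival_prob < threshold else "low-risk-recurrence"

# Invented survival probabilities for three hypothetical subjects
patients = {"A": 0.82, "B": 0.35, "C": 0.57}
groups = {pid: risk_group(p) for pid, p in patients.items()}
print(groups)
# -> {'A': 'low-risk-recurrence', 'B': 'high-risk-recurrence', 'C': 'low-risk-recurrence'}
```

This sketch simply thresholds the survival probability supplied by Lou's approach into the two risk groups Madabhushi describes; any number of groups or thresholds could be used in the same manner.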
Claims 37-38 recite features nearly identical to those recited in claims 23-24, respectively. Accordingly, claims 37-38 are rejected for the same reasons as those discussed above in conjunction with claims 23-24, respectively.
Claims 43-44 recite features nearly identical to those recited in claims 23-24, respectively. Accordingly, claims 43-44 are rejected for the same reasons as those discussed above in conjunction with claims 23-24, respectively.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN M CONNER whose telephone number is (571)272-1486. The examiner can normally be reached 10 AM - 6 PM Monday through Friday, and some Saturday afternoons.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Greg Morse can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEAN M CONNER/Primary Examiner, Art Unit 2663