Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4, 7-8, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Langoju et al. (US-PGPUB 20230029188) in view of Cha et al. (US-PGPUB 20190353814).
Regarding claim 1, Langoju discloses an image processing system, (100 in Fig. 1), comprising:
a trained artifact estimation network, (108 in Fig. 1), including a plurality of stages, the artifact estimation network trained to estimate artifacts in a medical image, (see at least: Par. 0027-0028, Training module 110 may include instructions …for training the one or more neural networks in a first training stage, and … for training the one or more neural networks in a second training stage, [i.e., trained artifact estimation network implicitly includes a plurality of stages, “first training stage and second training stage”]. Further, from Fig. 2A, Par. 0033, the image processing system 202 of FIG. 2, trains the noise reduction neural network to detect and reduce or optionally remove noise from a medical image, [i.e., the artifact estimation network, is trained, “training noise reduction neural network”, to estimate artifacts in a medical image, “detecting or estimating noise from a medical image”]); and
a processor, (104 in Fig. 1), communicably coupled to a non-transitory memory, (106 in Fig. 1), storing the artifact estimation network, (Par. 0027, Non-transitory memory 106 may store a neural network module 108 …), the memory including instructions that when executed, cause the processor to:
receive a medical image, (see at least: Fig. 1, Par. 0025, image processing system 102 … can receive images from the medical imaging system 100, [i.e., receive a medical image, “implicit by receiving images from the medical imaging system”]);
generate an estimated artifact image from the medical image using the trained artifact estimation network, (see at least: Fig. 2A, Par. 0023, noise reduction neural network model may be trained and deployed to output an image with less noise from an input comprising a noisy medical image; and Par. 0033, 0036, noise reduction neural network training system 200 may include a noise generator 214, which may be used to add noise to medical images, [i.e., generate an estimated artifact image from the medical image, "implicit by adding noise to the medical images to output an image with less noise", using the trained artifact estimation network, "noise reduction neural network model"]);
generate an artifact-reduced image, (see at least: Fig. 6, input CT image 602; and reduced-noise images 258 in Fig. 2B, [i.e., generating an artifact-reduced image, "generating one or more reduced-noise images 258 by noise reduction neural network 254"]. Further, from Par. 0023, the noise reduction neural network model may be trained and deployed to output an image with less noise from an input comprising a noisy medical image, [i.e., the artifact-reduced image is a version of the medical image including a lesser amount of artifacts than the medical image, "implicit by outputting an image with less noise from an input comprising a noisy medical image"]); and
display the artifact-reduced image on a display device, (see at least: Fig. 1, Par. 0023, both structured and unstructured noise may be reduced or removed from a medical image by an image processing system, [i.e., the image processing system implicitly produces an artifact-reduced image]; and from Par. 0031, the display device 134 may enable a user to view medical images produced by a medical imaging system, [i.e., implicitly displaying the artifact-reduced image produced by the medical imaging system]);
wherein each stage of the trained artifact estimation network estimates artifacts of a different scale in the medical image, (see at least: par. 0024, In a first training stage, the noise reduction neural network model may be trained by an exemplary first stage network training system shown in FIG. 2A, and in a second training stage, the noise reduction neural network model may be trained by an exemplary second stage network training system shown in FIG. 2B; and from Par. 0061, the noise reduction neural network may include one or more convolutional layers, … comprising a plurality of weights, wherein the values of the weights are learned during a training procedure, [i.e., wherein each stage of the trained artifact estimation network estimates artifacts of a different scale in the medical image, “each stage of the noise reduction neural network implicitly includes one or more convolutional layers for estimating artifacts of different weights in the medical image”]).
Langoju does not expressly disclose generating an artifact-reduced image by subtracting the estimated artifact image from the medical image.
However, Cha discloses generating an artifact-reduced image by subtracting the estimated artifact image from the medical image, (see at least: Fig. 7, Par. 0057, adaptive subtraction module 620 may be utilized to subtract artifact image 610 from migrated image 608, producing artifact-reduced image 61, [i.e., generating an artifact-reduced image, "producing artifact-reduced image 61", by subtracting the estimated artifact image, "artifact image 610", from the medical image, "image 608"]).
Langoju and Cha are combinable because they are both concerned with image-based noise reduction. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Langoju to include the adaptive subtraction module 620, as taught by Cha, in Langoju's noise reduction neural network, in order to produce an artifact-reduced image, (Cha, Par. 0057).
Regarding claim 3, the combined teaching of Langoju and Cha as a whole discloses the limitations of claim 1.
Langoju further discloses wherein: the artifact estimation network includes at least a first stage and a second stage, each of the first stage and the second stage including an input layer, a plurality of convolutional layers, and a stage output layer; (see at least: Figs. 2A-3B, Par. 0023-0024, the noise reduction neural network model may be trained in accordance with a multi-stage, deep learning training method, (e.g., first stage network training system shown in FIG. 2A, and second stage network training system shown in FIG. 2B), [i.e., the artifact estimation network includes at least a first stage, (Fig. 2A) and a second stage, (Fig. 2B)]. Further, Par. 0062, the noise reduction neural network maps the input image to an output image by propagating the input image from the input layer, through one or more hidden layers, until reaching an output layer of the noise reduction neural network, [i.e., each of the first stage and the second stage implicitly including an input layer, a plurality of convolutional layers, and a stage output layer]),
the first stage estimates and reduces local artifact data from the medical image, (see at least: Fig. 5b, par. 0047-0048, output CT image 530 of the partially trained noise reduction neural network 222, where some unstructured noise has been reduced or removed from the first CT image 500, [i.e., the first stage estimates and reduces local artifact data from the medical image, “implicit by estimating and reducing unstructured noise in CT image 500”]), and
the second stage estimates and reduces global artifact data from the medical image, (see at least: Fig. 5c, par. 0049, output CT image 560 of the trained noise reduction neural network 254, where both the unstructured noise and the structured noise are reduced with respect to the first CT image 500 and the second CT image 530, [i.e., the second stage estimates and reduces global artifact data from the medical image, “both the unstructured noise and the structured noise are reduced in the CT image 500”]).
Regarding claim 4, the combined teaching of Langoju and Cha as a whole discloses the limitations of claim 3.
Langoju further discloses wherein the local artifact data includes noise and ringing artifacts, (see at least: Par. 0023, implicit by unstructured noise), and the global artifact data includes streaking artifacts and motion artifacts, (see at least: par. 0023, implicit by both the unstructured noise and the structured noise).
Regarding claim 7, the combined teaching of Langoju and Cha as a whole discloses the limitations of claim 3.
Langoju further discloses that the plurality of convolutional layers of each of the first stage and the second stage include a plurality of nodes, (see at least: Figs. 2A-2B, where the first training stage system 200 and the second training stage system 250 of the noise reduction neural network implicitly include a plurality of nodes).
The combined teaching of Langoju and Cha as a whole does not expressly disclose wherein nodes of the plurality of convolutional layers of the second stage are configured to have a larger reception field than nodes of the plurality of convolutional layers of the first stage.
At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to configure the nodes of the plurality of convolutional layers of the second stage with a larger reception field than the nodes of the plurality of convolutional layers of the first stage. Applicant has not disclosed that having nodes of the second stage with a larger reception field, compared to the nodes of the first stage, provides an advantage, is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Applicant's invention to perform equally well either with the noise reduction neural network having the nodes of the plurality of convolutional layers of the first and second stages, as taught by Langoju, or with the nodes of the plurality of convolutional layers of the second stage configured to have a larger reception field than the nodes of the plurality of convolutional layers of the first stage, because both configurations perform the same function of removing noise from medical images, (Langoju, Par. 0001).
Regarding claim 8, the combined teaching of Langoju and Cha as a whole discloses the limitations of claim 3.
Langoju further discloses wherein further instructions are stored in the memory that when executed, cause the processor to:
during training of the artifact estimation network: input a noisy medical image into the artifact estimation network, the noisy medical image a combination of a high-quality medical image and one or more artifact images, (see at least: Fig. 2A, input first training dataset, "a noisy medical image", into the training module 204, where the first training dataset is implicitly a combination of images 212, "high-quality medical image", and images 216, 218, "one or more artifact images");
backpropagate a loss between an artifact image outputted by the artifact estimation network and the one or more synthesized artifact images, (see at least: Par. 0056, performing a difference (loss function) between the output image, "artifact image", and the target (e.g., ground truth) image of the relevant image pair, "one or more synthesized artifact images", where the difference (or loss), as determined by the loss function, may be back-propagated through the neural learning network, [i.e., backpropagate a loss, "difference", between an artifact image, "output image", outputted by the artifact estimation network and the one or more synthesized artifact images, "target (e.g., ground truth) image of the relevant image pair"]); and
adjust parameters of both of the first stage and the second stage of the artifact estimation network based on the backpropagated loss, (see at least: Par. 0064, difference (or loss), as determined by the loss function, may be back-propagated through the neural learning network to update the weights (and biases) of the convolutional layers, [i.e., adjust parameters of both of the first stage and the second stage of the artifact estimation network, “implicit by updating the weights”, based on the backpropagated loss, “difference or loss”]).
Regarding claim 13, the combined teaching of Langoju and Cha as a whole discloses the limitations of claim 8.
Langoju further discloses wherein the one or more artifact images are synthesized artifact images, (see at least: Fig. 2A, combining images 216, 218, [i.e., the one or more artifact images are synthesized artifact images]).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Langoju and Cha, as applied to claim 1 above, and further in view of Teichner et al. (US-PGPUB 20210004641).
The combined teaching of Langoju and Cha as a whole discloses the limitations of claim 1.
Langoju further discloses wherein each stage of the artifact estimation network includes a first plurality of convolutional layers, (see at least: Par. 0061, in some embodiments, the noise reduction neural network may include one or more convolutional layers, [i.e., each stage of the artifact estimation network implicitly includes a first plurality of convolutional layers]).
The combined teaching of Langoju and Cha as a whole does not expressly disclose the first plurality of convolutional layers being organized into a second plurality of residual blocks, with residual connections used to bypass one or more convolutional layers within the stage.
However, Teichner discloses the plurality of convolutional layers being organized into a second plurality of residual blocks, with residual connections used to bypass one or more convolutional layers within the stage, (see at least: Par. 0045, the CNN 200 uses residual blocks that include convolution layers that only use stride 1, which allows the CNN 200 to skip some residual blocks, [i.e., the plurality of residual blocks implicitly enables the CNN 200 to bypass (skip) some residual blocks]).
Langoju, Cha, and Teichner are combinable because they are all concerned with machine learning based feature(s) detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Langoju and Cha to include residual blocks, as taught by Teichner, in order to enable the CNN to skip some residual blocks based on the convolution layers, (Teichner, Par. 0045).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Langoju and Cha, as applied to claim 1 above, and further in view of Mao et al. (US-PGPUB 20200279352).
The combined teaching of Langoju and Cha as a whole discloses the limitations of claim 3.
Langoju further discloses wherein the artifact-reduced image is displayed on the display device during an examination of a subject of the medical image, (see at least: Fig. 1, Par. 0023, 0031, "see the rejection of claim 1 for more details").
The combined teaching of Langoju and Cha as a whole does not expressly disclose that the artifact-reduced image is displayed in real time.
However, Mao discloses displaying the artifact-reduced image in real time, (see at least: Par. 0005, displaying the final noise-reduced image in real-time).
Langoju, Cha, and Mao are combinable because they are all concerned with machine learning based feature(s) detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Langoju and Cha to display the final noise-reduced image in real time, as taught by Mao, in order to observe whether the final noise-reduced image preserves the quantitative information available in the original images, (Mao, Par. 0049).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Langoju et al. (US-PGPUB 20230029188) in view of Teichner et al. (US-PGPUB 20210004641).
Langoju discloses a residual neural network trained to reduce an amount of artifacts in a medical image, the residual neural network comprising a plurality of stages, (see at least: Par. 0023, the noise reduction neural network model may be trained in accordance with a multi-stage training method to output an image with less noise), the plurality of stages including at least:
a first stage that takes as input the medical image and generates a first version of the medical image having a reduced number of artifacts of a first scale, (see at least: Par. 0024, in a first training stage, the noise reduction neural network model may be trained by an exemplary first stage network training system shown in Fig. 2A. Further, as shown in Fig. 2A, the first stage takes as input the noise profile 1 image 212, "i.e., input the medical image", and Par. 0041, the partially trained noise reduction neural network 222 may be used to generate a set of images with ultra-low noise profile images 224 from the noise profile 1 images 212; and from Par. 0061, the noise reduction neural network may include one or more convolutional layers, which in turn comprise one or more convolutional filters, and the convolutional filters may comprise a plurality of weights, wherein the values of the weights are learned during a training procedure, [i.e., a first stage, (200 in Fig. 2A), that takes as input the medical image, "images 212", and generates a first version of the medical image having a reduced number of artifacts, "images 224", of a first scale, "a first weight implicitly learned during a training procedure at the first stage"; see also Fig. 5b]); and
a second stage that takes as input the first version of the medical image, and generates a second version of the medical image having a reduced number of artifacts of both of the first scale and a second scale, (see at least: Par. 0024, in a second training stage, the noise reduction neural network model may be trained by an exemplary second stage network training system shown in Fig. 2B. Further, as shown in Fig. 2B, the second stage 250 takes as input images 224, "first version of the medical image", and generates reduced-noise images 258, "second version of the medical image having a reduced number of artifacts". Further, from Par. 0072-0073, training the partially trained noise reduction network on the image pairs may include using a first loss function to calculate a first weight adjustment, and using a second loss function to calculate a second weight adjustment, [i.e., the generated second version of the medical image, "reduced-noise images 258", having a reduced number of artifacts of both of the first scale and a second scale, "implicit by the first loss function and second loss function"]; see also Fig. 5c).
Langoju does not expressly disclose a residual neural network.
However, Teichner discloses the residual neural network, (see at least: Par. 0045, the CNN 200 uses residual blocks that include convolution layers that only use stride 1, which allows the CNN 200 to skip some residual blocks, [i.e., the CNN 200 is implicitly a residual neural network]).
Langoju and Teichner are combinable because they are both concerned with machine learning based feature(s) detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Langoju to include residual blocks, as taught by Teichner, in order to enable the CNN to skip some residual blocks based on the convolution layers, (Teichner, Par. 0045).
Allowable Subject Matter
Claims 5-6, and 9-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With respect to claim 5, the prior art of record, alone or in reasonable combination, does not teach or suggest the following underlined limitation(s) (in consideration of the claim as a whole):
“combine feature maps of each output node of the first stage to create a first stage-specific artifact image, the first stage-specific artifact image including local artifact data of the medical image, and not including image data of anatomical features of the medical image”
The relevant prior art of record, Langoju (US-PGPUB 20230029188), discloses wherein further instructions are stored in the memory that when executed, cause the processor to:
during generation of the artifact-reduced image from the medical image using the trained artifact estimation network: combine feature maps of each output node of the first stage to create a first stage-specific artifact image, (see at least: par. 0038-0041, each image pair of the training image pairs 206 and the test image pairs 208 may include an input image and a target image in a first input/target combination, “i.e., combine feature maps of each output node of the first stage”, to obtain image pairs comprising input images of one noise profile and target images of a different noise profile of the ROI of the patient, to thereby generate ultra-low noise profile images 224, “to create a first stage-specific artifact image”);
the first stage-specific artifact image including local artifact data of the medical image, and not including image data of anatomical features of the medical image, (see at least: Fig. 5b, Par. 0047-0048, output CT image 530 of the partially trained noise reduction neural network 222, where some unstructured noise has been reduced or removed from the first CT image 500, [i.e., the first stage-specific artifact image implicitly including local artifact data of the medical image, "unstructured noise in CT image 500", and image data of anatomical features of the medical image, "anatomical image 530"]).
The claim language, however, goes beyond these details to define that the first stage-specific artifact image does not include image data of anatomical features of the medical image; this claim limitation is viewed as allowable over the prior art.
A further prior art of record, Cha et al. (US-PGPUB 20190353814), discloses subtracting the first stage-specific artifact image from the medical image to generate a partially cleaned image, the partially cleaned image a version of the medical image where local artifacts have been reduced, (see at least: Fig. 7, Par. 0057, adaptive subtraction module 620 may be utilized to subtract artifact image 610 from migrated image 608, producing artifact-reduced image 61, [i.e., generating an artifact-reduced image, "producing artifact-reduced image 61", by subtracting the estimated artifact image, "artifact image 610", from the medical image, "image 608", where the artifact-reduced image 61 is implicitly the version of the medical image where local artifacts have been reduced]); but fails to teach or suggest, either alone or in combination with the other cited references, the above limitations (as combined with the other claimed limitations).
Regarding claims 6 and 9-12, these claims are also in condition for allowance based at least on their dependency from claim 5.
The following is a statement of reasons for the indication of allowable subject matter:
-- Claim 15 is allowable over the prior art of record
-- Claims 16-19 are allowable in view of their dependency from claim 15.
With respect to claim 15, the prior art of record, alone or in reasonable combination, does not teach or suggest the following underlined limitation(s) (in consideration of the claim as a whole):
“combining the first set of estimated local artifact images with the second set of estimated global artifact images, to create a combined artifact image; subtracting the second set of estimated global artifact images from the partially cleaned image to generate an artifact-reduced image, the artifact-reduced image including a reduced amount of artifacts of both a local scale and a global scale; backpropagating a loss between the combined artifact image and the plurality of ground truth, target artifact images of the training image pair through a second plurality of convolutional layers of the second stage and a first plurality of convolutional layers of the first stage; and adjusting both of a first set of parameters at a first plurality of nodes of the first plurality of convolutional layers and a second set of parameters at a second plurality of nodes of the second plurality of convolutional layers based on the backpropagated loss”.
Langoju (US-PGPUB 20230029188) discloses a method for training a neural network to reduce an amount of artifacts in a medical image, the method comprising:
receiving a set of training image pairs, each training image pair including a plurality of ground truth, target artifact images, and a noisy medical image comprising a high-quality medical image combined with the plurality of ground truth, target artifact images as an input image, (see at least: Fig. 2A, Par. 0035-0038, implicitly receiving, by the noise reduction neural network 202, a number of training image pairs 206 and test image pairs 208, where the training image pairs 206 and test image pairs 208 are generated from dataset generator 210, which may pair images of the noise profile 2 images 216 with corresponding images of the noise profile 3 images 218, [i.e., receiving a set of training image pairs, "number of training image pairs 206 and test image pairs 208", each training image pair implicitly including a plurality of ground truth, target artifact images, "noise profile 2 images 216 and noise profile 3 images 218"]. Further, the training image pairs 206 and the test image pairs 208 are both generated from a parent set of noise profile 1 images 212, which may have a relatively low amount of noise, [i.e., the set of images 212 corresponds to the noisy medical image comprising a high-quality medical image combined with the plurality of ground truth]);
inputting an input image of a training image pair of the set of training image pairs into a first stage of the residual neural network, (see at least: Fig. 2A, Par. 0033-0034, training module 204, (first stage), includes a first training dataset comprising a plurality of training pairs of data, such as image pairs divided into training image pairs 206 and test image pairs 208, which are input to the noise reduction neural network 202, as shown in Fig. 2A);
estimating, at the first stage of the residual neural network, a first set of local artifact images of the plurality of ground truth, target artifact images, (see at least: Fig. 5b, par. 0047-0048, output CT image 530 of the partially trained noise reduction neural network 222, where some unstructured noise has been reduced or removed from the first CT image 500, [i.e., the first stage estimates and reduces local artifact data from the medical image, “implicit by estimating and reducing unstructured noise in CT image 500”]);
inputting the partially cleaned image into a second stage of the residual neural network, and estimating, at the second stage of the residual neural network, a second set of global artifact images of the plurality of ground truth, target artifact images, (see at least: Fig. 5c, par. 0049, output CT image 560 of the trained noise reduction neural network 254, where both the unstructured noise and the structured noise are reduced with respect to the first CT image 500 and the second CT image 530, [i.e., the second stage estimates and reduces global artifact data from the medical image, “both the unstructured noise and the structured noise are reduced in the CT image 500”]).
However, Langoju fails to teach or suggest, either alone or in combination with the other cited references, "combining the first set of estimated local artifact images with the second set of estimated global artifact images, to create a combined artifact image; subtracting the second set of estimated global artifact images from the partially cleaned image to generate an artifact-reduced image, the artifact-reduced image including a reduced amount of artifacts of both a local scale and a global scale …."
Cha et al. (US-PGPUB 20190353814) discloses subtracting the first set of estimated local artifact images from the noisy medical image to generate a partially cleaned image, the partially cleaned image including a reduced amount of artifacts of a local scale, (see at least: Fig. 7, Par. 0057, adaptive subtraction module 620 may be utilized to subtract artifact image 610 from migrated image 608, producing artifact-reduced image 61); but fails to teach or suggest, either alone or in combination with the other cited references, the above limitations (as combined with the other claimed limitations).
An additional prior art of record, Yamada (US-PGPUB 20250104291), discloses training a neural network model, wherein the neural network model includes a global noise prediction unit and a local noise prediction unit, (Par. 0117), and composing the predicted global noise and the final local noise into a combined noise using the layout information, and denoising the initial image representation using the combined noise to provide a reduced-noise image, (Par. 0009). However, while disclosing composing the predicted global noise and the final local noise into a combined noise, Yamada fails to teach or suggest, either alone or in combination with the other cited references, "combining the first set of estimated local artifact images with the second set of estimated global artifact images, to create a combined artifact image; subtracting the second set of estimated global artifact images from the partially cleaned image to generate an artifact-reduced image, the artifact-reduced image including a reduced amount of artifacts of both a local scale and a global scale …."
Teichner et al. (US-PGPUB 20210004641) discloses the residual neural network, (see at least: Par. 0045, the CNN 200 uses residual blocks that include convolution layers that only use stride 1, which allows the CNN 200 to skip some residual blocks, [i.e., the CNN 200 is implicitly a residual neural network]); but fails to teach or suggest, either alone or in combination with the other cited references, the above limitations (as combined with the other claimed limitations).
Regarding claims 16-19, claims 16-19 are also allowable based at least on their dependency from claim 15.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI whose telephone number is (571)272-0273. The examiner can normally be reached 9:00am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMARA ABDI/Primary Examiner, Art Unit 2668 03/06/2026