Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
RESPONSE TO AMENDMENT
Applicant’s amendments filed on February 10, 2026 have been entered and made of record.
Currently pending claims: 1 and 3-8
Independent claims: 1, 7, and 8
Amended claims: 1 and 3-8
Canceled claim: 2
RESPONSE TO ARGUMENTS
This office action is responsive to Applicant’s Arguments/Remarks Made in an Amendment received on February 10, 2026.
In view of the amendment to the title filed on February 10, 2026, the objection to the specification is withdrawn.
Applicant’s Reply (February 10, 2026) includes substantive amendments to the claims. This Office action has been updated with new grounds of rejection addressing those amendments. Further, Applicant’s Arguments/Remarks with respect to independent claims 1, 7, and 8 have been considered but are moot because the arguments do not apply to the combination of references used in the current rejection; the amended limitations are addressed by newly cited art, Zeng et al. ("A simple low-dose x-ray CT simulation from high-dose scan." IEEE transactions on nuclear science 62.5 (2015): 2226-2233.), as explained in the body of the rejection below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 7, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. ("Convolutional neural network based metal artifact reduction in x-ray computed tomography." IEEE transactions on medical imaging 37.6 (2018): 1370-1381.) (hereinafter, “Zhang”) in view of Zeng et al. ("A simple low-dose x-ray CT simulation from high-dose scan." IEEE transactions on nuclear science 62.5 (2015): 2226-2233.) (hereinafter, “Zeng”).
Regarding claim 1, Zhang discloses a medical image processing apparatus comprising: processing circuitry configured to: acquire second image data in which a low-count artifact is reduced (Introduction [section 1, page 1370 right column last full paragraph] “…convolutional neural network (CNN) has been applied to medical imaging for low dose CT reconstruction and artifacts reduction”; Introduction [section 1, page 1371 left column paragraph 1] “the information from different correction methods is captured and the merits of these methods are fused, leading to a higher quality image (i.e. second image)”), by applying a trained machine learning model to first image data that is obtained by an X-ray CT scan (Experiments [section 4 page 1376 left column first full paragraph] “scanned on a Siemens SOMATOM Sensation 16 CT scanner with 120 kVp and 496 mAs using the helical scanning geometry. (i.e. first image)”), and
output image data based on the second image data (See figure 3 [page 1373 left column last full paragraph continued to right column first full paragraph] “Fig. 3 depicts the workflow of our CNN, which is comprised of an input layer, an output layer and L = 5 convolutional layers…nl convolution kernel with a fixed size of cl × cl . Cl (u) generates new feature maps based on the (l − 1)th layer’s output. For the last layer, feature maps are used to generate an image that is close to the target (i.e. image data).”)
[Figure 3 of Zhang: CNN workflow diagram (media_image1.png)]
, wherein
the machine learning model is trained by using training data that includes third image data (Training [section 2, page 1371 right column first full paragraph] “train a convolutional neural network for MAR. First, we generate metal-free (i.e. third image data), metal-inserted … CT images to create a database.”) and fourth image data (Training [section 2, page 1371 right column paragraph 3] “…metal-inserted CT images (i.e. fourth image data), where beam hardening and Poisson noise are simulated. To ensure that the trained CNN works for real cases…we simulate the metal artifacts based on clinical CT images.”), the third image data being reconstructed based on projection data that is obtained by an X-ray CT scan (page [1372 left column second to last paragraph] “The metal-free image (i.e. third image data) is reconstructed using filtered backprojection (FBP), and the image is assumed as reference and denoted as xref .”), the fourth image data being based on the projection data and including a generated low-count artifact (Experiments [section 4, page 1375 right column paragraph 1] “A 120 kVp x-ray source is simulated and each detector bin is expected to receive 2 × 107 photons in the case of blank scan [46]. There are 984 projection views over a rotation and 920 detector bins in a row. The distance between the x-ray source and the rotation center is 59.5 cm. The metal-free (i.e. third image data) and metal-inserted images (i.e. fourth image data) are reconstructed by the FBP from simulated sinograms and each image consists of 512 × 512 pixels.”),
the fourth image data is image data that is reconstructed after a low-count simulation process is applied to the projection data (Training [section 2, page 1371 right column paragraph 1] “noisy polychromatic projection is obtained, and then the image containing artifacts is reconstructed.”) and that includes a low-count artifact that is artificially generated (Training [section 2, page 1371 right column paragraph 3] “…metal-inserted CT images (i.e. fourth image data), where beam hardening and Poisson noise are simulated. To ensure that the trained CNN works for real cases…we simulate the metal artifacts based on clinical CT images.”).
However, Zhang fails to teach the low-count simulation process is a process of simulating a low-dose count by multiplying a photon count of the projection data by a coefficient.
Zeng teaches the low-count simulation process is a process of simulating a low-dose count by multiplying a photon count of the projection data by a coefficient (Methods and Material, page 2227 [left column, first paragraph] “The associative formula can be expressed as follows:
[Zeng, Eq. (1): statistical model of CT transmission data (media_image2.png)]
where [T] is the measured noisy transmission datum and [λ] is the mean number of photons passing through the patient, and the magnitude of [λ] is primarily determined by the mAs value”; page 2227 [right column, first paragraph and step 4] “Based on the theoretical statistical model of CT transmission data in (1), in this paper we propose a simple low-dose CT simulation strategy…Multiply the transmission data Tnd by the simulated low-dose scan incident flux I_LD,sim to produce the simulated low-dose transmission data T_LD,sim, i.e., T_LD,sim(s) = I_LD,sim(s)Tnd(s), wherein the factor I_LD,sim is determined by a relationship between the incident fluxes of low- and high-dose scans”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhang to include a low-count simulation process that simulates a low-dose count by multiplying a photon count of the projection data by a coefficient, as taught by Zeng. The motivation for doing so would have been to provide medical professionals the ability to study the effects of lower dose on image quality, as suggested by Zeng (see Zeng, page 2232 Discussion and Conclusion [right column, first paragraph]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Zeng with Zhang to obtain the invention specified in claim 1.
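The coefficient-based low-dose simulation attributed to Zeng can be sketched as follows. This sketch is illustrative only and is not part of the examined record; the flux values, array size, and variable names are hypothetical, and the final Poisson resampling step is one common way of re-introducing counting noise, assumed here rather than quoted from Zeng.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-dose scan: incident flux and measured transmission counts
# per detector bin (920 bins, mirroring the simulated geometry cited above).
I_hd = 2e7                               # high-dose (blank scan) incident flux
atten = rng.uniform(0.5, 3.0, size=920)  # hypothetical attenuation line integrals
counts_hd = rng.poisson(I_hd * np.exp(-atten)).astype(float)

# Simulate a low-dose scan from the high-dose data:
I_ld = 5e6                  # simulated low-dose incident flux (hypothetical)
coeff = I_ld / I_hd         # coefficient relating low- and high-dose fluxes
T_nd = counts_hd / I_hd     # normalized (noisy) transmission data
mean_ld = I_ld * T_nd       # i.e., the photon counts scaled by the coefficient
counts_ld = rng.poisson(mean_ld)  # re-introduce Poisson counting noise
```

The multiplication by `coeff` corresponds to the step the rejection maps to the claimed multiplying of a photon count of the projection data by a coefficient.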
Regarding claim 5, which incorporates claim 1, Zhang discloses wherein the fourth image data is image data that is obtained by adding a low-count artifact image that is generated in advance to image data that is reconstructed from the projection data (Training [section 2, page 1371 right column paragraph 1] “noisy polychromatic projection is obtained, and then the image containing artifacts is reconstructed.”).
Regarding claim 7, Zhang discloses a medical image processing method comprising: acquiring second image data in which a low-count artifact is reduced (Introduction [section 1, page 1370 right column last full paragraph] “…convolutional neural network (CNN) has been applied to medical imaging for low dose CT reconstruction and artifacts reduction”; Introduction [section 1, page 1371 left column paragraph 1] “the information from different correction methods is captured and the merits of these methods are fused, leading to a higher quality image (i.e. second image)”), by applying a trained machine learning model to first image data that is obtained by an X-ray CT scan (Experiments [section 4 page 1376 left column first full paragraph] “scanned on a Siemens SOMATOM Sensation 16 CT scanner with 120 kVp and 496 mAs using the helical scanning geometry. (i.e. first image)”); and
outputting image data based on the second image data (See figure 3 [page 1373 left column last full paragraph continued to right column first full paragraph] “Fig. 3 depicts the workflow of our CNN, which is comprised of an input layer, an output layer and L = 5 convolutional layers…nl convolution kernel with a fixed size of cl × cl . Cl (u) generates new feature maps based on the (l − 1)th layer’s output. For the last layer, feature maps are used to generate an image that is close to the target (i.e. image data).”)
[Figure 3 of Zhang: CNN workflow diagram (media_image1.png)]
, wherein
the machine learning model is trained by using training data that includes third image data (Training [section 2, page 1371 right column first full paragraph] “train a convolutional neural network for MAR. First, we generate metal-free (i.e. third image data), metal-inserted … CT images to create a database.”) and fourth image data (Training [section 2, page 1371 right column paragraph 3] “…metal-inserted CT images (i.e. fourth image data), where beam hardening and Poisson noise are simulated. To ensure that the trained CNN works for real cases…we simulate the metal artifacts based on clinical CT images.”), the third image data being reconstructed based on projection data that is obtained by an X-ray CT scan (page [1372 left column second to last paragraph] “The metal-free image (i.e. third image data) is reconstructed using filtered backprojection (FBP), and the image is assumed as reference and denoted as xref .”), the fourth image data being based on the projection data and including a generated low-count artifact (Experiments [section 4, page 1375 right column paragraph 1] “A 120 kVp x-ray source is simulated and each detector bin is expected to receive 2 × 107 photons in the case of blank scan [46]. There are 984 projection views over a rotation and 920 detector bins in a row. The distance between the x-ray source and the rotation center is 59.5 cm. The metal-free (i.e. third image data) and metal-inserted images (i.e. fourth image data) are reconstructed by the FBP from simulated sinograms and each image consists of 512 × 512 pixels.”),
the fourth image data is image data that is reconstructed after a low-count simulation process is applied to the projection data (Training [section 2, page 1371 right column paragraph 1] “noisy polychromatic projection is obtained, and then the image containing artifacts is reconstructed.”) and that includes a low-count artifact that is artificially generated (Training [section 2, page 1371 right column paragraph 3] “…metal-inserted CT images (i.e. fourth image data), where beam hardening and Poisson noise are simulated. To ensure that the trained CNN works for real cases…we simulate the metal artifacts based on clinical CT images.”).
However, Zhang fails to teach the low-count simulation process is a process of simulating a low-dose count by multiplying a photon count of the projection data by a coefficient.
Zeng teaches the low-count simulation process is a process of simulating a low-dose count by multiplying a photon count of the projection data by a coefficient (Methods and Material, page 2227 [left column, first paragraph] “The associative formula can be expressed as follows:
PNG
media_image2.png
50
390
media_image2.png
Greyscale
where [T] is the measured noisy transmission datum and [λ] is the mean number of photons passing through the patient, and the magnitude of [λ] is primarily determined by the mAs value”; page 2227 [right column, first paragraph and step 4] “Based on the theoretical statistical model of CT transmission data in (1), in this paper we propose a simple low-dose CT simulation strategy…Multiply the transmission data Tnd by the simulated low-dose scan incident flux I_LD,sim to produce the simulated low-dose transmission data T_LD,sim, i.e., T_LD,sim(s) = I_LD,sim(s)Tnd(s), wherein the factor I_LD,sim is determined by a relationship between the incident fluxes of low- and high-dose scans”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhang to include a low-count simulation process that simulates a low-dose count by multiplying a photon count of the projection data by a coefficient, as taught by Zeng. The motivation for doing so would have been to provide medical professionals the ability to study the effects of lower dose on image quality, as suggested by Zeng (see Zeng, page 2232 Discussion and Conclusion [right column, first paragraph]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Zeng with Zhang to obtain the invention specified in claim 7.
Regarding claim 8, Zhang discloses a model generation method for generating a machine learning model that acquires second image data in which a low count artifact is reduced (Introduction [section 1, page 1370 right column last full paragraph] “…convolutional neural network (CNN) has been applied to medical imaging for low dose CT reconstruction and artifacts reduction”; Introduction [section 1, page 1371 left column paragraph 1] “the information from different correction methods is captured and the merits of these methods are fused, leading to a higher quality image (i.e. second image)”), by applying a trained machine learning model to first image data that is obtained by an X-ray CT scan (Experiments [section 4 page 1376 left column first full paragraph] “scanned on a Siemens SOMATOM Sensation 16 CT scanner with 120 kVp and 496 mAs using the helical scanning geometry. (i.e. first image)”), the model generation method comprising:
generating the machine learning model by training a model that is not yet trained, by using training data that includes third image data and fourth image data (Introduction [section 1, page 1371 left column, paragraph 1] “Specifically, before the MAR, we build a MAR database to generate training data for the CNN.”; Introduction [section 1, page 1371 right column, first full paragraph] “There are two phases to train a convolutional neural network for MAR. First, we generate metal-free, metal-inserted and MAR corrected CT images to create a database. Then, a CNN is constructed and the training data is collected from the established database and used to train the CNN”), the third image data being reconstructed based on projection data that is obtained by an X-ray CT scan (page [1372 left column second to last paragraph] “The metal-free image (i.e. third image data) is reconstructed using filtered backprojection (FBP), and the image is assumed as reference and denoted as xref .”), the fourth image data being based on the projection data and including a generated low-count artifact (Experiments [section 4, page 1375 right column paragraph 1] “A 120 kVp x-ray source is simulated and each detector bin is expected to receive 2 × 107 photons in the case of blank scan [46]. There are 984 projection views over a rotation and 920 detector bins in a row. The distance between the x-ray source and the rotation center is 59.5 cm. The metal-free (i.e. third image data) and metal-inserted images (i.e. fourth image data) are reconstructed by the FBP from simulated sinograms and each image consists of 512 × 512 pixels.”),
the fourth image data is image data that is reconstructed after a low-count simulation process is applied to the projection data (Training [section 2, page 1371 right column paragraph 1] “noisy polychromatic projection is obtained, and then the image containing artifacts is reconstructed.”) and that includes a low-count artifact that is artificially generated (Training [section 2, page 1371 right column paragraph 3] “…metal-inserted CT images (i.e. fourth image data), where beam hardening and Poisson noise are simulated. To ensure that the trained CNN works for real cases…we simulate the metal artifacts based on clinical CT images.”).
However, Zhang fails to teach the low-count simulation process is a process of simulating a low-dose count by multiplying a photon count of the projection data by a coefficient.
Zeng teaches the low-count simulation process is a process of simulating a low-dose count by multiplying a photon count of the projection data by a coefficient (Methods and Material, page 2227 [left column, first paragraph] “The associative formula can be expressed as follows:
[Zeng, Eq. (1): statistical model of CT transmission data (media_image2.png)]
where [T] is the measured noisy transmission datum and [λ] is the mean number of photons passing through the patient, and the magnitude of [λ] is primarily determined by the mAs value”; page 2227 [right column, first paragraph and step 4] “Based on the theoretical statistical model of CT transmission data in (1), in this paper we propose a simple low-dose CT simulation strategy…Multiply the transmission data Tnd by the simulated low-dose scan incident flux I_LD,sim to produce the simulated low-dose transmission data T_LD,sim, i.e., T_LD,sim(s) = I_LD,sim(s)Tnd(s), wherein the factor I_LD,sim is determined by a relationship between the incident fluxes of low- and high-dose scans”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhang to include a low-count simulation process that simulates a low-dose count by multiplying a photon count of the projection data by a coefficient, as taught by Zeng. The motivation for doing so would have been to provide medical professionals the ability to study the effects of lower dose on image quality, as suggested by Zeng (see Zeng, page 2232 Discussion and Conclusion [right column, first paragraph]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Zeng with Zhang to obtain the invention specified in claim 8.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. ("Convolutional neural network based metal artifact reduction in x-ray computed tomography." IEEE transactions on medical imaging 37.6 (2018): 1370-1381.) (hereinafter, “Zhang”) in view of Zeng et al. ("A simple low-dose x-ray CT simulation from high-dose scan." IEEE transactions on nuclear science 62.5 (2015): 2226-2233.) (hereinafter, “Zeng”), and further in view of Bueno et al. (US 7,215,801 B2) (hereinafter, “Bueno”).
Regarding claim 3, which incorporates claim 1, Zhang discloses wherein the low-count simulation process includes a noise addition process (Training [section 2, page 1372 right column paragraph 1] “…metal-inserted CT images, where beam hardening and Poisson noise are simulated.”).
However, Zhang and Zeng fail to teach a zero clipping process with respect to a negative value of the projection data.
Bueno teaches a zero clipping process with respect to a negative value of the projection data (column 1 lines 45-48 “…saturation arithmetic to produce an image in integer format with any negative corrected values clipped to a value of zero.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhang in view of Zeng to include a zero clipping process with respect to a negative value of the projection data, as taught by Bueno. The motivation for doing so would have been to set a pixel saturation value that eliminates visual artifacts in the display, as suggested by Bueno (see Bueno, column 4 lines 20-30).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Bueno with Zhang and Zeng to obtain the invention specified in claim 3.
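The zero-clipping step attributed to Bueno can be sketched as follows. This is an illustrative aid only, not part of the examined record; the projection values are hypothetical.

```python
import numpy as np

# Hypothetical corrected projection data containing negative values
# (e.g., after an offset/correction step).
proj = np.array([120.0, -3.5, 40.2, -0.1, 88.0])

# Zero clipping: any negative corrected value is clipped to zero,
# as characterized from Bueno (column 1, lines 45-48).
clipped = np.clip(proj, 0.0, None)
```

Non-negative values pass through unchanged; only the negative entries are replaced by zero.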
Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. ("Convolutional neural network based metal artifact reduction in x-ray computed tomography." IEEE transactions on medical imaging 37.6 (2018): 1370-1381.) (hereinafter, “Zhang”) in view of Zeng et al. ("A simple low-dose x-ray CT simulation from high-dose scan." IEEE transactions on nuclear science 62.5 (2015): 2226-2233.) (hereinafter, “Zeng”), and further in view of Ziabari et al. (US 2022/0035961 A1) (hereinafter, “Ziabari”).
Regarding claim 4, which incorporates claim 1, Zhang discloses wherein the processing circuitry is further configured to acquire, by the machine learning model, the second image data in which the low count artifact is reduced (Introduction [section 1, page 1370 right column last full paragraph] “…convolutional neural network (CNN) has been applied to medical imaging for low dose CT reconstruction and artifacts reduction”).
However, Zhang and Zeng fail to teach wherein the processing circuitry acquires, by the machine learning model, the second image data in which noise is reduced.
Ziabari teaches wherein the processing circuitry acquires, by the machine learning model, the second image data in which noise is reduced (paragraph [0040] “trained model can be used to rapidly process new, non-simulated data and produce reconstructions by effectively suppressing artifacts the model was trained to reduce, such as detector noise”; paragraph [0050] “… occurs in the projection or sinogram domain, the training skips reconstruction and trains directly on the projection volumes to reduce artifacts, in this case beam hardening and detector noise artifacts. Removing beam hardening artifacts from the projection or sinogram helps to avoid reliance on an initial reconstruction for image domain data can result in a higher quality reconstruction.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhang in view of Zeng to include noise reduction as taught by Ziabari. The motivation for doing so would have been to reduce the amount of artifact and thereby increase the resolution of the imaging system, as suggested by Ziabari (see Ziabari, paragraph [0048]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ziabari with Zhang and Zeng to obtain the invention specified in claim 4.
Regarding claim 6, which incorporates claim 1, Zhang and Zeng fail to teach wherein the processing circuitry is further configured to acquire processed image data in which noise is reduced, by applying a machine learning model that is trained for at least reducing noise to the second image data, and output image data based on the processed image data.
Ziabari teaches the processing circuitry is further configured to acquire processed image data in which noise is reduced, by applying a machine learning model that is trained for at least reducing noise to the second image data (paragraph [0040] “trained model can be used to rapidly process new, non-simulated data and produce reconstructions by effectively suppressing artifacts the model was trained to reduce, such as detector noise”), and output image data based on the processed image data (paragraph [0042] “output of a conventional reconstruction algorithm or in the form of a complete reconstructed image with artifact reduction.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Zhang in view of Zeng to include acquiring image data in which noise is reduced by applying a machine learning model trained to reduce noise, as taught by Ziabari. The motivation for doing so would have been to reduce the amount of artifact and thereby increase the resolution of the imaging system, as suggested by Ziabari (see Ziabari, paragraph [0048]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ziabari with Zhang and Zeng to obtain the invention specified in claim 6.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Dong et al. (US 9,443,295 B2) discloses a method to reduce artifacts in reconstructed CT images by processing the original reconstructed image with total variation and artifact reduction techniques.
Hein et al. (US 2021/0012543 A1) discloses using deep learning networks to reduce noise and artifacts in reconstructed CT images.
Li et al. (US 2023/0061863 A1) discloses using a trained deep neural network to reduce artifacts in digital breast tomosynthesis (DBT) image reconstruction.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UROOJ FATIMA whose telephone number is (571)272-2096. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UROOJ FATIMA/Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676