DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3, 8, 11-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Aoyagi (US 20200380680) in view of Matsuura (US 20220327662).
Regarding claims 1 and 15, Aoyagi discloses a computer-implemented method for processing medical computed tomography (CT) data ([0028] – “The diagnosis support apparatus 100 is configured as a so-called workstation or a high-performance personal computer”, Claim 10 – “An X-ray CT apparatus comprising:…processing circuitry configured to”), comprising: [claim 1]
a system for processing medical computed tomography (CT) data ([0026] – “a medical image processing system”, [0027] – “the modality 510 is configured as, for example, an X-ray CT apparatus 511”), comprising: [claim 15]
a memory that stores a plurality of instructions ([0032] – “The memory 30 stores, in addition to the trained model, various programs to be executed by the processor included in the processing circuitry 20”); and [claim 15]
a processor that couples to the memory and is configured to execute the plurality of instructions ([0032] – “The memory 30 stores, in addition to the trained model, various programs to be executed by the processor included in the processing circuitry 20”) to: [claim 15]
retrieving projection data acquired by scanning a subject ([0098] – “The CT scanner 770 rotates an X-ray tube and an X-ray detector at high speed around the object so as to acquire projection data”);
processing the projection data to reconstruct a three-dimensional image (claim 10 – “generate a three-dimensional volume image from data acquired”, [0101] – “a 3D image reconstruction function 712 for scanogram images”);
generating a two-dimensional image based on the three-dimensional image (Claim 10 – “generate a two-dimensional virtual projection image from the three-dimensional volume image”);
identifying a disease or an uncertainty in an identification of the disease in the two-dimensional image (Claim 10 – “infer a disease by using the two-dimensional virtual projection image”);
[…] updated three-dimensional image, wherein at least one parameter […] is based on the disease or the uncertainty in the identification of the disease (claim 10 – “determine an imaging condition of the first imaging based on information on the disease having been inferred”, [0131] – “when the inferred disease is pulmonary nodule or cardiomegalia, as illustrated in FIG. 14, imaging conditions for the detailed examination are set for the lung region and the cardiac region that are important for diagnosing the inferred disease”, [0133] – “step ST505, reconstruction images are generated by the reconstruction method”, [0107] – “a 3D volume image that is reconstructed from the data acquired by scenography”); and
generating an updated two-dimensional image based on the updated three-dimensional image ([0066] – “the acquired new 3D volume image is projected to generate a second medical image (i.e., a new virtual projection image)…The new virtual projection image generated in the step ST202 is a 2D virtual projection image”; although the “new” image is not disclosed as the updated detailed examination image, one with ordinary skill in the art would find it obvious to generate an updated 2D virtual projection image because it would show more detail and thus depict the disease more precisely).
As cited above, Aoyagi teaches an updated image; however, this is a new image. Therefore, Aoyagi does not teach reprocessing the projection data or the three-dimensional image into an updated three-dimensional image, wherein at least one parameter of the reprocessing is based on the disease or the uncertainty in the identification of the disease.
However, Matsuura discloses reprocessing the projection data or the three-dimensional image into an updated three-dimensional image ([0086] – “The data processing function 445 inputs the first medical data to the noise-reduction super-resolution model”, [0088] – “if the first projection data is used as the first medical data, the data processing function 445 outputs the second projection data (high resolution and low noise) as the second medical data from the noise-reduction super-resolution mode”), wherein at least one parameter of the reprocessing is based on the disease or the uncertainty in the identification of the disease ([0086] and [0088], cited above; one with ordinary skill in the art would recognize that at least one parameter would change to create the low-noise image, and in view of the teachings of Aoyagi, one with ordinary skill in the art would find it obvious to use a parameter based on the region of the disease).
The disclosure of Matsuura is analogous art because it is in the field of producing an improved CT image.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system of Aoyagi to incorporate the reprocessing of image data of Matsuura to achieve the same results. One would have motivation to combine because it would provide a low-noise image without requiring the patient to be imaged a second time for a detailed image, thereby reducing the radiation dose.
Regarding claim 3, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 1.
Conversely, Aoyagi does not teach wherein reprocessing the projection data or the three-dimensional image is performed by a neural network.
However, Matsuura discloses wherein reprocessing the projection data or the three-dimensional image is performed by a neural network ([0065] – “a learned model generating second medical data having lower noise than that of first medical data and having a super resolution compared with the first medical data on the basis of the first medical data… The learned model is a model achieving lower noise and higher resolution on input medical data and is generated by, for example, learning of a deep convolution neural network (hereinafter called a DCNN)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Aoyagi to incorporate the reprocessing of image data using a neural network of Matsuura to achieve the same results. One would have motivation to combine because it “can perform superior image quality and processing speed compared to conventional methods” (Matsuura [0009]).
Regarding claims 8 and 18, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claims 1 and 15.
Conversely, Aoyagi does not teach further comprising denoising the three-dimensional image and/or the updated three-dimensional image by a trained convolutional neural network (CNN).
However, Matsuura discloses further comprising denoising the three-dimensional image and/or the updated three-dimensional image by a trained convolutional neural network (CNN) ([0065] – “a learned model generating second medical data having lower noise than that of first medical data and having a super resolution compared with the first medical data on the basis of the first medical data… The learned model is a model achieving lower noise and higher resolution on input medical data and is generated by, for example, learning of a deep convolution neural network (hereinafter called a DCNN)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method and system of Aoyagi to incorporate the denoising of image data using a CNN of Matsuura to achieve the same results. One would have motivation to combine because it “can perform superior image quality and processing speed compared to conventional methods” (Matsuura [0009]).
Regarding claim 11, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 1.
Aoyagi further discloses further comprising identifying at least one physical element in the three-dimensional image and/or the updated three-dimensional image, and removing or masking out the at least one physical element prior to generating an output image ([0080] – “among the three lung nodules, the lung nodule in the upper part of the left lung is hidden by the bone in the training virtual projection image”, [0084] – “bone removal processing is performed on the new 3D volume image to generate a bone-removed 3D volume image”).
Regarding claim 12, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claims 1 and 11.
Aoyagi further discloses wherein the at least one physical element is a plurality of ribs or a heart ([0084] – “bone removal processing is performed on the new 3D volume image to generate a bone-removed 3D volume image”, as shown in the lower right hand side of Figs. 7 and 8 the ribs are removed).
Regarding claim 13, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claims 1 and 11.
Aoyagi further discloses wherein removing or masking out the at least one physical element is based on the disease or the uncertainty in the identification of the disease in the two-dimensional image ([0080] – “among the three lung nodules, the lung nodule in the upper part of the left lung is hidden by the bone in the training virtual projection image”, [0084] – “bone removal processing is performed on the new 3D volume image to generate a bone-removed 3D volume image”; as shown in the lower right hand side of Figs. 7 and 8, the ribs are removed from the 2D projection image).
Regarding claim 14, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 1.
Aoyagi further discloses wherein identifying the disease includes a use of a trained disease classification model ([0039] – “acquire disease information 228 inferred by the trained model 226”).
Claims 2, 9-10, 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Aoyagi (US 20200380680) and Matsuura (US 20220327662) as applied to claims 1 and 15 above, and further in view of Tolkowsky (US 20220110698).
Regarding claim 2, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 1.
Conversely, Aoyagi does not teach wherein generating the two-dimensional image is performed by a neural network.
However, Tolkowsky discloses wherein generating the two-dimensional image is performed by a neural network ([0659] – “In the case of 3D CT images, the derived 2D projections are known as Digitally Reconstructed Radiographs (DRRs)”, [0708] – “the steps of generating a plurality of DRRs from a 3D CT image, and identifying respective first and second DRRs that match the 2D x-ray images of the vertebra are aided by deep-learning algorithms”).
The disclosure of Tolkowsky is analogous art because it is in the field of producing 2D projection images from a 3D volume image.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Aoyagi to incorporate the generation of 2D images using a neural network of Tolkowsky to achieve the same results. One would have motivation to combine because it would provide an automatic selection of the best 2D images.
Regarding claim 9, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 1.
As cited above, Aoyagi discloses a two-dimensional projection image that is used to determine a disease before performing a detailed examination scan; therefore, the two-dimensional image would use a first value for the at least one parameter. Conversely, Aoyagi does not explicitly teach wherein generating the two-dimensional image comprises using a digitally reconstructed radiograph (DRR) network […].
However, Tolkowsky discloses wherein generating the two-dimensional image comprises using a digitally reconstructed radiograph (DRR) network ([0659] – “In the case of 3D CT images, the derived 2D projections are known as Digitally Reconstructed Radiographs (DRRs)”) […].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Aoyagi to incorporate the generation of 2D images using a neural network of Tolkowsky to achieve the same results. One would have motivation to combine because it can provide a 2D image at any simulated x-ray camera position.
Regarding claim 10, Aoyagi, Matsuura, and Tolkowsky disclose all the elements of the claimed invention as cited in claims 1 and 9.
As cited above, Aoyagi discloses acquiring a new 3D volume image that is projected to generate a new virtual projection image; although the “new” image is not disclosed as the updated detailed examination image, one with ordinary skill in the art would find it obvious to generate an updated 2D virtual projection image from the 3D volume image of the detailed examination because it would show the disease in more detail and more precisely. The detailed examination uses a second value based on the disease or the uncertainty in the identification of the disease. Therefore, because the updated two-dimensional image is a more detailed examination, it would use a second value for the at least one parameter. Conversely, Aoyagi does not explicitly teach wherein generating the […] two-dimensional image comprises using the DRR network […].
However, Tolkowsky discloses wherein generating the […] two-dimensional image comprises using the DRR network ([0659] – “In the case of 3D CT images, the derived 2D projections are known as Digitally Reconstructed Radiographs (DRRs)”) […].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Aoyagi to incorporate the generation of 2D images using a neural network of Tolkowsky to achieve the same results. One would have motivation to combine because it can provide a 2D image at any simulated x-ray camera position.
Regarding claim 16, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 15.
Conversely, Aoyagi does not teach wherein generating the two-dimensional image and reprocessing the projection data or the three-dimensional image are performed by a neural network.
However, Matsuura discloses wherein […] reprocessing the projection data or the three-dimensional image is performed by a neural network ([0065] – “a learned model generating second medical data having lower noise than that of first medical data and having a super resolution compared with the first medical data on the basis of the first medical data… The learned model is a model achieving lower noise and higher resolution on input medical data and is generated by, for example, learning of a deep convolution neural network (hereinafter called a DCNN)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Aoyagi to incorporate the reprocessing of image data using a neural network of Matsuura to achieve the same results. One would have motivation to combine because it “can perform superior image quality and processing speed compared to conventional methods” (Matsuura [0009]).
Conversely, Aoyagi and Matsuura do not teach wherein generating the two-dimensional image […] is performed by a neural network.
However, Tolkowsky discloses wherein generating the two-dimensional image […] is performed by a neural network ([0659] – “In the case of 3D CT images, the derived 2D projections are known as Digitally Reconstructed Radiographs (DRRs)”, [0708] – “the steps of generating a plurality of DRRs from a 3D CT image, and identifying respective first and second DRRs that match the 2D x-ray images of the vertebra are aided by deep-learning algorithms”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Aoyagi to incorporate the generation of 2D images using a neural network of Tolkowsky to achieve the same results. One would have motivation to combine because it would provide an automatic selection of the best 2D images.
Regarding claim 19, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 15.
As cited above, Aoyagi discloses a two-dimensional projection image that is used to determine a disease before performing a detailed examination scan; therefore, the two-dimensional image would use a first value for the at least one parameter. Additionally, as cited above, Aoyagi discloses acquiring a new 3D volume image that is projected to generate a new virtual projection image; although the “new” image is not disclosed as the updated detailed examination image, one with ordinary skill in the art would find it obvious to generate an updated 2D virtual projection image from the 3D volume image of the detailed examination because it would show the disease in more detail and more precisely. The detailed examination uses a second value based on the disease or the uncertainty in the identification of the disease. Therefore, because the updated two-dimensional image is a more detailed examination, it would use a second value for the at least one parameter. Conversely, Aoyagi does not explicitly teach wherein the two-dimensional images are generated by a digitally reconstructed radiograph (DRR) network.
However, Tolkowsky discloses wherein the two-dimensional images are generated by a digitally reconstructed radiograph (DRR) network ([0659] – “In the case of 3D CT images, the derived 2D projections are known as Digitally Reconstructed Radiographs (DRRs)”) […].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Aoyagi to incorporate the generation of 2D images using a neural network of Tolkowsky to achieve the same results. One would have motivation to combine because it can provide a 2D image at any simulated x-ray camera position.
Claims 4, 5, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Aoyagi (US 20200380680) and Matsuura (US 20220327662) as applied to claims 1 and 15 above, and further in view of Nett (US 20210279847).
Regarding claim 4, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 1.
Conversely, Aoyagi does not teach further comprising denoising the three-dimensional image using a first value for the at least one parameter.
However Nett discloses further comprising denoising the three-dimensional image using a first value for the at least one parameter ([0073] – “Denoising 504 is applied to the first image volume 501 to produce a first denoised image volume Vol1.sub.100”).
The disclosure of Nett is analogous art because it is in the field of computed tomography imaging.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Aoyagi to incorporate the denoising of the three-dimensional image of Nett to achieve the same results. One would have motivation to combine because it provides a clearer image which would be beneficial in identifying a disease.
Regarding claim 5, Aoyagi, Matsuura, and Nett disclose all the elements of the claimed invention as cited in claims 1 and 4.
As cited above, Aoyagi teaches different imaging conditions based on the identification of the disease for the updated image. Conversely, Aoyagi does not teach further comprising denoising the updated three-dimensional image using a second value for the at least one parameter, the second value based on the disease or the uncertainty in the identification of the disease.
However Nett discloses further comprising denoising the updated three-dimensional image using a second value for the at least one parameter, the second value based on the disease or the uncertainty in the identification of the disease ([0073] – “Denoising 506 is similarly applied to the second image volume 502 to produce a second denoised image volume”, because Aoyagi teaches different imaging conditions based on the identification of the disease for the updated image one with ordinary skill in the art would find it obvious for at least one parameter to be different in the updated image when the teachings of Nett are combined with Aoyagi).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Aoyagi to incorporate the denoising of the three-dimensional image of Nett to achieve the same results. One would have motivation to combine because it provides a clearer image which would be beneficial in viewing a disease within an image.
Regarding claim 17, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 15.
As cited above, Aoyagi teaches different imaging conditions based on the identification of the disease for the updated image. Conversely, Aoyagi does not teach wherein the three-dimensional image is denoised using a first value for the at least one parameter, and wherein the updated three-dimensional image is denoised using a second value for the at least one parameter, the second value based on the disease or the uncertainty in the identification of the disease.
However Nett discloses wherein the three-dimensional image is denoised using a first value for the at least one parameter ([0073] – “Denoising 504 is applied to the first image volume 501 to produce a first denoised image volume Vol1.sub.100”), and wherein the updated three-dimensional image is denoised using a second value for the at least one parameter, the second value based on the disease or the uncertainty in the identification of the disease ([0073] – “Denoising 506 is similarly applied to the second image volume 502 to produce a second denoised image volume”, because Aoyagi teaches different imaging conditions based on the identification of the disease for the updated image one with ordinary skill in the art would find it obvious for at least one parameter to be different in the updated image when the teachings of Nett are combined with Aoyagi).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Aoyagi to incorporate the denoising of the three-dimensional image of Nett to achieve the same results. One would have motivation to combine because it provides a clearer image which would be beneficial in viewing a disease within an image.
Claims 6, 7, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Aoyagi (US 20200380680) and Matsuura (US 20220327662) as applied to claims 1 and 15 above, and further in view of Hirakawa (US 20200058098).
Regarding claim 6, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 1.
Conversely, Aoyagi does not teach further comprising applying, to the three-dimensional image, an artificial intelligence (AI) based super-resolution using a first value for the at least one parameter such that the three-dimensional image results in a high resolution image.
However, Hirakawa discloses further comprising applying, to the three-dimensional image, an artificial intelligence (AI) based super-resolution using a first value for the at least one parameter such that the three-dimensional image results in a high resolution image ([0050] – “The converted image acquisition unit 22 performs super-resolution processing for the first three-dimensional image OG1 and the second three-dimensional image OG2”, [0051] – “A learned model M is a neural network which has been subjected to deep learning to generate a converted image obtained by performing super-resolution processing for a three-dimensional image from the three-dimensional image”).
The disclosure of Hirakawa is analogous art because it is in the field of computed tomography imaging.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Aoyagi to incorporate the super-resolution of the three-dimensional image of Hirakawa to achieve the same results. One would have motivation to combine because it provides a better quality image which would be beneficial in identifying a disease.
Regarding claim 7, Aoyagi, Matsuura, and Hirakawa disclose all the elements of the claimed invention as cited in claims 1 and 6.
As cited above, Aoyagi teaches different imaging conditions based on the identification of the disease for the updated image. Conversely, Aoyagi does not teach further comprising applying, to the updated three-dimensional image, the AI based super-resolution using a second value for the at least one parameter such that the updated three-dimensional image results in another high resolution image, wherein the second value is based on the disease or the uncertainty in the identification of the disease.
However, Hirakawa discloses further comprising applying, to the updated three-dimensional image, the AI based super-resolution using a second value for the at least one parameter such that the updated three-dimensional image results in another high resolution image, wherein the second value is based on the disease or the uncertainty in the identification of the disease ([0050] – “The converted image acquisition unit 22 performs super-resolution processing for the first three-dimensional image OG1 and the second three-dimensional image OG2 to acquire a first converted image TG1 and a second converted image TG2”; because Aoyagi teaches different imaging conditions based on the identification of the disease for the updated image, one with ordinary skill in the art would find it obvious for at least one parameter to be different in the updated image when the teachings of Hirakawa are combined with Aoyagi).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Aoyagi to incorporate the super-resolution of the three-dimensional image of Hirakawa to achieve the same results. One would have motivation to combine because it provides a better quality image which would be beneficial in identifying a disease.
Regarding claim 20, Aoyagi and Matsuura disclose all the elements of the claimed invention as cited in claim 15.
As cited above, Aoyagi teaches different imaging conditions based on the identification of the disease for the updated image. Conversely, Aoyagi does not teach wherein an artificial intelligence (AI) based super-resolution using a first value for the at least one parameter is applied to the three-dimensional image such that the three-dimensional image results in a high resolution image, and wherein the AI based super-resolution using a second value for the at least one parameter is applied to the updated three-dimensional image such that the updated three-dimensional image results in another high resolution image, wherein the second value is based on the disease or the uncertainty in the identification of the disease.
However, Hirakawa discloses wherein an artificial intelligence (AI) based super-resolution using a first value for the at least one parameter is applied to the three-dimensional image such that the three-dimensional image results in a high resolution image ([0050] – “The converted image acquisition unit 22 performs super-resolution processing for the first three-dimensional image OG1 and the second three-dimensional image OG2”, [0051] – “A learned model M is a neural network which has been subjected to deep learning to generate a converted image obtained by performing super-resolution processing for a three-dimensional image from the three-dimensional image”), and wherein the AI based super-resolution using a second value for the at least one parameter is applied to the updated three-dimensional image such that the updated three-dimensional image results in another high resolution image, wherein the second value is based on the disease or the uncertainty in the identification of the disease ([0050] – “The converted image acquisition unit 22 performs super-resolution processing for the first three-dimensional image OG1 and the second three-dimensional image OG2 to acquire a first converted image TG1 and a second converted image TG2”; because Aoyagi teaches different imaging conditions based on the identification of the disease for the updated image, one with ordinary skill in the art would find it obvious for at least one parameter to be different in the updated image when the teachings of Hirakawa are combined with Aoyagi).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Aoyagi to incorporate the super-resolution of the three-dimensional image of Hirakawa to achieve the same results. One would have motivation to combine because it provides a better quality image which would be beneficial in identifying a disease.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RENEE C LANGHALS whose telephone number is (571)272-6258. The examiner can normally be reached Mon.-Thurs. and on alternate Fridays, 8:30-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Koharski can be reached at 571-272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.C.L./ Examiner, Art Unit 3797
/CHRISTOPHER KOHARSKI/ Supervisory Patent Examiner, Art Unit 3797