Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office action is in response to the applicant's amendment entered 12/22/2025. Claims 1, 9, and 10 are amended, claim 11 is added, and claim 4 was previously canceled; claims 1-3 and 5-11 remain pending in this application.
Response to Arguments
Applicant's arguments filed 12/22/2025 regarding claims 1-3, 5-11 have been fully considered but they are not persuasive.
Applicant contends that Tsukuda does not teach the amended limitation requiring that the predetermined ratio is “set in accordance with a purpose of imaging.” Examiner respectfully disagrees.
In response to applicant’s argument, Examiner notes that Tsukuda teaches, as claimed in claims 1, 9, and 10: a predetermined ratio (Tsukuda Fig. 2 S209, Fig. 6, [0073]-[0074] “…The mixing ratio between the image 509 and the combined image 604 can also be changed in the combination to obtain the combined image 605, and a specific frequency band can thus be enhanced….”; see also the weighting coefficients) that is set in accordance with a purpose of imaging (Tsukuda Fig. 2 S209, Fig. 6, [0073]-[0074] “..a specific frequency band can thus be enhanced….”; and ¶72 “a plurality of frequency bands are used, enables a target object to be stably extracted”; thus the purpose of imaging is to stably extract a target object). Further, the limitation “set in accordance with a purpose of imaging” merely describes an intended environment.
Thus, the rejection is maintained, though it has been updated to address the amended claim language.
Priority
Priority is claimed to foreign application JP2021-157099, filed 09/27/2021. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. The claims are accorded a priority date of 09/27/2021.
Claim Rejections - 35 USC § 102 / 103
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 6-7 and 9-10 are rejected under 35 U.S.C. 102(a)(2) as anticipated by or, in the alternative, under 35 U.S.C. 103 as obvious over US 20220383466 A1 Tsukuda; Akira et al.
Consider Claims 1, 9 and 10
In the language of claim 1:
Tsukuda teaches An image processing device (Tsukuda Fig. 1 111 computer/image processing apparatus [0033], Fig. 2 method) comprising:
at least one processor (Tsukuda Fig. 1, [0033], [0041]: “one or more processors of the computer 111”),
wherein the processor derives a first composition image representing a first composition ( Fig. 2 step S208 [0048] “..The frequency decomposition unit 116 performs frequency decomposition on the bone image and thus decomposes the bone image into an image of the target object (a band limitation image in which the target object is enhanced) ..”) included in a subject including three or more compositions from at least one radiation image acquired by imaging the subject ([0050] “..It is assumed that in FIGS. 3A and 3B, the image contains three types of materials, namely a guide wire (which may alternatively be a stent, a coil etc.) serving as a target object, a bone, and a soft tissue ..” [0104] “..according to the first to fifth embodiments, when the human body contains three or more materials, such as a contrast agent, a stent, and a guide wire, in addition to bones and soft tissues when spectral imaging of two-dimensional X-ray images is performed, for example, decomposition into these three or more materials can be enabled with a smaller number of times of imaging.”),
derives at least one removal radiation image obtained by removing the first composition from the at least one radiation image by using the first composition image (Fig. 2, step S208 [0048] “..The frequency decomposition unit 116 performs frequency decomposition on the bone image and thus decomposes the bone image into.. a bone image (a band limitation image in which the bone is enhanced..”; [0052] “Next, the frequency decomposition unit 116 performs frequency decomposition on the soft tissue removal image 303 to divide the soft tissue removal image 303 into a plurality of frequency components, and generates a plurality of band limitation images (step S208). According to an example, here, the highest-frequency band limitation image is an image that mainly contains noise components (a noise image 305), and the next highest-frequency band limitation image is an image that most strongly contains guide wire components (a wire image 306). The band limitation image of low frequency components is an image that strongly contains the bone components (a bone image 307)”);
derives a plurality of other composition images representing a plurality of other compositions different from the first composition included in the subject by using the at least one removal radiation image (Tsukuda [0093] “the image obtaining unit 112 obtains a high tube voltage image 1005 and a low tube voltage image 1006 that contain the bone, the soft tissue, the stent, and the contrast agent (step S903). ..a soft tissue image 1008 corresponding to the soft tissue image and a soft tissue removal image 1007 corresponding to the bone and the target objects (the contrast agent and the stent) are generated. [0094] Of the bone, the contrast agent, and the stent, the bone moves only slightly and therefore appears at the same position in the soft tissue removal image 1004 and the soft tissue removal image 1007. Therefore, the planar distribution obtaining unit 115 obtains a subtraction image 1009 from which the bone has been removed, by subtracting the soft tissue removal image 1004 from the soft tissue removal image 1007 (step S905)..”; see also Fig. 2 steps S207-S208 [0048]: “..the planar distribution obtaining unit 115 calculates and generates a decomposition image representing the planar distribution related to the materials from the high tube voltage image and the low tube voltage image. The planar distribution obtaining unit 115 obtains two decomposition images, which are a bone image and a soft tissue image, as the planar distributions related to the materials…” [0048] “..The frequency decomposition unit 116 performs frequency decomposition on the bone image and thus decomposes the bone image into an image of the target object (a band limitation image in which the target object is enhanced) and a bone image (a band limitation image in which the bone is enhanced..”), and
derives a composite image obtained by performing weighting addition between the first composition image and the plurality of other composition images (Tsukuda Fig. 2 S209, Fig. 6, [0073]-[0074] “..plurality of band limitation images are combined instead of extracting one of a plurality of band limitation images in step S209. FIG. 6 is a flowchart illustrating combination processing in step S209. Images 507, 508, and 509 are band limitation images obtained by performing frequency decomposition described with reference to FIG. 5. The output image generation unit 117 multiplies the image 508 by a weighting coefficient g₃ through weighting processing 603, …multiplies the image 509 by a weighting coefficient g₂ through weighting processing to obtain a weighted image. … to obtain a combined image 604. As a result, the mixing ratio between the image 509 and the image 508 can be changed in the combination to obtain the combined image 604, and a specific frequency band can then be enhanced. Similarly, the output image generation unit 117 performs addition processing to add a weighted image obtained by multiplying the image 508 by a weighting coefficient g₁ through weighting processing to an enlarged image obtained by enlarging the combined image 604, and thus obtains a combined image 605..”) at a predetermined ratio (Tsukuda Fig. 2 S209, Fig. 6, [0073]-[0074] “…The mixing ratio between the image 509 and the combined image 604 can also be changed in the combination to obtain the combined image 605, and a specific frequency band can thus be enhanced….”; see also the weighting coefficients) that is set in accordance with a purpose of imaging (Tsukuda Fig. 2 S209, Fig. 6, [0073]-[0074] “..a specific frequency band can thus be enhanced….”; and ¶72 “a plurality of frequency bands are used, enables a target object to be stably extracted”; thus the purpose of imaging is to stably extract a target object),
derives a first removal radiation image and a second removal radiation image obtained by removing the first composition from a first radiation image and a second radiation image by using the first composition image (Tsukuda Fig. 2, step S208 [0048] “..The frequency decomposition unit 116 performs frequency decomposition on the bone image and thus decomposes the bone image into.. a bone image (a band limitation image in which the bone is enhanced..”; [0052] “Next, the frequency decomposition unit 116 performs frequency decomposition on the soft tissue removal image 303 to divide the soft tissue removal image 303 into a plurality of frequency components, and generates a plurality of band limitation images (step S208). According to an example, here, the highest-frequency band limitation image is an image that mainly contains noise components (a noise image 305), and the next highest-frequency band limitation image is an image that most strongly contains guide wire components (a wire image 306). The band limitation image of low frequency components is an image that strongly contains the bone components (a bone image 307)”), and derives the plurality of other composition images by performing weighting subtraction on the first removal radiation image and the second removal radiation image (Tsukuda [0093] “the image obtaining unit 112 obtains a high tube voltage image 1005 and a low tube voltage image 1006 that contain the bone, the soft tissue, the stent, and the contrast agent (step S903). ..a soft tissue image 1008 corresponding to the soft tissue image and a soft tissue removal image 1007 corresponding to the bone and the target objects (the contrast agent and the stent) are generated. [0094] Of the bone, the contrast agent, and the stent, the bone moves only slightly and therefore appears at the same position in the soft tissue removal image 1004 and the soft tissue removal image 1007.
Therefore, the planar distribution obtaining unit 115 obtains a subtraction image 1009 from which the bone has been removed, by subtracting the soft tissue removal image 1004 from the soft tissue removal image 1007 (step S905)..”; see also Fig. 2 steps S207-S208 [0048]: “..the planar distribution obtaining unit 115 calculates and generates a decomposition image representing the planar distribution related to the materials from the high tube voltage image and the low tube voltage image. The planar distribution obtaining unit 115 obtains two decomposition images, which are a bone image and a soft tissue image, as the planar distributions related to the materials…” [0048] “..The frequency decomposition unit 116 performs frequency decomposition on the bone image and thus decomposes the bone image into an image of the target object (a band limitation image in which the target object is enhanced) and a bone image (a band limitation image in which the bone is enhanced..”).
and wherein the first composition is an artificial object (Tsukuda [0050] “..It is assumed that in FIGS. 3A and 3B, the image contains … a guide wire (which may alternatively be a stent, a coil etc.)..”).
Under the same analysis and on the same grounds cited for claim 1, Tsukuda teaches the image processing method of claim 9 and the non-transitory computer-readable storage medium storing an image processing program of claim 10.
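For illustration only (not part of the record), the weighting addition at a predetermined ratio mapped above (Tsukuda Fig. 6, step S209) can be sketched as follows. The arrays and coefficient values are hypothetical stand-ins for Tsukuda's band limitation images and weighting coefficients g₁-g₃, not data from the reference:

```python
import numpy as np

def weighted_combination(images, weights):
    """Combine composition images by weighting addition at a
    predetermined ratio (illustrative sketch, not Tsukuda's code)."""
    out = np.zeros_like(images[0])
    for img, w in zip(images, weights):
        out += w * img  # each composition contributes at its set ratio
    return out

# Hypothetical 2x2 composition images: target object, bone, soft tissue.
wire = np.array([[1.0, 0.0], [0.0, 0.0]])
bone = np.array([[0.0, 2.0], [0.0, 0.0]])
soft = np.array([[0.0, 0.0], [4.0, 0.0]])

# Ratio chosen per the purpose of imaging, e.g. enhancing the target object.
composite = weighted_combination([wire, bone, soft], [0.8, 0.1, 0.1])
```

Changing the weights changes which composition is enhanced in the composite, which is the sense in which the predetermined ratio reflects the purpose of imaging.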
Consider Claim 2
Tsukuda teaches The image processing device according to claim 1, wherein the processor acquires a first radiation image and a second radiation image acquired by imaging the subject with radiation having different energy distributions, and derives the first composition image (Fig. 1, Fig. 2, step S207, [0048] “ the planar distribution obtaining unit 115 generates a decomposition image representing the planar distribution related to materials through material decomposition or material identification from two or more X-ray images obtained by imaging an object that contains the target object by means of radiation of different levels of energy.”) by performing weighting subtraction on the first radiation image and the second radiation image ([0036]-[0037] “..The planar distribution obtaining unit 115 obtains a decomposition image representing the planar distribution by means of energy subtraction..”).
Consider Claim 6
Tsukuda teaches The image processing device according to claim 1, wherein the processor is able to change the predetermined ratio (Tsukuda Fig. 2 S209, Fig. 6, [0073]-[0074] “..The mixing ratio between the image 509 and the combined image 604 can also be changed in the combination to obtain the combined image 605, and a specific frequency band can thus be enhanced….”; see also the weighting coefficients).
Consider Claim 7
Tsukuda teaches The image processing device according to claim 1, wherein the other compositions are a bone part and a soft part (Tsukuda [0050] “..It is assumed that in FIGS. 3A and 3B, the image contains three types of materials, namely a guide wire (which may alternatively be a stent, a coil etc.) serving as a target object, a bone, and a soft tissue ..”).
Claim(s) 3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over US 20220383466 A1 Tsukuda; Akira et al. in view of US 20230097849 A1 TAKAHASHI; Wataru et al.
Consider Claim 3
Tsukuda teaches The image processing device according to claim 1, wherein the processor acquires a first radiation image and a second radiation image acquired by imaging the subject with radiation having different energy distributions (Fig. 1, Fig. 2, step S207, [0048] “ the planar distribution obtaining unit 115 generates a decomposition image representing the planar distribution related to materials through material decomposition or material identification from two or more X-ray images obtained by imaging an object that contains the target object by means of radiation of different levels of energy.”), and derives the first composition image from the first radiation image or the second radiation image by using a derivation model to derive the first composition image from a radiation image (Fig. 1, Fig. 2, step S207 [0036]-[0037] “..The planar distribution obtaining unit 115 obtains a decomposition image representing the planar distribution by means of energy subtraction..”).
Tsukuda does not teach using a derivation model that has been subjected to machine learning to derive the first composition image from a radiation image.
Examiner notes the claim limitation “derivation model that has been subjected to machine learning” broadly describes the intended environment and does not limit the derivation model. Nonetheless, TAKAHASHI teaches using a derivation model that has been subjected to machine learning to derive the first composition image from a radiation image (TAKAHASHI Fig. 4, Fig. 5, 7, [0066]-[0068] “…The machine learning is performed for each image element of the plurality of image elements 50 to be extracted. That is, the training data 66 is prepared for each image element 50 to be extracted… As shown in FIG. 4, the plurality of image elements 50 include a first element 51, which is the biological tissue, and a second element 52, which is a non-biological tissue. In addition, the plurality of image elements 50 include at least a plurality of image elements of a bone 53, a blood vessel 54, a device 55 introduced into the body, clothing 56, a noise 57, and a scattered ray component 58 of the X-rays. Among these, the bone 53 and the blood vessel 54 correspond to the first element 51. The first element 51 may include the biological tissue other than the bone 53 and the blood vessel 54. Among these, the device 55 introduced into the body, the clothing 56, the noise 57, and the scattered ray component 58 of the X-rays correspond to the second element 52. … In the example shown in FIG. 5, one trained model 40 is configured to separately extract the plurality of image elements 50…”).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the invention of Tsukuda to include the noted teachings of TAKAHASHI in order to improve the visibility of medical images in various usage scenes, even when a plurality of image elements are present (TAKAHASHI [0005]).
Consider Claim 5
Tsukuda teaches The image processing device according to claim 1,
wherein the processor derives the first composition image from one radiation image by using a first derivation model to derive the first composition image from the radiation image (Tsukuda Fig. 2 step S208 [0048] “..The frequency decomposition unit 116 performs frequency decomposition on the bone image and thus decomposes the bone image into an image of the target object (a band limitation image in which the target object is enhanced) ..”),
derives at least one removal radiation image obtained by removing the first composition from the at least one radiation image by using the first composition image (Tsukuda Fig. 2, step S208 [0048] “..The frequency decomposition unit 116 performs frequency decomposition on the bone image and thus decomposes the bone image into.. a bone image (a band limitation image in which the bone is enhanced..”; [0052] “Next, the frequency decomposition unit 116 performs frequency decomposition on the soft tissue removal image 303 to divide the soft tissue removal image 303 into a plurality of frequency components, and generates a plurality of band limitation images (step S208). According to an example, here, the highest-frequency band limitation image is an image that mainly contains noise components (a noise image 305), and the next highest-frequency band limitation image is an image that most strongly contains guide wire components (a wire image 306). The band limitation image of low frequency components is an image that strongly contains the bone components (a bone image 307)”), and
derives the plurality of other composition images from one removal radiation image by using a second derivation model to derive the plurality of other composition images from the removal radiation image (Tsukuda [0093] “the image obtaining unit 112 obtains a high tube voltage image 1005 and a low tube voltage image 1006 that contain the bone, the soft tissue, the stent, and the contrast agent (step S903). ..a soft tissue image 1008 corresponding to the soft tissue image and a soft tissue removal image 1007 corresponding to the bone and the target objects (the contrast agent and the stent) are generated. [0094] Of the bone, the contrast agent, and the stent, the bone moves only slightly and therefore appears at the same position in the soft tissue removal image 1004 and the soft tissue removal image 1007. Therefore, the planar distribution obtaining unit 115 obtains a subtraction image 1009 from which the bone has been removed, by subtracting the soft tissue removal image 1004 from the soft tissue removal image 1007 (step S905)..”; see also Fig. 2 steps S207-S208 [0048]: “..the planar distribution obtaining unit 115 calculates and generates a decomposition image representing the planar distribution related to the materials from the high tube voltage image and the low tube voltage image. The planar distribution obtaining unit 115 obtains two decomposition images, which are a bone image and a soft tissue image, as the planar distributions related to the materials…” [0048] “..The frequency decomposition unit 116 performs frequency decomposition on the bone image and thus decomposes the bone image into an image of the target object (a band limitation image in which the target object is enhanced) and a bone image (a band limitation image in which the bone is enhanced..”).
Tsukuda does not teach using a first derivation model that has been subjected to machine learning to derive the first composition image; and using a second derivation model that has been subjected to machine learning to derive the plurality of other composition images from the removal radiation image.
Examiner notes the claim limitation “derivation model that has been subjected to machine learning” broadly describes the intended environment and does not limit the derivation model.
Nonetheless, TAKAHASHI teaches using a first derivation model that has been subjected to machine learning to derive the first composition image; and using a second derivation model that has been subjected to machine learning to derive the plurality of other composition images from the removal radiation image (TAKAHASHI Fig. 4, Fig. 5, 7, [0066]-[0068] “…The machine learning is performed for each image element of the plurality of image elements 50 to be extracted. That is, the training data 66 is prepared for each image element 50 to be extracted… As shown in FIG. 4, the plurality of image elements 50 include a first element 51, which is the biological tissue, and a second element 52, which is a non-biological tissue. In addition, the plurality of image elements 50 include at least a plurality of image elements of a bone 53, a blood vessel 54, a device 55 introduced into the body, clothing 56, a noise 57, and a scattered ray component 58 of the X-rays. Among these, the bone 53 and the blood vessel 54 correspond to the first element 51. The first element 51 may include the biological tissue other than the bone 53 and the blood vessel 54. Among these, the device 55 introduced into the body, the clothing 56, the noise 57, and the scattered ray component 58 of the X-rays correspond to the second element 52. … In the example shown in FIG. 5, one trained model 40 is configured to separately extract the plurality of image elements 50…”; [0053], Fig. 29, [0192]: “..The extraction processing unit 20 extracts the image element 50 using one or a plurality of trained models 40…”).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the invention of Tsukuda to include the noted teachings of TAKAHASHI in order to improve the visibility of medical images in various usage scenes, even when a plurality of image elements are present (TAKAHASHI [0005]).
Claim(s) 8 is rejected under 35 U.S.C. 103 as being unpatentable over US 20220383466 A1 Tsukuda; Akira et al. in view of US 20210118193 A1 Torii; Sota et al.
Consider Claim 8
Tsukuda teaches The image processing device according to claim 1, wherein the other compositions are a bone part and a soft part (Tsukuda [0050] “..It is assumed that in FIGS. 3A and 3B, the image contains three types of materials, namely a guide wire (which may alternatively be a stent, a coil etc.) serving as a target object, a bone, and a soft tissue ..”).
Tsukuda does not explicitly disclose fat and muscle.
Torii teaches wherein the first composition is an artificial object, and the other compositions are a bone part, fat, and muscle (Torii Fig. 10, [0109] FIG. 12 is a diagram exemplarily showing regions of interest in a radiation image. As shown in FIG. 12, it is permissible to perform location recognition by executing image processing with respect to the radiation image, extract a first material (e.g., body fat), a second material (e.g., bones), and a third material (e.g., a medical device, such as a catheter and a stent), and use the values of corresponding effective atomic numbers of the materials as effective atomic numbers in the energy table…”).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the invention of Tsukuda to include the noted teachings of Torii in order to enable reconstruction of a radiation image by setting different radiation energies for respective materials (Torii [0007]).
Claim(s) 11 is rejected under 35 U.S.C. 103 as being unpatentable over US 20220383466 A1 Tsukuda; Akira et al. in view of US 20220092787 A1 CUI; Kai et al.
Consider Claim 11
Tsukuda teaches The image processing device, wherein the processor derives the first composition image by detecting an artificial object region in the at least one radiation image (¶90 “A soft tissue removal image serving as a bone mask image (steps S901 and S902) and a soft tissue removal image that contains the bone and the target objects (contrast agent and stent) are obtained (steps S903 and S904) up to the above.”),
removing the detected artificial object region from the at least one radiation image (¶90 “The frequency decomposition unit 116 removes bone components from the soft tissue removal image obtained in step S904 by performing this subtraction, and obtains a subtraction image in which only the contrast agent and the stent remain (step S905)”).
Tsukuda does not teach interpolating the removed artificial object region in the at least one radiation image by pixel values of surrounding regions to derive a first interpolated radiation image, and by deriving a difference between corresponding pixels of the at least one radiation image and the first interpolated radiation image.
CUI teaches interpolating the removed artificial object region in the at least one radiation image by pixel values of surrounding regions to derive a first interpolated radiation image (CUI ¶117 “the image features may be restored using a convolution operation and an interpolation algorithm on the image features.”), and
by deriving a difference between corresponding pixels of the at least one radiation image and the first interpolated radiation image (CUI ¶151 “..if a difference between the output of a training model and the reference binary image satisfies a predetermined difference requirement, updated model parameters of the training model may be obtained.”).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the invention of Tsukuda to include the noted teachings of CUI in order to restore image features (CUI ¶119).
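As a purely illustrative sketch of the claimed interpolation-and-difference operation for claim 11 (not CUI's algorithm, which restores learned image features via convolution and interpolation), one simple variant fills the removed region with the mean of the surrounding pixels and then takes a pixel-wise difference; the image values and mask below are made up:

```python
import numpy as np

def interpolate_and_difference(radiation, object_mask):
    """Fill the removed artificial-object region with the mean of the
    surrounding pixels, then take a pixel-wise difference (sketch only)."""
    interpolated = radiation.copy()
    # Crude interpolation: mean of all pixels outside the removed region.
    interpolated[object_mask] = radiation[~object_mask].mean()
    return radiation - interpolated  # nonzero only where the object was

img = np.array([[1.0, 1.0, 1.0],
                [1.0, 9.0, 1.0],
                [1.0, 1.0, 1.0]])
mask = np.zeros_like(img, dtype=bool)
mask[1, 1] = True  # hypothetical detected artificial-object pixel

diff = interpolate_and_difference(img, mask)
```

The difference image isolates the artificial-object contribution, which is the role the claimed first composition image plays in the limitation.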
Pertinent Prior Art
The prior art made of record, though not relied upon in the current rejection, is considered pertinent to applicant's disclosure:
a. US 20240081761 A1 TAKAHASHI; Tomoyuki
Claim 1. An image processing device comprising: at least one processor, wherein the processor specifies a target bone, which is a target of evaluation, by excluding a fracture and an artificial object in a bone part image in which at least a bone component of a subject is extracted, and derives an evaluation result indicating a state of a bone of the subject based on the target bone.
b. US 20090080755 A1 KAWAMURA; Takahiro et al.
[0020], Fig. 2, 4: composition types are given in Fig. 2 (soft tissue, bone, plaster, catheter); see also [0022], where additional composition types are listed: glass, plastic, metal, or the like.
[0020] The image processor 20 performs extraction or removal of a specific object within the subject 12 from a radiation image by carrying out weighted subtraction using a plurality of pieces of radiation image information obtained at different radiation energies. The weighted subtraction is computed as
S = α·S₁ + S₂
where S is the resultant piece of radiation image information, S₁ and S₂ are the pieces of radiation image information obtained with the first and second image capturing conditions, respectively, and α is a weighting coefficient.
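For illustration only, KAWAMURA's weighted subtraction S = α·S₁ + S₂ can be sketched with made-up pixel values; a negative α cancels pixels of a material whose intensity ratio S₂/S₁ equals −α, which is how a specific object is removed:

```python
import numpy as np

def weighted_subtraction(s1, s2, alpha):
    """S = alpha * S1 + S2; pixels where S2/S1 == -alpha cancel out."""
    return alpha * s1 + s2

# Hypothetical first/second-condition images: pixel 0 is the material
# to be removed (ratio 0.5); pixel 1 is everything else.
s1 = np.array([2.0, 4.0])
s2 = np.array([1.0, 3.0])
s = weighted_subtraction(s1, s2, alpha=-0.5)
```

With α = −0.5 the first pixel cancels while the second survives, mirroring the extraction or removal of a specific object described in KAWAMURA [0020].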
c. US 20220092787 A1 CUI; Kai et al.
CUI [0127]: “..the trained metal detection model may be a model used for determining a metal image with respect to the X-ray image. For example, the trained metal detection model may be trained from a neural network model for category semantic perception. The neural network model for category semantic perception may be a deep neural network model capable to recognize different types of target objects..”
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UMAIR AHSAN, whose telephone number is (571) 272-1323. The examiner can normally be reached Monday through Friday, 10 AM-5 PM ET, or by emailing UMAIR.AHSAN@USPTO.GOV.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alison Slater can be reached on (571) 270-0375. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit:
https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UMAIR AHSAN/Primary Examiner, Art Unit 2647