DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 6-7, 10, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN 112842371), hereinafter Chen, in view of Ye et al. (CN 102693540), hereinafter Ye, and further in view of Li et al. (Li, Zhuowei, et al., “Segmentation to Label: Automatic Coronary Artery Labeling from Mask Parcellation”, 2020), hereinafter Li et al.
As to Claim 1, Chen teaches a medical image processing apparatus (see paragraph [0004], “this disclosure provides at least one image processing method, apparatus, electronic device, and storage medium”),
a processor (see paragraph [0094], “this disclosure provides an electronic device, including: a processor”),
and a storage device that stores a program to be executed by the processor (see paragraph [0095], “this disclosure provides a computer readable storage medium storing a computer program, which, when run by a processor, performs the steps of the image processing method”)
wherein the program includes a trained model generated by performing machine learning using training data that includes first input data including a first image regarding a liver (see paragraph [0116], “For example, a three-dimensional liver image of the target object can be acquired, and the three-dimensional liver image can be input into a trained first neural network”),
and portal vein branch labeling data in which a portal vein branch label is attached to a portal vein region in the liver in the first image for each portal vein branch corresponding to a hepatic segment (see paragraph [0117], “And a second training sample labeled with hepatic vein branches and/or portal vein branches can be used to train the second neural network for multiple rounds until the trained second neural network meets the preset conditions”),
the trained model is a model obtained by updating parameters of a learning model (see paragraph [0117], “And a second training sample labeled with hepatic vein branches and/or portal vein branches can be used to train the second neural network for multiple rounds until the trained second neural network meets the preset conditions. The preset conditions can be that the accuracy is greater than a set accuracy threshold; or, the preset conditions can be that the loss value is less than a set loss threshold, etc. The accuracy threshold or loss threshold can be set as needed”, where the loss is a parameter generated by the learning model).
accept second input data which is a same type of input data as the first input data and includes a second image regarding the liver (see paragraph [0090], “an acquisition module, configured to acquire at least one detection image of a target object… and a second detection image of a second target region of the target object”, and see paragraph [0115], “the second target area can be the entire liver area”),
assign the portal vein branch label to each image unit element of a second image region of the second image (see paragraph [0271], “Furthermore, based on the second category corresponding to each second pixel to be classified, the image of the first target region can be hierarchically divided to obtain the target image, where the image of the first target region is the contour image corresponding to the portal vein segmentation region in the first detection image. For example, different colors can be used to label different second categories, and different categories of pixels on the portal vein segmentation region image can be distinguished by different colors, thus realizing the hierarchical division of the portal vein detection image”),
and divide a liver region included in the second input data into a plurality of the hepatic segments (see paragraph [0044], “Using the above method, the multiple liver segments included in the second detection image are divided according to the second liver region division method to generate a second liver segmentation image containing different liver region contour areas”, and see Fig. 4, showing different liver segments).
Chen fails to teach that the liver is divided on the basis of the portal vein branch label assigned to each image unit element of the second image region. Instead, the liver segmentation is done first, and then the portal veins are classified based on their location within the liver segment (see paragraphs [0022] – [0025]).
However, Ye teaches a method in which portal veins of the liver are identified (see paragraph [0053], “segmentation module 40 carries out segmentation according to the center line of the said portal vein blood vessel behind the mark to said liver”),
and sections of the liver corresponding to each portal vein are marked (see paragraph [0058], “In step S203, the marking module 30 marks different liver segments on the center line of the portal vein blood vessel differently”),
and each imaging unit can be given a portal vein branch label to divide the liver (see paragraph [0101], “At last, repeat a step, all voxels on liver organization all obtain mark. In the regional corresponding liver of same tag value one section so then acquires the result of liver segmentation”, where each voxel is an imaging unit).
Ye is combinable with Chen as both are from the analogous field of medical image analysis. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the division method taught by Ye with the neural network taught by Chen. The motivation for doing so would be to increase the accuracy of division by accounting for differences in the position of portal veins between patients. Ye teaches in paragraph [0004], “Though Couinaud hepatic segments partitioning is practical; But obvious defects is arranged also… [it is] not suitable for the individual difference situation of clinical individual patient, and because there is great anatomic differences in branch of portal vein at aspects such as shape, size, numbers.”
Chen produces a classification for each imaging unit (pixel) in the liver image. However, the neural network itself does not produce the classification. Instead, the neural network is used to identify the portal veins, and then an algorithm is used to assign a portal vein label to each pixel according to its corresponding hepatic segment. Additionally, the division process taught by Ye is automated, but is not performed by a neural network.
However, Li et al. teaches a model that receives annotated arteries, and then uses training and a loss function to predict labels (see page 132, Section 2.2, “During training, ground truth coronary artery mask is generated as follows: we first annotate tree-structured centerlines in segment-level, then each voxel in coronary artery mask is assigned with the same label as its nearest centerline point” and see page 133, section 2.2, “In general, normalized entropy of predicted labels within each segment is calculated, then entropies of all segments are weighted summed.”, where labels are predicted through the use of a loss function, which is part of a trainable model).
Li et al. is combinable with Chen and Ye since all three are from the analogous field of medical image analysis. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the neural network of Li et al. with the teachings of Chen and Ye. The motivation for doing so would be to increase the robustness of the labels. Li et al. teaches on page 133, section 2.3, “To sum up, we perform a point-level voting following by a segment-level voting. By doing so, the final labeling result is impressively robust against noises and segmentation corruption.” Thus, it would have been obvious to combine the teachings of Chen, Ye, and Li et al. in order to obtain the invention as claimed in Claim 1.
As to Claim 6, Chen in view of Ye and Li et al. teaches wherein the first image region is an entire region of the first image, and the second image region is an entire region of the second image (see paragraph [0116], “For example, a three-dimensional liver image of the target object can be acquired, and the three-dimensional liver image can be input into a trained first neural network to obtain a three-dimensional liver segment detection image; and the three-dimensional liver image can be input into a trained second neural network to obtain a three-dimensional vein detection image”, where both images input into the neural networks contain the entire liver area).
As to Claim 7, Chen in view of Ye and Li et al. teaches wherein the portal vein branch label is a label for classifying the portal vein branch into eight classes corresponding to eight types of the hepatic segments from S1 to S8 (see paragraph [0098], “According to this method, the nearest principles of pressing that hepatic tissue is all are divided in the corresponding vessel branch”, and see paragraph [0044], “Through experimental verification, the above-mentioned method of segmentation of liver segments conforms to Couinaud's segmentation of liver segments”, where Couinaud’s segmentation is a term of the art used to describe the eight types of hepatic segments).
As to Claim 10, Chen in view of Ye and Li et al. teaches wherein each of the first image and the second image is a three-dimensional image (see Chen, paragraph [0114] and [0115], “The first detection image can include: the hepatic vein detection image representing the hepatic vein contour information, and/or, the portal vein detection image representing the portal vein contour information …The hepatic vein detection image, portal vein detection image, and second detection image can be three-dimensional images, such as computed tomography (CT) images”).
As to Claim 14, Claim 14 claims a hepatic segment division method (see Chen, paragraph [0004], “this disclosure provides at least one image processing method”), the method comprising the same steps executed by the medical image processing apparatus claimed in Claim 1. Therefore, the rejection and rationale are analogous to that made in Claim 1.
As to Claim 15, Claim 15 claims a non-transitory, computer-readable tangible recording medium (see paragraph [0095], “this disclosure provides a computer readable storage medium storing a computer program, which, when run by a processor, performs the steps of the image processing method as described in the first aspect or any embodiment above”), which records thereon a program that causes a computer to operate as a medical image processing apparatus, which is the same as the medical image processing apparatus as claimed in Claim 1. Therefore, the rejection and rationale are analogous to that made in Claim 1.
Claims 2-5, 11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN 112842371), in view of Ye et al. (CN 102693540), further in view of Li et al. (Li, Zhuowei, et al., “Segmentation to Label: Automatic Coronary Artery Labeling from Mask Parcellation”, 2020), and further in view of Liu Li et al. (CN 111161241), hereinafter Li.
As to Claim 2, Chen in view of Ye and Li et al. teaches wherein the first input data includes at least one of a computed tomography (CT) image in which a region including the liver is imaged (see Chen, paragraph [0092], “CT is used to sample the patient's abdominal cavity at intervals to obtain a CT image sequence of multiple locations of the liver”).
Chen also teaches a portal vein detection image (see paragraph [0114], “The first detection image can include: the hepatic vein detection image representing the hepatic vein contour information, and/or, the portal vein detection image representing the portal vein contour information”), but fails to teach that a mask image of the portal veins is created. However, Li teaches a hepatic vessel segmentation method (see abstract) that can be used to obtain a portal vein mask image sequence (see paragraph [0080], “Step 103: Based on the liver region mask image sequence and the liver vascular mask image sequence, determine the portal vein image sequence and the hepatic vein image sequence”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the portal vein mask image sequence taught by Li with the teachings of Chen, Ye, and Li et al. The motivation for doing so would be to increase the segmentation accuracy and remove unnecessary body tissue. Li teaches in paragraph [0079], “In this embodiment of the application, after obtaining the medical image sequence, since the medical image sequence also contains image content of other body tissues besides liver tissue, in order to improve the segmentation accuracy of the liver to be segmented, it is necessary to extract and identify the image of the liver tissue in the obtained medical image sequence to obtain the image of the liver region, that is, the liver region mask image sequence. In addition, in order to further improve the segmentation accuracy of the liver to be segmented, the blood vessels related to the liver in the medical image sequence are also extracted and identified to obtain the liver blood vessel mask image sequence”. Thus, it would have been obvious to combine the teachings of Li with the teachings of Chen, Ye, and Li et al. in order to obtain the invention as claimed in Claim 2.
As to Claim 3, Chen in view of Ye and Li et al. teaches a CT image but fails to explicitly teach a portal vein mask image. However, Li teaches a hepatic vessel segmentation method that can be used to obtain a portal vein mask image sequence (see paragraph [0080], “Step 103: Based on the liver region mask image sequence and the liver vascular mask image sequence, determine the portal vein image sequence and the hepatic vein image sequence”).
As to Claim 4, Chen in view of Ye and Li et al. fails to teach that the first input data further includes at least one of a liver mask image in which a liver region is specified, a vein mask image in which a vein region is specified, or an inferior vena cava mask image in which an inferior vena cava region is specified. However, Li teaches a liver mask image (see paragraph [0094], “When obtaining the liver region mask image sequence and the liver blood vessel mask image sequence, the pixel values belonging to the liver in each image of the medical image sequence are set to 1, and the corresponding pixels other than the liver are set as background. The corresponding background pixels can be set to 0, thus obtaining the liver region mask image sequence”). Thus, it would have been obvious to combine the teachings of Li with the teachings of Chen, Ye, and Li et al. in order to improve segmentation accuracy as earlier discussed in Claim 2.
As to Claim 5, Chen in view of Ye and Li et al. fails to teach that the first input data includes the portal vein mask image, the liver mask image, and the vein mask image. However, Li teaches a portal vein mask image sequence (see paragraph [0080], “portal vein image sequence”),
a liver mask image (see paragraph [0080], “liver region mask image sequence”),
and a vein mask image sequence (see paragraph [0080], “liver vascular mask image sequence”).
Thus, it would have been obvious to combine the multiple masks taught by Li with the teachings of Chen, Ye, and Li et al. in order to obtain the invention as claimed in Claim 5. The motivation for doing so would be to increase the segmentation accuracy, as earlier discussed in Claim 2.
As to Claim 11, Chen in view of Ye and Li et al. fails to explicitly teach that the processor performs labeling of a hepatic segment label indicating the hepatic segment on the basis of the portal vein branch label assigned to each image unit element of the second image region. Ye teaches that a label can be assigned to each voxel based on the portal vein (see paragraph [], “According to this method, the nearest principles of pressing that hepatic tissue is all are divided in the corresponding vessel branch. It is the process of voxel classification on computers that hepatic segments is divided, and makes arbitrary voxel of liver that a segment value all arranged”), but fails to teach that a separate label is created.
However, Li teaches that another label corresponding to hepatic segments can be generated on the basis of the location (see paragraph [0183] and paragraph [0187], “Based on the upper and lower segment image sequences of the right portal vein, the right anterior region image sequence and the right posterior region image sequence are segmented sequentially to obtain the upper segment image sequence of the right anterior region, the lower segment image sequence of the right anterior region, the upper segment image sequence of the right posterior region, and the lower segment image sequence of the right posterior region in the target liver segment… Mark the upper segment image sequence of the left lateral region as segment 2, the lower segment image sequence of the left lateral region as segment 3, the left medial region image sequence as segment 4, the upper segment image sequence of the right anterior region as segment 5, the upper segment image sequence of the right posterior region as segment 6, the lower segment image sequence of the right posterior region as segment 7, and the lower segment image sequence of the right anterior region as segment 8”, where segments 2-8 are labels for the hepatic segments).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the hepatic segment labels taught by Li with the teachings of Chen, Ye, and Li et al. The motivation for doing so would be to create accurate hepatic segment labels. Li teaches in paragraph [0005], “In order to solve the above-mentioned technical problems, the embodiments of this application expect to provide a liver image recognition method, electronic device and storage medium, which solves the problem that the liver part cannot be segmented automatically in the prior art, and improves the accuracy of the electronic device in segmenting the liver part.” Thus, it would have been obvious to one of ordinary skill in the art to combine the teachings of Li with the teachings of Chen, Ye, and Li et al. in order to obtain the invention as claimed in Claim 11.
As to Claim 13, Chen in view of Ye and Li et al. teaches a hepatic segment division image in which a region is divided into the hepatic segments (see Chen, Fig. 4, hepatic segments of the liver are labeled).
However, Chen in view of Ye and Li et al. fails to explicitly teach that the hepatic segments are generated by converting the portal vein branch label assigned to each image unit element of the second image region into the hepatic segment label.
Ye teaches that a label can be assigned to each voxel based on the portal vein (see paragraph [], “According to this method, the nearest principles of pressing that hepatic tissue is all are divided in the corresponding vessel branch. It is the process of voxel classification on computers that hepatic segments is divided, and makes arbitrary voxel of liver that a segment value all arranged”), but fails to teach that a separate label is created.
However, Li teaches that another label corresponding to hepatic segments can be generated on the basis of the location (see paragraph [0183] and paragraph [0187], “Based on the upper and lower segment image sequences of the right portal vein, the right anterior region image sequence and the right posterior region image sequence are segmented sequentially to obtain the upper segment image sequence of the right anterior region, the lower segment image sequence of the right anterior region, the upper segment image sequence of the right posterior region, and the lower segment image sequence of the right posterior region in the target liver segment… Mark the upper segment image sequence of the left lateral region as segment 2, the lower segment image sequence of the left lateral region as segment 3, the left medial region image sequence as segment 4, the upper segment image sequence of the right anterior region as segment 5, the upper segment image sequence of the right posterior region as segment 6, the lower segment image sequence of the right posterior region as segment 7, and the lower segment image sequence of the right anterior region as segment 8”, where segments 2-8 are labels for the hepatic segments).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to convert the portal vein labels taught by Ye to the hepatic segment labels taught by Li. The motivation for doing so would be to increase the accuracy of hepatic segmentation. Li teaches in paragraph [0005], “In order to solve the above-mentioned technical problems, the embodiments of this application expect to provide a liver image recognition method, electronic device and storage medium, which solves the problem that the liver part cannot be segmented automatically in the prior art, and improves the accuracy of the electronic device in segmenting the liver part.” Thus, it would have been obvious to combine the teachings of Li with the teachings of Chen, Ye and Li et al. in order to obtain the invention as claimed in Claim 13.
Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN 112842371), in view of Ye et al. (CN 102693540), further in view of Li et al. (Li, Zhuowei, et al., “Segmentation to Label: Automatic Coronary Artery Labeling from Mask Parcellation”, 2020), and further in view of Ke-Feng Li et al. (CN 112733708), hereinafter Ke-Feng.
As to Claim 8, Chen in view of Ye and Li et al. fails to teach wherein the trained model is configured using a convolutional neural network. However, Ke-Feng teaches a convolutional neural network (see paragraph [0069], “the data is processed by a convolutional neural network”) which can be used to identify hepatic portal veins (see paragraph [0069], “extracting features through convolution operations, and using the softmax function to calculate the probability of whether the extracted feature is the portal vein during prediction”). Ke-Feng is combinable with Chen, Ye, and Li et al. since all four are from the analogous field of medical image analysis. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ke-Feng with the teachings of Chen, Ye, and Li et al. The motivation for doing so would be to reduce the need for long manual annotation times and increase the accuracy of the model. Ke-Feng teaches in paragraph [0005], “The purpose of this invention is to provide a method and system for portal vein detection and localization based on semi-supervised learning, aiming to solve the problems of long manual annotation time and weak targeting of portal vein detection and localization in the prior art, thereby reducing the cost of manual annotation and improving the localization accuracy in a targeted manner”. Thus, it would have been obvious to one of ordinary skill to combine the convolutional neural network taught by Ke-Feng with the teachings of Chen, Ye, and Li et al. in order to obtain the invention as claimed in Claim 8.
As to Claim 9, Chen teaches calculating a loss only for a portal vein region in which the portal vein branch label is attached, in the portal vein branch labeling data corresponding to the first input data (see Chen, paragraph [0117], “And a second training sample labeled with hepatic vein branches and/or portal vein branches can be used to train the second neural network for multiple rounds until the trained second neural network meets the preset conditions. The preset conditions can be that the accuracy is greater than a set accuracy threshold; or, the preset conditions can be that the loss value is less than a set loss threshold, etc. The accuracy threshold or loss threshold can be set as needed”).
Chen, however, fails to teach a score map indicating a probability of the portal vein branch label output from the learning model. However, Ke-Feng teaches a convolutional neural network that can calculate the probability that a feature is a portal vein (see paragraph [0069], “the data is processed by a convolutional neural network with added prior knowledge, extracting features through convolution operations, and using the softmax function to calculate the probability of whether the extracted feature is the portal vein during prediction”, and multiple features can contain scores, thus creating a ‘score map’ for an image containing multiple features). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the convolutional neural network taught by Ke-Feng with the teachings of Chen, Ye, and Li et al. The motivation for doing so would be to reduce the need for long manual annotation time and increase the accuracy of the model, as discussed earlier in Claim 8.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Chen (CN 112842371), in view of Ye et al. (CN 102693540), further in view of Li et al. (Li, Zhuowei, et al., “Segmentation to Label: Automatic Coronary Artery Labeling from Mask Parcellation”, 2020), further in view of Liu Li et al. (CN 111161241), hereinafter Li, and further in view of Masahiko et al. (JP 2003070782), hereinafter Masahiko.
As to Claim 12, Chen in view of Ye and Li et al. fails to explicitly teach that the processor extracts a liver region from the CT image included in the second input data. However, Li teaches that a liver region can be extracted by using a liver mask (see paragraph [0094], “When obtaining the liver region mask image sequence and the liver blood vessel mask image sequence, the pixel values belonging to the liver in each image of the medical image sequence are set to 1, and the corresponding pixels other than the liver are set as background. The corresponding background pixels can be set to 0, thus obtaining the liver region mask image sequence”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Li with the teachings of Chen, Ye, and Li et al. The motivation for doing so would be to improve the accuracy of the segmentation of the liver, as taught by Li in paragraph [0079].
Chen in view of Ye, Li et al., and Li fails to teach that label information labeled for a region other than the extracted liver region may be invalidated. However, Masahiko teaches an image processor for computer tomography images (see abstract) that can remove unnecessary labels from regions labeled outside of a target region (see paragraph [0009], “Therefore, the tomographic image is converted into a binary image by the binary image creating means, and then the binary image is labeled by the labeling means. The labeling process is a process in which all connected pixels (connected components) are given the same value, and different connected components are given different values. Then, the maximum connected component extracting means extracts the maximum connected component (area having the maximum area) among the connected components, and the contour creating means creates the outermost contour of the maximum connected component. The deleting means deletes the pixels existing outside the outermost contour of the maximum connected component from the tomographic image. As a result, only the tomographic image of the region of the subject necessary for diagnosis remains”). Masahiko is combinable with Chen in view of Ye, Li et al., and Li since all are from the analogous field of medical image analysis. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the label removal process taught by Masahiko with the teachings of Chen, Ye, Li et al., and Li. The motivation for doing so would be to only include relevant information within the image, as taught by Masahiko in paragraph [0009].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Zhang et al. (Zhang, Qin, et al., “An Efficient and Clinical-Oriented 3D Liver Segmentation Method”, 2017) teaches an algorithm for extracting hepatic portal veins, labelling the portal veins according to their respective hepatic segment, and then segmenting the liver according to the portal veins.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOUMYA THOMAS whose telephone number is (571)272-8639. The examiner can normally be reached M-F 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.T./ Examiner, Art Unit 2664
/JENNIFER MEHMOOD/ Supervisory Patent Examiner, Art Unit 2664