Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED OFFICE ACTION
Status of Claims:
Claims 1-15 are pending and have been examined.
Claim Objection
Claim 4 is objected to because of a minor spelling error: “image into a depth prdiction”.
Applicant is requested to correct the error to: “image into a depth prediction”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b) (2) (C) for any potential 35 U.S.C. 102(a) (2) prior art against the later invention.
1. Claims 1 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Abhinav Valada et al. (NPL: "Self-Supervised Model Adaptation for Multimodal Semantic Segmentation," 8 July 2019, International Journal of Computer Vision (2020) 128, pages 1239-1257) in view of Yawen Lu et al. (NPL: "Multi-Task Learning for Single Image Depth Estimation and Segmentation Based on Unsupervised Network," 15 September 2020, IEEE International Conference on Robotics and Automation (ICRA), 31 May - 31 August 2020, Paris, France, pages 10788-10792).
As per claim 1, Abhinav Valada et al. teaches A multimodal neural network model, NNM (Page 1240, Col. 2: “…architecture for multimodal segmentation consists of individual modality-specific encoder streams which are fused both at mid-level stages and at the end of the encoder streams using our SSMA blocks. The fused representations are input to the decoder at different stages for upsampling and refining the predictions. Note that only the multimodal SSMA fusion mechanism is self-supervised, the semantic segmentation is trained in a supervised manner…”), comprising: an encoder (Page 1245, Fig. 3, Encoder; and Page 1244, Col. 2, Sect. 3.1 Encoder: “…proposed architecture for multimodal fusion, our objective is to design a topology that has a reasonable model size so that two individual modality-specific networks can be trained in a fusion framework and deployed on a single GPU…”); a depth decoder (Page 1244, Col. 2: “…Finally, the output of the eASPP is fed into our proposed deep decoder with skip connections for upsampling and refining the semantic pixel-level prediction (Color figure online)…”; Page 1252, Fig. 9, showing a decoder for decoding and estimation of depth; and Page 1247, Col. 1: “…Our decoder shown in Fig. 5 consists of three stages. In the first stage, the output of the eASPP is upsampled by a factor of two using a deconvolution layer to obtain a coarse segmentation mask. The upsampled coarse mask is then passed through the second stage, where the feature maps are concatenated with the first skip refinement from Res3d. The skip refinement consists of a 1 × 1 convolution layer to reduce the feature depth in order to not outweigh the encoder features…”);
Abhinav Valada et al. does not explicitly teach a semantic segmentation decoder coupled to the encoder operable to determine semantic labels from the image.
However, within analogous art, Yawen Lu et al. teaches a semantic segmentation decoder coupled to the encoder operable to determine semantic labels from the image (encoder-decoder pair taught within Page 10789, Fig. 2; Col. 2: “…Based on FCN, a variety of research is conducted to perform semantic segmentation…using training labels. Our method can use any training data without labels. In [19], an efficient algorithm for fully connected conditional random field model is utilized to refine the output segmentation map…”; and Sect. III, JOINT LEARNING FRAMEWORK).
One of ordinary skill in the art would have been motivated to combine the teaching of Yawen Lu et al. with the teaching of Abhinav Valada et al. (Self-Supervised Model Adaptation for Multimodal Semantic Segmentation) because Yawen Lu et al. (Multi-Task Learning for Single Image Depth Estimation and Segmentation Based on Unsupervised Network) provides a method and system for depth and segmentation learning from images utilizing a deep neural network model.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the multi-task learning of Yawen Lu et al. within the multimodal segmentation framework of Abhinav Valada et al. in order to implement a system and method for depth and segmentation learning from images utilizing a deep neural network model.
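For illustration of the claim mapping above only: the following is a minimal sketch of a single encoder feeding a depth decoder and a semantic segmentation decoder, assuming a PyTorch-style framework. All class names, layer sizes, and the class count are hypothetical and are not drawn from the claims or from the cited references.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Toy convolutional encoder standing in for the modality-specific encoder."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(feat_ch),
            nn.ReLU6(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class DepthDecoder(nn.Module):
    """Upsamples encoder features to a single-channel depth prediction."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(feat_ch, 1, 1),  # pointwise convolution to one channel
        )

    def forward(self, f):
        return torch.sigmoid(self.up(f))  # bounded output, later mapped to depth

class SegDecoder(nn.Module):
    """Upsamples encoder features to a C-channel per-pixel score map."""
    def __init__(self, feat_ch=64, num_classes=19):
        super().__init__()
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(feat_ch, num_classes, 1),
        )

    def forward(self, f):
        return self.up(f)

class MultiTaskModel(nn.Module):
    """Single encoder coupled to both task-specific decoders."""
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()
        self.depth_head = DepthDecoder()
        self.seg_head = SegDecoder()

    def forward(self, image):
        f = self.encoder(image)
        return self.depth_head(f), self.seg_head(f)
```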
As per claim 10, Abhinav Valada et al. teaches A method (Page 1240, Col. 2: “…architecture for multimodal segmentation consists of individual modality-specific encoder streams which are fused both at mid-level stages and at the end of the encoder streams using our SSMA blocks. The fused representations are input to the decoder at different stages for upsampling and refining the predictions. Note that only the multimodal SSMA fusion mechanism is self-supervised, the semantic segmentation is trained in a supervised manner…”) for semantic segmentation and depth estimation, comprising: receiving and encoding, at an encoder (Page 1245, Fig. 3, Encoder; and Page 1244, Col. 2, Sect. 3.1 Encoder: “…proposed architecture for multimodal fusion, our objective is to design a topology that has a reasonable model size so that two individual modality-specific networks can be trained in a fusion framework and deployed on a single GPU…”),
Abhinav Valada et al. does not explicitly teach a plurality of images; sending the encoded images to a depth decoder and a semantic segmentation decoder, wherein both the depth decoder and the semantic segmentation decoder are coupled to the encoder; estimating, at the depth decoder, the depths from the images; comparing the estimated depths of the images with the actual depths of the images to calculate a depth loss; determining, at the semantic segmentation decoder, semantic labels from the images; comparing the determined semantic labels of the images with the actual labels of the images to calculate a semantic segmentation loss; and optimising the depth loss and segmentation loss.
However, within analogous art, Yawen Lu et al. teaches a plurality of images (Page 10789, Fig. 2, showing multiple images); sending the encoded images to a depth decoder and a semantic segmentation decoder (Page 10788, Fig. 1, teaching an encoder coupled with a dual decoder for depth and semantic segmentation), wherein both the depth decoder and the semantic segmentation decoder are coupled to the encoder (Page 10788, Fig. 1, teaching an encoder coupled with a dual decoder for depth and semantic segmentation; and Col. 2: “…unsupervised multi-task learning framework for simultaneous single image depth estimation and image segmentation…”); estimating, at the depth decoder, the depths from the images (Page 10789, Col. 1: “…Single Image Depth Estimation based on deep neural network mainly relies on ground truth labels to train the model, which generates promising results. Eigen et al. [8] used two deep networks to perform a multi-scale deep network with a scale-invariant loss function for depth estimation…”); comparing the estimated depths of the images with the actual depths of the images to calculate a depth loss (Page 10789, Fig. 2, showing multiple image inputs; and Col. 1: “…work mainly relies on ground truth labels to train the model, which generates promising results. Eigen et al. [8] used two deep networks to perform a multi-scale deep network with a scale-invariant loss function for depth estimation. … deal with the depth estimation problem based on deep CNN and Conditional Random Field (CRF) learning. A sequential network using CRF and CNN is then deployed for single depth estimation…”); determining, at the semantic segmentation decoder, semantic labels from the images (Page 10790, Col. 2: “…our proposed method can adaptively determine the number of segments in multiple different scene images by utilizing the pixel-wise cluster labels predicted from the deep neural network in the first step…”); comparing the determined semantic labels of the images with the actual labels of the images to calculate a semantic segmentation loss (Page 10791, Col. 1: “…where Lmatch, Lsmooth, Lconsis and Lseg are appearance matching loss, disparity smoothness loss, consistency loss and segmentation loss respectively that constrain the single image depth estimation and segmentation…”); and optimising the depth loss and segmentation loss (Page 10790, Col. 2: “…we train a network for jointly single image depth estimation and image segmentation. Our scheme optimizes the network based on multiple spatial and spectral constraints through a weighted sum of each loss term with an L2 regularization:…”).
One of ordinary skill in the art would have been motivated to combine the teaching of Yawen Lu et al. with the teaching of Abhinav Valada et al. (Self-Supervised Model Adaptation for Multimodal Semantic Segmentation) because Yawen Lu et al. (Multi-Task Learning for Single Image Depth Estimation and Segmentation Based on Unsupervised Network) provides a method and system for depth and segmentation learning from images utilizing a deep neural network model.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the multi-task learning of Yawen Lu et al. within the multimodal segmentation framework of Abhinav Valada et al. in order to implement a system and method for depth and segmentation learning from images utilizing a deep neural network model.
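For illustration of the method steps mapped above: a sketch of one joint training step that computes a depth loss and a segmentation loss and optimises their sum, again assuming a PyTorch-style framework and reusing the hypothetical MultiTaskModel from the earlier sketch. The specific loss functions are placeholders, not those of the claims or of the cited references.

```python
import torch
import torch.nn.functional as F

model = MultiTaskModel()  # hypothetical model from the earlier sketch
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, gt_depths, gt_labels):
    depth_pred, seg_scores = model(images)
    # compare estimated depths with actual depths -> depth loss
    depth_loss = F.l1_loss(depth_pred, gt_depths)
    # compare determined semantic labels with actual labels -> segmentation loss
    seg_loss = F.cross_entropy(seg_scores, gt_labels)
    total = depth_loss + seg_loss  # both losses optimised jointly
    optimiser.zero_grad()
    total.backward()
    optimiser.step()
    return total.item()
```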
2. Claims 2, 3, 7, 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Abhinav Valada et al. (NPL: "Self-Supervised Model Adaptation for Multimodal Semantic Segmentation," 8 July 2019, International Journal of Computer Vision (2020) 128, pages 1239-1257) in view of Yawen Lu et al. (NPL: "Multi-Task Learning for Single Image Depth Estimation and Segmentation Based on Unsupervised Network," 15 September 2020, IEEE International Conference on Robotics and Automation (ICRA), 31 May - 31 August 2020, Paris, France, pages 10788-10792) and further in view of Mark Sandler et al. (NPL: "MobileNetV2: Inverted Residuals and Linear Bottlenecks," June 2018, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pages 4510-4517).
As per claim 2, the combination of Abhinav Valada et al. and Yawen Lu et al. teaches claim 1.
Abhinav Valada et al. teaches wherein the encoder is a convolutional neural network (Page 1244, Col. 2: “…Encoders are the foundation of fully convolutional neural network architectures. Therefore, it is essential to build upon a good baseline that has a high representational ability conforming with the computational budget. Our critical requirement is to achieve the right trade-off between the accuracy of segmentation and inference…”) comprising: a first layer operable to receive the image and subsequently perform convolution (Page 1245, Fig. 3, teaching an input image to the encoder of a convolutional network), batch normalisation and a non-linearity function on the image (Page 1246, Col. 1: “…followed by a 1×1 convolution and bilinear upsampling to yield an output with the same dimensions as the input feature map. All the convolutions have 256 filters and batch normalization layers to improve training. Finally, the resulting feature maps from each of the parallel branches are concatenated and passed through another 1×1 convolution with batch normalization to yield 256 output filters…”);
Abhinav Valada et al. does not explicitly teach a second layer following the first layer, the second layer comprising a plurality of inverted residual blocks each operable to perform depthwise convolution on the image; and a third layer following the second layer, the third layer operable to perform convolution, batch normalisation and non-linearity functions on the image.
However, within analogous art, Mark Sandler et al. teaches a second layer following the first layer, the second layer comprising a plurality of inverted residual blocks each operable to perform depthwise convolution on the image (Page 4511, Col. 1: “…full convolutional operator with a factorized version that splits convolution into two separate layers. The first layer is called a depthwise convolution, it performs lightweight filtering by applying a single convolutional filter per input channel. The second layer is a 1 × 1 convolution, called a pointwise convolution, which is responsible for building new features through computing linear combinations of the input channels…”); and a third layer following the second layer, the third layer operable to perform convolution (Page 4512, Figure 3: “…how classical residuals connects the layers with high number of channels, whereas the inverted residuals connect the bottlenecks…”), batch normalisation and non-linearity functions on the image (Page 4515, Col. 1: “…We use the standard RMSProp Optimizer with both decay and momentum set to 0.9. We use batch normalization after every layer, and the standard weight decay is set to 0.00004…”).
One of ordinary skill in the art would have been motivated to combine the teaching of Mark Sandler et al. with the combined teaching of Abhinav Valada et al. (Self-Supervised Model Adaptation for Multimodal Semantic Segmentation) and Yawen Lu et al. (Multi-Task Learning for Single Image Depth Estimation and Segmentation Based on Unsupervised Network) because Mark Sandler et al. (MobileNetV2: Inverted Residuals and Linear Bottlenecks) provides a method and system for implementing depthwise convolutions to filter features as a source of non-linearity.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the inverted residuals and linear bottlenecks of Mark Sandler et al. within the combined teaching of Abhinav Valada et al. and Yawen Lu et al. in order to implement a system and method using depthwise convolutions to filter features as a source of non-linearity.
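For illustration of the claim 2 mapping: a minimal MobileNetV2-style inverted residual block performing a pointwise expansion, a depthwise convolution, and a linear bottleneck, the first two followed by batch normalisation and ReLU6, assuming a PyTorch-style framework. Channel counts are hypothetical; the stride-1, equal-channel case is shown so that the residual connection applies.

```python
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Stride-1 inverted residual block (illustrative sketch)."""
    def __init__(self, ch, expand=6):
        super().__init__()
        hidden = ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False),   # pointwise expansion
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1,
                      groups=hidden, bias=False),   # depthwise convolution
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1, bias=False),   # linear bottleneck (no ReLU6)
            nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.block(x)  # inverted residual connects the bottlenecks
```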
As per claim 3, the combination of Abhinav Valada et al., Yawen Lu et al. and Mark Sandler et al. teaches claim 2.
The combination of Abhinav Valada et al. and Mark Sandler et al. does not explicitly teach wherein the non-linearity function is Relu6.
However, within analogous art, Yawen Lu et al. teaches wherein the non-linearity function is Relu6 (Page 10792, Col. 1: “…encoder model consisting of 7 convolutional layers with Rectified Linear Units (ReLU) as the non-linear activation functions for all the convolutional layers…”).
As per claim 7, the combination of Abhinav Valada et al., Yawen Lu et al. and Mark Sandler et al. teaches claim 2.
The combination of Abhinav Valada et al. and Yawen Lu et al. does not explicitly teach wherein the image is a three-dimensional tensor with an input shape of 3 x H x W, wherein 3 represents the dimension, H represents the height, and W represents the width of the image.
However, within analogous art, Mark Sandler et al. teaches wherein the image is a three-dimensional tensor with an input shape of 3 x H x W, wherein 3 represents the dimension, H represents the height, and W represents the width of the image (Page 4511, Col. 2: “…Consider a deep neural network consisting of n layers Li each of which has an activation tensor of dimensions hi × wi × di. Throughout this section we will be discussing the basic properties of these activation tensors, which we will treat as containers of hi × wi “pixels” with di dimensions. Informally, for an input set of real images, we say that the set of layer activations (for any layer Li) forms a “manifold of interest”. It has been long assumed that manifolds of interest in neural networks could be embedded in low-dimensional subspaces…”).
One of ordinary skill in the art would have been motivated to combine the teaching of Mark Sandler et al. with the combined teaching of Abhinav Valada et al. (Self-Supervised Model Adaptation for Multimodal Semantic Segmentation) and Yawen Lu et al. (Multi-Task Learning for Single Image Depth Estimation and Segmentation Based on Unsupervised Network) because Mark Sandler et al. (MobileNetV2: Inverted Residuals and Linear Bottlenecks) provides a method and system for implementing depthwise convolutions to filter features as a source of non-linearity.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the inverted residuals and linear bottlenecks of Mark Sandler et al. within the combined teaching of Abhinav Valada et al. and Yawen Lu et al. in order to implement a system and method using depthwise convolutions to filter features as a source of non-linearity.
As per claim 8, the combination of Abhinav Valada et al., Yawen Lu et al. and Mark Sandler et al. teaches claim 7.
Abhinav Valada et al. teaches wherein the semantic segmentation decoder outputs a score map with the dimension of C x H x W, wherein C represents the number of semantic classes (Page 1249, Col. 2: “…We represent the training set for multimodal semantic segmentation as T = {(In, Kn, Mn) | n = 1, . . . , N}, where In = {ur | r = 1, . . . , ρ} denotes the input frame from modality a, Kn = {kr | r = 1, . . . , ρ} denotes the corresponding input frame from modality b and the groundtruth label is given by Mn = {mr | r = 1, . . . , ρ}, where mr ∈ {1, . . . ,C} is the set of semantic classes. The image In is only shown to the modality-specific encoder Ea and similarly, the corresponding image Kn from a complementary modality is only shown to the modality-specific encoder Eb…”).
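For illustration of the tensor shapes discussed in claims 7 and 8: a shape check using the hypothetical MultiTaskModel from the earlier sketch, with H and W chosen arbitrarily.

```python
import torch

image = torch.randn(1, 3, 256, 512)       # one 3 x H x W image (H=256, W=512)
depth, scores = MultiTaskModel()(image)   # hypothetical model defined earlier
print(depth.shape)   # torch.Size([1, 1, 256, 512])  -> 1 x H x W response map
print(scores.shape)  # torch.Size([1, 19, 256, 512]) -> C x H x W score map, C=19
```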
As per claim 11, the combination of Abhinav Valada et al. and Yawen Lu et al. teaches claim 10.
Abhinav Valada et al. teaches wherein the encoder is a convolutional neural network (Page 1244, Col. 2: “…Encoders are the foundation of fully convolutional neural network architectures. Therefore, it is essential to build upon a good baseline that has a high representational ability conforming with the computational budget. Our critical requirement is to achieve the right trade-off between the accuracy of segmentation and inference…”) comprising: a first layer operable to receive the image and subsequently perform convolution (Page 1245, Fig. 3, teaching an input image to the encoder of a convolutional network), batch normalisation and a non-linearity function on the images (Page 1246, Col. 1: “…followed by a 1×1 convolution and bilinear upsampling to yield an output with the same dimensions as the input feature map. All the convolutions have 256 filters and batch normalization layers to improve training. Finally, the resulting feature maps from each of the parallel branches are concatenated and passed through another 1×1 convolution with batch normalization to yield 256 output filters…”);
The combination of Abhinav Valada et al. and Yawen Lu et al. does not explicitly teach a second layer following the first layer, the second layer comprising a plurality of inverted residual blocks each operable to perform depthwise convolution on the images; and a third layer following the second layer, the third layer operable to perform convolution, batch normalisation and non-linearity functions on the images.
However, within analogous art, Mark Sandler et al. teaches a second layer following the first layer, the second layer comprising a plurality of inverted residual blocks each operable to perform depthwise convolution on the images (Page 4511, Col. 1: “…full convolutional operator with a factorized version that splits convolution into two separate layers. The first layer is called a depthwise convolution, it performs lightweight filtering by applying a single convolutional filter per input channel. The second layer is a 1 × 1 convolution, called a pointwise convolution, which is responsible for building new features through computing linear combinations of the input channels…”); and a third layer following the second layer, the third layer operable to perform convolution (Page 4512, Figure 3: “…how classical residuals connects the layers with high number of channels, whereas the inverted residuals connect the bottlenecks…”), batch normalisation and non-linearity functions on the images (Page 4515, Col. 1: “…We use the standard RMSProp Optimizer with both decay and momentum set to 0.9. We use batch normalization after every layer, and the standard weight decay is set to 0.00004…”).
One of ordinary skill in the art would have been motivated to combine the teaching of Mark Sandler et al. with the combined teaching of Abhinav Valada et al. (Self-Supervised Model Adaptation for Multimodal Semantic Segmentation) and Yawen Lu et al. (Multi-Task Learning for Single Image Depth Estimation and Segmentation Based on Unsupervised Network) because Mark Sandler et al. (MobileNetV2: Inverted Residuals and Linear Bottlenecks) provides a method and system for implementing depthwise convolutions to filter features as a source of non-linearity.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the inverted residuals and linear bottlenecks of Mark Sandler et al. within the combined teaching of Abhinav Valada et al. and Yawen Lu et al. in order to implement a system and method using depthwise convolutions to filter features as a source of non-linearity.
It is noted that any citations to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.
Allowable Subject Matter
3. Claims 4, 5, 6, 9, 12, 13, 14 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
4. The following is an examiner’s statement of reasons for indicating allowable subject matter:
As to claim 4, the prior art of record does not teach or suggest the limitation recited in claim 4: “…five sequential upsample block layers, each of the five sequential upsample block layers operable to perform depthwise convolution and pointwise convolution on the image received from the encoder; a sixth layer following the five sequential upsample block layers, the sixth layer operable to perform a further pointwise convolution and a sigmoid function on the image; and a seventh layer comprising logic operable to convert the sigmoid output of the image into a depth prediction.”
As to claim 5, the prior art of record does not teach or suggest the limitation recited in claim 5: “…five sequential upsample block layers, each of the five sequential upsample block layers operable to perform depthwise convolution and pointwise convolution on the image received from the encoder; a sixth layer following the five sequential upsample block layers, the sixth layer operable to perform a further pointwise convolution on the image; and a seventh layer comprising logic operable to receive a score map from the sixth layer and subsequently to determine segments of the image by taking an arg max of each score pixel vector of the image.”
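For illustration of the quoted limitations only: a sketch of how a sigmoid output might be converted to a depth prediction, and how segments can be read off a score map by taking an argmax of each pixel’s score vector. The inverse-depth parameterisation and its range constants are one common convention assumed for illustration and are not taken from the claims.

```python
import torch

def sigmoid_to_depth(s, min_depth=0.1, max_depth=100.0):
    # Map a sigmoid output in (0, 1) to a metric depth range via an
    # assumed inverse-depth parameterisation (hypothetical constants).
    inv = 1.0 / max_depth + (1.0 / min_depth - 1.0 / max_depth) * s
    return 1.0 / inv

def scores_to_segments(score_map):
    # score_map: C x H x W; the argmax of each pixel's score vector
    # yields the predicted class index per pixel.
    return score_map.argmax(dim=0)
```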
As to claim 6, claim 6 depends from claim 5, which is objected to as containing allowable subject matter; therefore, claim 6 is likewise objected to as allowable over the prior art of record.
As to claim 9, the prior art of record does not teach or suggest the limitation recited in claim 9: “wherein the depth estimation decoder outputs a response map with the dimension of 1 x H x W.”
As to claim 12, the prior art of record does not teach or suggest the limitation recited in claim 12: “…five sequential upsample block layers, each of the five sequential upsample block layers operable to perform depthwise convolution and pointwise convolution on the image received from the encoder; a sixth layer following the fifth layer, the sixth layer operable to perform a further pointwise convolution and a sigmoid function on the image; a seventh layer comprising logic operable to convert the sigmoid output of the image into a depth prediction; and the semantic segmentation decoder is a convolutional neural network comprising: five sequential upsample block layers, each of the five sequential upsample block layers operable to perform depthwise convolution and pointwise convolution on the image received from the encoder; a sixth layer following the fifth layer, the sixth layer operable to perform a further pointwise convolution on the image; and a seventh layer comprising logic operable to receive a score map from the sixth layer and subsequently to determine segments of the image by taking an argmax of each score pixel vector of the image.”
As to claims 13 and 14, claims 13 and 14 depend from claim 12, which is objected to as containing allowable subject matter; therefore, claims 13 and 14 are likewise objected to as allowable over the prior art of record.
As to claim 15, the prior art of record does not teach or suggest the limitation recited in claim 15: “wherein the depth loss and segmentation loss is optimised such that total loss is equivalent to 0.02 times the sum of the depth loss and the semantic segmentation loss.”
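As a worked illustration of the claim 15 weighting only, with hypothetical loss values:

```python
depth_loss, seg_loss = 0.8, 1.5                # hypothetical values
total_loss = 0.02 * (depth_loss + seg_loss)    # = 0.02 * 2.3 = 0.046
```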
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OMAR S ISMAIL whose telephone number is (571) 272-9799 and whose fax number is (571) 273-9799. The examiner can normally be reached Monday through Friday, 9:00 am-6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at
http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David C. Payne, can be reached at (571) 272-3024. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OMAR S ISMAIL/
Primary Examiner, Art Unit 2635