DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
Paragraphs [0015] and [0018] recite “received IM image”. It is unclear what “IM” stands for, and it is unclear whether the applicant is referring to the manufactured item (MI) image or to a different image. It is further unclear whether “IM” is a typo for “MI”. Appropriate correction is required.
Claim Objections
Claims 7 and 15 are objected to because of the following informalities:
Claim 7 recites “suppression process on bounding boxes”. Since the bounding boxes are defined previously in claim 1, it should read “suppression process on the bounding boxes” (emphasis added).
Claim 15 recites “suppression process on bounding boxes”. Since the bounding boxes are defined previously in claim 9, it should read “suppression process on the bounding boxes” (emphasis added).
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 – 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites the limitation "the pixel predictors" in the fifth limitation. There is insufficient antecedent basis for this limitation in the claim, as there is no prior definition of “pixel predictors”, and the limitation is unclear and confusing to one of ordinary skill in the art.
Claims 2 – 8 are rejected for being dependent on rejected base claim 1.
Claim 9 recites the limitation "the pixel predictors" in the fifth limitation. There is insufficient antecedent basis for this limitation in the claim, as there is no prior definition of “pixel predictors”, and the limitation is unclear and confusing to one of ordinary skill in the art.
Claims 10 – 16 are rejected for being dependent on rejected base claim 9.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1 – 3, 7 – 11 and 15 – 16 are rejected under 35 U.S.C. 103 as being unpatentable over Dey et al. (US 20230343078 A1; hereafter referred to as Dey) in view of Isken et al. (US 20220084181 A1; hereafter referred to as Isken).
Regarding Claim 1, Dey teaches:
A method for defect detection prediction with a compact set of prediction channels (Dey, [0038] “a computer-implemented training and prediction method for the classification, localization and instance segmentation of defects in image data is disclosed”), the method comprises:
obtaining a manufactured item (MI) image ([0049] “images obtained by an inspection tool, e.g. SEM images”; [0050] “The image data in the dataset relates to SEM raw images (1024 pixels×1024 pixels) of line patterns (32 nm pitch) on a photolithographically exposed resist wafer”);
generating, by a machine learning process, pixel predictions per multiple pixels of one or more feature maps related to the MI image (Dey, [0050] “Defects are distributed stochastically on the test structures undergoing SEM inspection, i.e. defect classes, locations and defect features such as area, length, pixel distribution are distributed randomly”; Dey, [0051] “The feature extraction modules of each learning structure may operate on the entire input image of the training set that has been currently applied to the machine learning model.... During the forward pass, the convolutional layers of the feature extraction modules generate a feature map for the applied input image of the training set”; Dey, [0069] “Each selected learning structure proposes predictions for the possible defect candidates, including the defect class, the precise localization of the defect instance relative to the input image (e.g. as indicated via a bounding box), and also the instance segmentation mask”);
wherein the pixel predictions consist essentially of a probability (P) of defect, bounding height (H) and bounding box width (W) (Dey, [0051] “A bounding box regressor may be trained to accurately predict the bounding box dimensions and offsets of the proposed region of interest”; Dey, [0053] “The detection module then determines the class label or class probability as well as the corresponding bounding box for each defect present in the region of interests proposed by the region proposal network for a given input image of the image dataset”; Dey, [0069] “More information in respect of the detected defect could be extracted, for example area of the defect, defect height or width, overall defect density defined as total area of all defects (belonging to all classes or per-class) divided by the total area of the actively processed mask (e.g. resist or etch mask), defect perimeter, defect diameter, and defect polygon shape”);
wherein the machine learning process was trained to (i) detect defects bounded by bounding boxes that have selected aspect ratios, and (ii) ignore defects bounded by bounding boxes that have non-selected aspect ratios (Dey, [0051] “The region proposal module acts directly on the generated feature map. The region proposal module generates anchor boxes of different scales and aspect ratios in each point of the feature map. A bounding box regressor may be trained to accurately predict the bounding box dimensions and offsets of the proposed region of interest from the bundle of anchor boxes associated with each scale factor and with each point in the feature map”; Dey, [0053] “The detection module then determines the class label or class probability as well as the corresponding bounding box for each defect present in the region of interests proposed by the region proposal network for a given input image of the image dataset. Defect-free images only contain background objects that are not forwarded by the region proposal network”; Dey, [0082] “a detection module adapted to detect defects in each one of the identified regions of interest in the input image and to predict a defect class associated with each one of the detected defects”);
While Dey teaches defects distribution with respect to defect classes and pixel distribution (Dey, [0050] “Defects are distributed stochastically on the test structures undergoing SEM inspection, i.e. defect classes, locations and defect features such as area, length, pixel distribution are distributed randomly”) and presents the final prediction to the user (Dey, [0069] “The final prediction may be presented to the user either in text form, e.g. XML file containing defect instances with annotating labels relating to the defect class, bounding box coordinates and a list or array of Boolean variables for each pixel in the bounding box, which indicate whether the corresponding pixel forms part of the instance segmentation mask, or may be presented visually, e.g. an annotated image file”), it fails to explicitly teach:
selecting, out of the multiple pixels, pixels based on values of at least one of the pixel predictors to provide a plurality of selected pixels;
determining, based on the selected pixels, suspected defect bounding boxes; and
responding to the determining.
In the same field of endeavor, Isken teaches:
selecting, out of the multiple pixels, pixels based on values of at least one of the pixel predictors to provide a plurality of selected pixels (Isken, [0052] “A coating defect semantic label may look like “bubble defect(s) identified at pixels {xy coordinates of all pixels in the digital image depicting a bubble defect}” or “cratering defect(s) identified at pixels {xy coordinates of all pixels in the digital image depicting a cratering defect}”. A coating defect semantic label identifies the location of one or more defect types identified in the image at the pixel level and hence allows a rough quantification of the extent of the defect”; Isken, [0054] “A coating defect instance label identifies individual instances of one or more defect types identified in the image and in addition identifies the pixel positions of these defect instances in the image”; Isken, [0055] “Identifying the approximate (bounding boxes) or detailed (pixel-based) location of defect types and/or defect type instances may have the advantage that the positional information can be further-processed easily by the defect-identification program, e.g. for computing the fraction of pixels of an image covered by the defect”);
determining, based on the selected pixels, suspected defect bounding boxes (Isken, [0236] “A measure of a defect can be obtained e.g. by analyzing the dimensions or other properties of the identified defects (e.g. based on surrounding bounding boxes and/or based on individual pixels”); and
responding to the determining (Isken, [0071] “the defects identification program performs the outputting of the computed characterization via the GUI or another output interface of the data processing system”; Isken, [0275] “The data obtained in step 104 may be output to a user and/or may be used internally by the defect-identification program for computing derivative data values, e.g. aggregated coating surface characterizations”; Isken, [0303] “The predictive model M1 (or further predictive models M1.2 comprised in the defect-identification program) will also learn correlations between pixel patterns, coating defects/coating surface characterizations and the additional data such as the components and/or amounts of the composition components, manufacturing process parameters and/or application process parameters”; Isken, [0305] “The trained model can be integrated in a defect-identification program 124 and used for automatically identifying coating defects depicted in digital images. The program 124 may comprise additional functionalities, e.g. a GUI 614 for assisting a user in acquiring images during training and/or test phase and/or for displaying the prediction result to a user, e.g. in the form of numerical values and/or segmented images”).
Dey and Isken are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dey with the method of Isken to arrive at an invention that selects the pixels based on values of at least one of the pixel predictors, determines the suspected defect bounding boxes, and responds to the determining; doing so can efficiently identify and characterize the defects (Isken, [0004] – [0005]); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 2, Dey in view of Isken teaches the method according to claim 1, wherein the selected aspect ratios are learnt during a supervised training process (Dey, [0044] “The objectness classifier and the bound box regressor associated with each anchor may be configured to distinguish, by supervised learning, foreground objects from the image background and to align and size a bounding box associated with the objects classified as foreground objects”).
Regarding Claim 3, Dey in view of Isken teaches the method according to claim 2, wherein the selected aspect ratios are learnt during a supervised training process by clustering aspect ratios of tagged defects in a training dataset (Isken, [0065] “Clustering: the method comprises separating an image into similar pixel segments with clustering techniques such as k-means”).
Regarding Claim 7, Dey in view of Isken teaches the method according to claim 1, wherein the determining comprises applying a non-maximum suppression process on bounding boxes associated with the selected pixels (Dey, [0044] “Non-maximum suppression (NMS) may be applied to reduce the number of proposals and only a predetermined number of top-ranked proposals (by objectness classification score) may be used as inputs to the respective region pooling module”).
Regarding Claim 8, Dey in view of Isken teaches the method according to claim 1, wherein the one or more feature maps are multiple feature maps associated with different spatial resolutions (Dey, [0042] “The region proposal modules are adapted to output a collection of regions of interest, based on the feature map of the respective feature extractor module as input. In other words, the region proposal modules act directly on the feature maps generated by the extractors”; see also Fig. 2 and Fig. 3, which show different resolutions).
Regarding Claim 9, Dey teaches:
A non-transitory computer readable medium for defect detection prediction with a compact set of prediction channels (Dey, [0038] “a computer-implemented training and prediction method for the classification, localization and instance segmentation of defects in image data is disclosed”; Dey, [0088] “computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware”), the non-transitory computer readable medium stores instructions that cause a processor to:
receive a manufactured item (MI) image ([0049] “images obtained by an inspection tool, e.g. SEM images”; [0050] “The image data in the dataset relates to SEM raw images (1024 pixels×1024 pixels) of line patterns (32 nm pitch) on a photolithographically exposed resist wafer”);
generate, by applying a machine learning process, pixel predictions per multiple pixels of one or more feature maps related to the MI image (Dey, [0050] “Defects are distributed stochastically on the test structures undergoing SEM inspection, i.e. defect classes, locations and defect features such as area, length, pixel distribution are distributed randomly”; Dey, [0051] “The feature extraction modules of each learning structure may operate on the entire input image of the training set that has been currently applied to the machine learning model.... During the forward pass, the convolutional layers of the feature extraction modules generate a feature map for the applied input image of the training set”; Dey, [0069] “Each selected learning structure proposes predictions for the possible defect candidates, including the defect class, the precise localization of the defect instance relative to the input image (e.g. as indicated via a bounding box), and also the instance segmentation mask”);
wherein the pixel predictions consist essentially of a probability (P) of defect, bounding height (H) and bounding box width (W) (Dey, [0051] “A bounding box regressor may be trained to accurately predict the bounding box dimensions and offsets of the proposed region of interest”; Dey, [0053] “The detection module then determines the class label or class probability as well as the corresponding bounding box for each defect present in the region of interests proposed by the region proposal network for a given input image of the image dataset”; Dey, [0069] “More information in respect of the detected defect could be extracted, for example area of the defect, defect height or width, overall defect density defined as total area of all defects (belonging to all classes or per-class) divided by the total area of the actively processed mask (e.g. resist or etch mask), defect perimeter, defect diameter, and defect polygon shape”);
wherein the machine learning process was trained to (i) detect defects bounded by bounding boxes that have selected aspect ratios, and (ii) ignore defects bounded by bounding boxes that have non-selected aspect ratios (Dey, [0051] “The region proposal module acts directly on the generated feature map. The region proposal module generates anchor boxes of different scales and aspect ratios in each point of the feature map. A bounding box regressor may be trained to accurately predict the bounding box dimensions and offsets of the proposed region of interest from the bundle of anchor boxes associated with each scale factor and with each point in the feature map”; Dey, [0053] “The detection module then determines the class label or class probability as well as the corresponding bounding box for each defect present in the region of interests proposed by the region proposal network for a given input image of the image dataset. Defect-free images only contain background objects that are not forwarded by the region proposal network”; Dey, [0082] “a detection module adapted to detect defects in each one of the identified regions of interest in the input image and to predict a defect class associated with each one of the detected defects”);
While Dey teaches defects distribution with respect to defect classes and pixel distribution (Dey, [0050] “Defects are distributed stochastically on the test structures undergoing SEM inspection, i.e. defect classes, locations and defect features such as area, length, pixel distribution are distributed randomly”) and presents the final prediction to the user (Dey, [0069] “The final prediction may be presented to the user either in text form, e.g. XML file containing defect instances with annotating labels relating to the defect class, bounding box coordinates and a list or array of Boolean variables for each pixel in the bounding box, which indicate whether the corresponding pixel forms part of the instance segmentation mask, or may be presented visually, e.g. an annotated image file”), it fails to explicitly teach:
select, out of the multiple pixels, pixels based on values of at least one of the pixel predictors to provide a plurality of selected pixels;
determine, based on the selected pixels, suspected defect bounding boxes; and
participate in a response to the determining.
In the same field of endeavor, Isken teaches:
select, out of the multiple pixels, pixels based on values of at least one of the pixel predictors to provide a plurality of selected pixels (Isken, [0052] “A coating defect semantic label may look like “bubble defect(s) identified at pixels {xy coordinates of all pixels in the digital image depicting a bubble defect}” or “cratering defect(s) identified at pixels {xy coordinates of all pixels in the digital image depicting a cratering defect}”. A coating defect semantic label identifies the location of one or more defect types identified in the image at the pixel level and hence allows a rough quantification of the extent of the defect”; Isken, [0054] “A coating defect instance label identifies individual instances of one or more defect types identified in the image and in addition identifies the pixel positions of these defect instances in the image”; Isken, [0055] “Identifying the approximate (bounding boxes) or detailed (pixel-based) location of defect types and/or defect type instances may have the advantage that the positional information can be further-processed easily by the defect-identification program, e.g. for computing the fraction of pixels of an image covered by the defect”);
determine, based on the selected pixels, suspected defect bounding boxes (Isken, [0236] “A measure of a defect can be obtained e.g. by analyzing the dimensions or other properties of the identified defects (e.g. based on surrounding bounding boxes and/or based on individual pixels”); and
participate in a response to the determining (Isken, [0071] “the defects identification program performs the outputting of the computed characterization via the GUI or another output interface of the data processing system”; Isken, [0275] “The data obtained in step 104 may be output to a user and/or may be used internally by the defect-identification program for computing derivative data values, e.g. aggregated coating surface characterizations”; Isken, [0303] “The predictive model M1 (or further predictive models M1.2 comprised in the defect-identification program) will also learn correlations between pixel patterns, coating defects/coating surface characterizations and the additional data such as the components and/or amounts of the composition components, manufacturing process parameters and/or application process parameters”; Isken, [0305] “The trained model can be integrated in a defect-identification program 124 and used for automatically identifying coating defects depicted in digital images. The program 124 may comprise additional functionalities, e.g. a GUI 614 for assisting a user in acquiring images during training and/or test phase and/or for displaying the prediction result to a user, e.g. in the form of numerical values and/or segmented images”).
Dey and Isken are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dey with the method of Isken to arrive at an invention that selects the pixels based on values of at least one of the pixel predictors, determines the suspected defect bounding boxes, and participates in a response to the determining; doing so can efficiently identify and characterize the defects (Isken, [0004] – [0005]); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 10, Dey in view of Isken teaches the non-transitory computer readable medium according to claim 9, wherein the selected aspect ratios are learnt during a supervised training process (Dey, [0044] “The objectness classifier and the bound box regressor associated with each anchor may be configured to distinguish, by supervised learning, foreground objects from the image background and to align and size a bounding box associated with the objects classified as foreground objects”).
Regarding Claim 11, Dey in view of Isken teaches the non-transitory computer readable medium according to claim 10, wherein the selected aspect ratios are learnt during a supervised training process by clustering aspect ratios of tagged defects in a training dataset (Isken, [0065] “Clustering: the method comprises separating an image into similar pixel segments with clustering techniques such as k-means”).
Regarding Claim 15, Dey in view of Isken teaches the non-transitory computer readable medium according to claim 11, wherein the determining comprises applying a non-maximum suppression process on bounding boxes associated with the selected pixels (Dey, [0044] “Non-maximum suppression (NMS) may be applied to reduce the number of proposals and only a predetermined number of top-ranked proposals (by objectness classification score) may be used as inputs to the respective region pooling module”).
Regarding Claim 16, Dey in view of Isken teaches the non-transitory computer readable medium according to claim 9, wherein the one or more feature maps are multiple feature maps associated with different spatial resolutions (Dey, [0042] “The region proposal modules are adapted to output a collection of regions of interest, based on the feature map of the respective feature extractor module as input. In other words, the region proposal modules act directly on the feature maps generated by the extractors”; see also Fig. 2 and Fig. 3, which show different resolutions).
Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Dey et al. (US 20230343078 A1; hereafter referred to as Dey) in view of Isken et al. (US 20220084181 A1; hereafter referred to as Isken) further in view of Wang et al. (US 20220020175 A1; hereafter referred to as Wang).
Regarding Claim 4, Dey in view of Isken teaches the method according to claim 3, but fails to explicitly teach:
wherein the selected aspect ratios belong to the largest clusters.
In the same field of endeavor, Wang teaches:
wherein the selected aspect ratios belong to the largest clusters (Wang, [0076] “Correspondingly, for each first feature point, the candidate bounding box, whose overlapping region with the foreground image region is the largest, of the multiple candidate bounding boxes corresponding to the first feature point may be determined as the object candidate bounding box corresponding to the first feature point. That is, the candidate bounding box with the highest confidence coefficient corresponding to each first feature point is retained”).
Dey, Isken and Wang are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dey in view of Isken with the method of Wang to arrive at an invention in which the selected aspect ratios belong to the largest clusters; doing so can increase the processing speed while ensuring that each feature point corresponds to only one refined candidate bounding box (Wang, [0076]); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 12, Dey in view of Isken teaches the non-transitory computer readable medium according to claim 11, but fails to explicitly teach:
wherein the selected aspect ratios belong to the largest clusters.
In the same field of endeavor, Wang teaches:
wherein the selected aspect ratios belong to the largest clusters (Wang, [0076] “Correspondingly, for each first feature point, the candidate bounding box, whose overlapping region with the foreground image region is the largest, of the multiple candidate bounding boxes corresponding to the first feature point may be determined as the object candidate bounding box corresponding to the first feature point. That is, the candidate bounding box with the highest confidence coefficient corresponding to each first feature point is retained”).
Dey, Isken and Wang are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dey in view of Isken with the method of Wang to arrive at an invention in which the selected aspect ratios belong to the largest clusters; doing so can increase the processing speed while ensuring that each feature point corresponds to only one refined candidate bounding box (Wang, [0076]); thus, one of ordinary skill in the art would have been motivated to combine the references.
Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Dey et al. (US 20230343078 A1; hereafter referred to as Dey) in view of Isken et al. (US 20220084181 A1; hereafter referred to as Isken) further in view of Sun et al. (See Machine Translation for CN 111695482 A; hereafter referred to as Sun).
Regarding Claim 6, Dey in view of Isken teaches the method according to claim 1, but fails to explicitly teach:
wherein determining comprises setting centers of the suspected defect bounding boxes at centers of the selected pixels.
In the same field of endeavor, Sun teaches:
wherein determining comprises setting centers of the suspected defect bounding boxes at centers of the selected pixels (Sun, Page 5, para 6, “The bounding box information is the deviation of the center position of the defect relative to the grid position and the width and height. The confidence level reflects whether the defect is included and the accuracy of the position in the case of the defect.”).
Dey, Isken and Sun are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dey in view of Isken with the method of Sun to arrive at an invention that sets the centers of the suspected defect bounding boxes at the centers of the selected pixels; doing so can efficiently predict and position the defects (Sun, page 2, last para); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 14, Dey in view of Isken teaches the non-transitory computer readable medium according to claim 9, but fails to explicitly teach:
wherein determining comprises setting centers of the suspected defect bounding boxes at centers of the selected pixels.
In the same field of endeavor, Sun teaches:
wherein determining comprises setting centers of the suspected defect bounding boxes at centers of the selected pixels (Sun, Page 5, para 6, “The bounding box information is the deviation of the center position of the defect relative to the grid position and the width and height. The confidence level reflects whether the defect is included and the accuracy of the position in the case of the defect.”).
Dey, Isken and Sun are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Dey in view of Isken with the method of Sun to arrive at an invention that sets the centers of the suspected defect bounding boxes at the centers of the selected pixels; doing so can efficiently predict and position the defects (Sun, page 2, last para); thus, one of ordinary skill in the art would have been motivated to combine the references.
Allowable Subject Matter
Claims 5 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form, including all of the limitations of the base claim and any intervening claims, and upon overcoming the rejection of the claims under 35 U.S.C. 112(b) set forth in this Office action.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20220391641 A1 Defect Detection System
US 20210056708 A1 TARGET DETECTION AND TRAINING FOR TARGET DETECTION NETWORK
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISALI RAO KOPPOLU whose telephone number is (571)270-0273. The examiner can normally be reached Monday - Friday 8:30 - 5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VAISALI RAO KOPPOLU
Examiner
Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664