Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Applicant’s amendment filed on January 02, 2026, is acknowledged. Claims 1-9, 11-24, and 26-28 are currently pending. Claims 1, 11, and 18 have been amended.
Response to Amendments
Applicant’s remarks and amendments filed January 02, 2026, have been entered. Applicant’s arguments regarding the 35 U.S.C. 112(f) claim interpretation previously set forth in the Non-Final Office Action mailed October 02, 2025, are persuasive. Accordingly, the 35 U.S.C. 112(f) claim interpretation is withdrawn.
Applicant’s arguments regarding the 35 U.S.C. 112(a) rejection previously set forth in the Non-Final Office Action mailed October 02, 2025, are persuasive. Accordingly, the 35 U.S.C. 112(a) rejection is withdrawn.
Response to Arguments
Applicant’s arguments, filed January 02, 2026, with respect to the rejections under 35 U.S.C. 102(a)(1) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made over Sapiro in view of Bullitt.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 18-20, 24, and 26 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 18 recites the limitation “wherein the above steps do not require manual intervention, human created atlases, machine learning, or trained data sets”. The Examiner respectfully asserts that this limitation renders the claim indefinite because Applicant fails to explain how each of the disclosed steps of claim 18 can be performed without manual intervention, human created atlases, machine learning, or trained data sets, and the specification provides no support for how these steps are performed without them. As previously noted in the Office Action mailed October 02, 2025, the Examiner cited paragraphs [07], [071], and [094] of Applicant’s specification, all of which teach how the disclosed steps of claim 18 are performed with manual intervention, human created atlases, machine learning, or trained data sets; Applicant does not offer the same degree of description for how the steps can be performed without them. The Examiner respectfully suggests that Applicant amend claim 18 to reflect how the disclosed steps are performed without manual intervention, human created atlases, machine learning, or trained data sets. The Examiner also respectfully suggests that Applicant amend claim 18 to reflect the condition or requirement that delineates when the steps of claim 18 must be performed with or without manual intervention, human created atlases, machine learning, or trained data sets.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-6, 8-9, 11-24, and 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over Sapiro et al., US 20210118549 A1, (hereinafter “Sapiro”) in view of Bullitt et al., US 8090164 B2, (hereinafter “Bullitt”).
Regarding claim 1, Sapiro teaches a method for automated analysis of data obtained from any number of biologic materials and any number of visualizations, comprising:
extracting by a processor coupled to a memory using image processing ([0066] “The processor 200 is connected to a communication infrastructure 201, for example, a bus, message queue, network, or multi-core message-passing scheme.”) ([0067] “The electronic device also includes a main memory 202, such as random access memory (RAM), and may also include a secondary memory 203. “), from a visualization of a biologic material, a first shape ([0088] “In some embodiments, the atlas generation component 403 identifies particular features of the depicted brain. The atlas generation component 403 augments the patient-specific data by selecting at least one of the images in the set of brain images, identifying a region of interest in the at least one selected image, and removing all image data except the region of interest from the at least one selected image.” wherein a visualization of a biological material is the set of brain images; a first shape is a feature of the imaged brain);
extracting by the processor ([0066] “The processor 200 is connected to a communication infrastructure 201, for example, a bus, message queue, network, or multi-core message-passing scheme.”), from a second visualization of the same or different biologic material, a second shape ([0083] “Non-rigid transformations may be used to lineup features or landmarks in the image being registered with similar features of other images.” wherein image comparison is non-rigid transformations and a second visualization is other images) ([0083] “In some embodiments, a user directs the electronic device to align at least one feature of the imaged brain with the feature of another imaged brain.” wherein the second shape is the feature of another imaged brain);
performing shape analysis, by the processor ([0066] “The processor 200 is connected to a communication infrastructure 201, for example, a bus, message queue, network, or multi-core message-passing scheme.”), creating structured outputs, unstructured outputs, or structured and unstructured outputs from each extracted shape to characterize the shape and enable comparison with other shapes ([0101] “Common statistical shape models are based on a point distribution model (PDM) which represents a shape by a set of landmark points (i.e., vertices in a mesh) distributed along its surface, and models the shape variation. Particularly, the correspondence among landmark points of training shapes is required in order to capture variations of each shape and build a regression model for shape prediction.” wherein structured outputs and unstructured outputs from each extracted shape are landmark points of each shape);
registering by the processor ([0066] “The processor 200 is connected to a communication infrastructure 201, for example, a bus, message queue, network, or multi-core message-passing scheme.”), the first shape to the second shape ([0083] “In some embodiments, a user directs the electronic device to align at least one feature of the imaged brain with the feature of another imaged brain. The registration may involve a combination of automated and manual processes. For instance, the registration of one image may automatically be modified to line up certain features of the image with those of another image,” wherein the first shape is one feature of the imaged brain and the second shape is the feature of another imaged brain);
iterating by the processor ([0066] “The processor 200 is connected to a communication infrastructure 201, for example, a bus, message queue, network, or multi-core message-passing scheme.”) the extracting and registering steps in any order on the same or other visualizations or unstructured data with additional shapes until all of the desired shapes are extracted and/or registered ([0088] “In some embodiments, the atlas generation component 403 augments the patient-specific data by selecting at least one of the images in the set of brain images, identifying a region of interest in the at least one selected image, and removing all image data except the region of interest from the at least one selected image. For example, the atlas generation component 403 may, after registering an image from the database to the patient image, eliminate all but the STN or relevant shapes from the database image, and combine only the image as so modified with the patient image to produce the patient-specific atlas.” wherein the extracting and registering steps are applied for all STN or relevant shapes from the database image);
building a compendium from the visualizations, shapes, and/or data ([0088] “In some embodiments, the atlas generation component 403 augments the patient-specific data by selecting at least one of the images in the set of brain images, identifying a region of interest in the at least one selected image, and removing all image data except the region of interest from the at least one selected image. For example, the atlas generation component 403 may, after registering an image from the database to the patient image, eliminate all but the STN or relevant shapes from the database image, and combine only the image as so modified with the patient image to produce the patient-specific atlas.” wherein a compendium is the atlas);
identifying by the processor ([0066] “The processor 200 is connected to a communication infrastructure 201, for example, a bus, message queue, network, or multi-core message-passing scheme.”) variations between the outputs or the extracted shapes ([0101] “Particularly, the correspondence among landmark points of training shapes is required in order to capture variations of each shape and build a regression model for shape prediction.” wherein outputs or the extracted shapes are training shapes); and
displaying a target shape visualization containing the extracted shapes and the outputs or the variations associated with the desired shapes ([0089] “Referring again to FIG. 1, the method 100 also includes displaying, by the electronic device, the patient-specific atlas (103).”) ([0088] “In some embodiments, the atlas generation component 403 augments the patient-specific data by selecting at least one of the images in the set of brain images, identifying a region of interest in the at least one selected image, and removing all image data except the region of interest from the at least one selected image. For example, the atlas generation component 403 may, after registering an image from the database to the patient image, eliminate all but the STN or relevant shapes from the database image, and combine only the image as so modified with the patient image to produce the patient-specific atlas.” wherein the target shape visualization is the atlas) as a three dimensional representation ([0046] “Visualization of these regions in three dimensions will provide the surgical DBS targeting and post-surgery programming more reliable and effective”).
Sapiro does not specifically disclose using either rigid or deformable registration methods or a combination of both registration methods.
However, Bullitt teaches using either rigid or deformable registration methods or a combination of both registration methods ([Col.8, lines 52-60] “For example, the mapping method used to map vessels to each other may be a rigid mapping method, whereby an object is mapped to another object without changing the shape of either object, an affine mapping method, whereby the outer boundaries of one object are scaled and mapped to those of another object, or a fully deformable mapping method, whereby internal features of an object are scaled and mapped to internal features of another object.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use rigid or deformable registration of Bullitt in the automated biologic material analysis method of Sapiro because rigid registration allows for enhanced accuracy while deformable registration enables automatic adjustment without manual intervention. Both of these registration techniques improve the overall efficiency of shape registration for the compendium or atlas.
Regarding claim 2, Sapiro in view of Bullitt discloses the method of claim 1, wherein the iterating registering of all of the desired shapes and visualizations comprises:
aligning a previously registered visualization and/or shape and a shape and/or visualization to be registered based on a first optimization function (Sapiro - [0083, lines 31-32], [0104, lines 3-7], [0106, lines 1-4]); and
deforming at least one of the previously registered visualization and/or shape and each shape and/or visualization to be registered based on a second optimization function (Sapiro - [0083, lines 19-26], [0101, lines 19-25]).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 3, Sapiro in view of Bullitt discloses the method of claim 1, further comprising, prior to the extracting operation, processing input data associated with the visualization by:
rotating (Sapiro - [0083, lines 10-11]) the visualization to a standard orientation;
homogenizing (Sapiro - [0086, lines 10-32]) an intensity across the image; and/or
eliminating artifacts (Sapiro - [0088, lines 9-12]).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 4, Sapiro in view of Bullitt teaches the method of claim 1, further comprising validating the registration by comparing an extracted feature from the visualization to a further extracted feature of a further visualization (Sapiro - [0138, lines 1-8]).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 5, Sapiro in view of Bullitt teaches the method of claim 1, wherein the identifying operation comprises: identifying local changes within the first shape (Sapiro - [0088], lines 52-58) and the second shape (Sapiro - [0022, lines 1-6]); and evaluating the registering using a similarity metric (mutual information similarity metric) (Sapiro - [0099, lines 4-12]).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 6, Sapiro in view of Bullitt discloses the method of claim 1,
wherein the unstructured outputs associated with each shape is displayed in a visualization representation of the target shape in layers selectively displayable by a user (Sapiro - [0085]);
wherein the unstructured outputs associated with each shape comprises name, function, and connection identifications (Sapiro - [0081], component 402); and
wherein the variations between the first shapes and the second shapes are displayed in the target visualization and are identified as abnormal based on a model (Bullitt - [0045] “Measured blood vessel attributes for the individual subject are then compared to statistical measures in the atlas for the corresponding anatomical region. The results of the comparison may indicate how the individual's blood vessel attribute measurements compare to those of the population. For instance, one comparison of interest for a particular attribute may indicate the number of standard deviations between the measurement for the individual subject and the mean value of the attribute measurement for the population. In step 212, based on the comparison, output module 110 may output data indicative of a physical characteristic of the subject. For example, the output may indicate the location of a vessel abnormality in the subject.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Sapiro to identify the variations as abnormalities based on a model, such as statistical measures in the atlas, as suggested by Bullitt. The motivation/suggestion would be to define as abnormalities any variations identified between a target shape and a generic shape. Therefore, it would have been obvious to combine Bullitt with Sapiro to obtain the method specified in claim 6.
Regarding claim 8, Sapiro in view of Bullitt teaches the method of claim 1, wherein:
at least some of the shapes are volumetric objects (Sapiro - Abstract, [0132]).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 9, Sapiro in view of Bullitt teaches the method of claim 1, wherein:
at least one of the shapes is received from an atlas (Sapiro - [0088 lines 1-12]); and
the visualization is obtained by at least one of Magnetic Resonance Imaging, Computerized Tomography scan, and a radiologic scan (Sapiro - [0073, lines 13-18]).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 11, the claim recites limitations similar to those of claim 1 but in the form of a system. Claim 11 is therefore rejected under similar rationale and reasoning (see the analysis for claim 1 above).
Regarding claim 12, the claim recites limitations similar to those of claim 2 but in the form of a system. Claim 12 is therefore rejected under similar rationale and reasoning (see the analysis for claim 2 above).
Regarding claim 13, the claim recites limitations similar to those of claim 3 but in the form of a system. Claim 13 is therefore rejected under similar rationale and reasoning (see the analysis for claim 3 above).
Regarding claim 14, the claim recites limitations similar to those of claim 4 but in the form of a system. Claim 14 is therefore rejected under similar rationale and reasoning (see the analysis for claim 4 above).
Regarding claim 15, the claim recites limitations similar to those of claim 5 but in the form of a system. Claim 15 is therefore rejected under similar rationale and reasoning (see the analysis for claim 5 above).
Regarding claim 16, the claim recites limitations similar to those of claim 6 but in the form of a system. Claim 16 is therefore rejected under similar rationale and reasoning (see the analysis for claim 6 above).
Regarding claim 17, Sapiro in view of Bullitt teaches the system of claim 11, further comprising:
an atlas adapted to provide the generic shape (Sapiro - [0088 lines 1-12]); and
a database for storing the visualization of the target shape (Sapiro - [0077]), the visualization being obtained by one of Magnetic Resonance Imaging, Computerized Tomography scan, and a radiologic scan (Sapiro - [0073, lines 13-18]).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 18, the claim recites similar limitations to claim 1 (see the analysis for claim 1 above) but in the form of a non-transitory computer-readable medium ([0067]). Claim 18 further recites the following limitation: “wherein the above steps do not require manual intervention, human created atlases, machine learning, or trained data sets”.
Sapiro does not specifically disclose wherein the above steps do not require manual intervention, human created atlases, machine learning, or trained data sets.
However, Bullitt teaches wherein the above steps do not require manual intervention, human created atlases, machine learning, or trained data sets ([Col. 2, lines 11-13] “Methods, systems, and computer programs products are disclosed for analyzing blood vessel attributes for diagnosis, disease staging, and surgical planning.”) ([Col.7, lines 29-35] “Segmentation methods suitable for use with embodiments of the present invention include automated methods, semi-automated methods, and manual methods. Automated methods are ideal because the methods do not vary by user. In an automated method, blood vessel image data is input into a computer and the computer identifies blood vessels in an image.” wherein the methods can be fully automated and do not vary by user as they are entirely operated by a computer program).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the methods of Sapiro with the computer program of Bullitt so that the methods are automated and do not vary by user.
Regarding claim 19, Sapiro in view of Bullitt discloses the non-transitory computer-readable medium of claim 18, wherein claim 19 recites similar limitations to claims 2-3 and is therefore rejected for similar rationale and reasoning (see the analysis for claims 2-3 above).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 18.
Regarding claim 20, Sapiro in view of Bullitt discloses the non-transitory computer-readable medium of claim 18, wherein claim 20 recites similar limitations to claim 6 and is therefore rejected for similar rationale and reasoning (see the analysis for claim 6 above).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 18.
Regarding claim 21, Sapiro in view of Bullitt teaches the method of claim 1, wherein the step of identifying identifies variations between corresponding outputs, corresponding shapes, or corresponding outputs and corresponding shapes of the first shapes and the extracted shapes (Sapiro - [0101 lines 11-17] wherein corresponding outputs from each extracted shape are landmark points of each shape; wherein the step of identifying is identifying variations) (Sapiro - [0088 lines 52-58] wherein corresponding shapes of the first shapes are features of the imaged brain).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 22, Sapiro in view of Bullitt teaches the method of claim 1, further including a step of performing shape comparisons to compare the shape-analyzed data to generated objects and/or graphs and consensus objects and/or graphs (Sapiro - [0088 lines 52-63] wherein the shape-analyzed data is the patient image and the generated objects and/or graphs are the database images).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 23, Sapiro in view of Bullitt teaches the method of claim 1, further including a step of performing shape comparisons to compare the shape-analyzed data to other shape-analyzed data (Sapiro - [0083 lines 53-55]).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 24, Sapiro in view of Bullitt discloses the method of claim 1, wherein the extracting step can be performed without manual intervention and is not based on any data that was trained during machine learning or annotated by humans (Sapiro - [0083 lines 40-43]) (Bullitt - [Col.7, lines 29-35] “Segmentation methods suitable for use with embodiments of the present invention include automated methods, semi-automated methods, and manual methods. Automated methods are ideal because the methods do not vary by user. In an automated method, blood vessel image data is input into a computer and the computer identifies blood vessels in an image.” wherein the methods can be fully automated and do not vary by user as they are entirely operated by a computer program).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 26, Sapiro in view of Bullitt discloses the method of claim 1, wherein the iterating step can be performed without manual intervention and is not based on any data that was trained during machine learning or annotated by humans (Sapiro - [0088] “In some embodiments, the atlas generation component 403 augments the patient-specific data by selecting at least one of the images in the set of brain images, identifying a region of interest in the at least one selected image, and removing all image data except the region of interest from the at least one selected image. For example, the atlas generation component 403 may, after registering an image from the database to the patient image, eliminate all but the STN or relevant shapes from the database image, and combine only the image as so modified with the patient image to produce the patient-specific atlas.”) (Bullitt - [Col.7, lines 29-35] “Segmentation methods suitable for use with embodiments of the present invention include automated methods, semi-automated methods, and manual methods. Automated methods are ideal because the methods do not vary by user. In an automated method, blood vessel image data is input into a computer and the computer identifies blood vessels in an image.” wherein the methods can be fully automated and do not vary by user as they are entirely operated by a computer program).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 27, Sapiro in view of Bullitt teaches the method of claim 1, further including a step of aggregating many images, along with corresponding structured or unstructured data, of that structure to create a reference atlas for that structure (Sapiro - [0101 lines 11-17]) (Sapiro - [0088 lines 52-58]) (Sapiro - [0088 lines 1-12]).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Regarding claim 28, Sapiro in view of Bullitt teaches the method of claim 1, wherein the iterating step extracts and registers unstructured data to structured data (Sapiro - [0088 lines 1-12]) (Sapiro - [0083 lines 53-55]).
The motivation for combining Sapiro and Bullitt is the same motivation as used for claim 1.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sapiro et al., US 20210118549 A1, (hereinafter “Sapiro”) in view of Bullitt et al., US 8090164 B2, (hereinafter “Bullitt”), and further in view of Zhang et al., "Probabilistic graphlet cut: Exploiting spatial structure cue for weakly supervised image segmentation," 2013, (hereinafter “Zhang”).
Regarding claim 7, Sapiro in view of Bullitt teaches the method of claim 1, disclosing at least some of the shapes (Sapiro - [0017, lines 1-5]) (Sapiro - [0083, lines 38-47]).
Sapiro in view of Bullitt does not specifically disclose graphlets.
However, Zhang teaches graphlets ([3.1 An overview] “As shown in Figure 2, the proposed approach learns the distribution of graphlets [23] and then facilitates image segmentation based on the learned graphlet distribution. We first extract graphlets from each image, which capture the spatial structure of the superpixels.” wherein the extraction and segmentation processes are repeated for every image).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sapiro in view of Bullitt to incorporate first graphlets, the first graphlets comprising first nodes and first segments; and second graphlets, the second graphlets comprising second nodes and second segments, as taught by Zhang. The motivation would be to provide a volumetric segmentation method capable of improved image segmentation and spatial layout analysis for the first and second shapes. Therefore, it would have been obvious to combine Zhang with Sapiro in view of Bullitt to obtain the method specified in claim 7.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMANDA PEARSON whose telephone number is (703) 756-5786. The examiner can normally be reached Monday - Friday 8:00 - 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMANDA H PEARSON/Examiner, Art Unit 2666
/MING Y HON/Primary Examiner, Art Unit 2666