DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see page 9, filed 12/18/2025, with respect to the specification objection have been fully considered and are persuasive. The objection to the specification has been withdrawn.
Applicant’s arguments, see pages 9-13, filed 12/18/2025, with respect to the 35 U.S.C. 101 rejections have been fully considered and are persuasive. The 35 U.S.C. 101 rejection of the claims has been withdrawn.
Applicant’s arguments with respect to claims 1-10 and 18-21 have been considered but are moot because the new ground of rejection does not rely on all references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The remarks state that the applied references do not disclose the features of “wherein the set of medical training images are associated with a target domain; wherein the machine-learned medical image analysis model was pretrained with self-supervised pretraining on unlabeled natural images; and performing supervised fine-tuning on the machine-learned medical image analysis model using a set of labeled medical images”. The newly added Lisowska reference is applied to cure the deficiencies of the previously applied references, as explained below.
The primary reference discloses on page 3 a model that is trained using images within a target domain of scans of tissue. In addition, the reference discloses fine-tuning the model using labeled images on page 3 and in the appendix on page 8. The reference further discloses training on unlabeled patches on page 3 in the second paragraph. However, these images are not necessarily considered natural images. Lisowska discloses training using text data that is related to the medical information of the user. This is considered as natural images that can be captured in a real-world scene, as disclosed in ¶ [28], [29] and [45]. This reference cures the deficiencies of the previously applied references. Therefore, the combination of references performs the above features of the claims.
Thus, based on the above, the features of the claims are addressed in the rejections below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 7-10 and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al (NPL document titled “Semi-Supervised Histology Classification using Deep Multiple Instance Learning and Contrastive Predictive Coding”) in view of Li et al (US Pub 2021/0374553) and Lisowska (US Pub 2021/0241037).
Re claim 1: Lu et al discloses a computing system to perform multi-instance contrastive learning for improved analysis of medical imagery, the operations comprising:
obtaining, by the computing system, a set of medical training images that comprises a plurality of patient-specific image subsets, wherein each patient-specific image subset contains a plurality of different images that depict a same respective patient (e.g. the system obtains patch instances from segmented tissue of a patient, which is considered as a plurality of different images of the same patient’s tissue. This is taught on page 3 section Experiments and Results and page 7 in Appendix A.); and
wherein the set of medical training images are associated with a target domain (e.g. the medical training images are associated with a target domain since these images are associated with the tissue of a patient or a human body, which is taught on page 3.);
for each of the plurality of patient-specific image subsets:
obtaining, by the computing system, a first medical image that depicts a patient and a second, different medical image that depicts the same patient (e.g. the various patches can be considered as different medical images that depict a distinct view of the patient. This is described on page 3 in lines 1-5 and is also introduced in Section 2 titled method in the Contrastive Predictive Coding section.);
processing, by the computing system, the first medical image with a machine- learned medical image analysis model to generate a first embedding for the first medical image (e.g. the invention discloses processing a data sequence with a feature network that encodes an observation considered as a first image, which is taught on page 2 in section 2 under method the Contrastive Predictive Coding (CPC) section. In addition, the Deep Attention-based MIL contains a CNN that encodes instances.);
wherein the machine-learned medical image analysis model was pretrained with self-supervised pretraining on unlabeled natural images (e.g. on page 3, the system discloses pretraining the network on unlabeled instances using CPC);
processing, by the computing system, the second medical image with the machine-learned medical image analysis model to generate a second embedding for the second medical image (e.g. the invention discloses processing a data sequence of a second observation by an encoding into an embedding. This is taught on page 2 in section 2 under method the Contrastive Predictive Coding (CPC) section.); and
modifying, by the computing system, one or more values of one or more parameters of the machine-learned medical image analysis model based at least in part on a loss function that evaluates a difference between the first embedding for the first medical image and the second embedding for the second medical image (e.g. the different patches of the image that are processed by the feature network are then compared to one another as either a positive or negative sample in order to determine a contrastive loss that is used to maximize the common information between context and other observations, which is taught on page 2 in section 2 under method the Contrastive Predictive Coding (CPC) section.); and
performing supervised fine-tuning on the machine-learned medical image analysis model using a set of labeled medical images (e.g. the fine-tuning occurs using a bag including patches of images with labels. This is taught in the MIL Implementation section on page 3 and in Appendix C: MIL Implementation on page 8.).
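For context, the contrastive pretraining flow mapped above (embedding two views of the same patient and evaluating a loss on the difference between the embeddings) can be sketched as follows. This is an illustrative sketch only, not code from any applied reference; the encoder, its weights, and all variable names are hypothetical stand-ins.

```python
import numpy as np

def embed(image, weights):
    # Toy "encoder": a linear map followed by L2 normalization, standing in
    # for the feature network that encodes an observation into an embedding.
    z = image.flatten() @ weights
    return z / np.linalg.norm(z)

def contrastive_loss(z1, z2, negatives, temperature=0.1):
    # Score the positive pair (two views associated with the same patient)
    # against negative samples from other patients; the loss is minimized
    # when the two embeddings of the same patient agree.
    pos = np.exp(np.dot(z1, z2) / temperature)
    neg = sum(np.exp(np.dot(z1, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 16))              # hypothetical encoder parameters
view1 = rng.normal(size=(8, 8))                  # first medical image (one view)
view2 = view1 + 0.05 * rng.normal(size=(8, 8))   # second view of the same patient
others = [rng.normal(size=(8, 8)) for _ in range(4)]  # images of other patients

z1, z2 = embed(view1, weights), embed(view2, weights)
loss = contrastive_loss(z1, z2, [embed(o, weights) for o in others])
# A gradient step on `weights` against this loss would be the claimed
# "modifying one or more parameters" step; supervised fine-tuning would
# then continue training on labeled images.
```

The sketch omits the gradient update and the subsequent supervised fine-tuning pass, which in practice would use an automatic-differentiation framework.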
However, Lu et al fails to specifically teach the features of the computing system comprising one or more processors and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations.
However, this is well known in the art as evidenced by Li et al. Similar to the primary reference, Li et al discloses contrastive loss (same field of endeavor or reasonably pertinent to the problem).
Li et al discloses the computing system comprising one or more processors and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations (e.g. the system discloses a memory with instructions that is executed by a processor to perform the features of the invention. This is taught in ¶ [46]-[49].).
Computer Environment
[0046] FIG. 3 is a simplified diagram of a computing device for implementing the noise-robust contrastive learning described in FIGS. 1-2, according to some embodiments. As shown in FIG. 3, computing device 300 includes a processor 310 coupled to memory 320. Operation of computing device 300 is controlled by processor 310. And although computing device 300 is shown with only one processor 310, it is understood that processor 310 may be representative of one or more central processing units, multi-core processors, microprocessors, microcontrollers, digital signal processors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), graphics processing units (GPUs) and/or the like in computing device 300. Computing device 300 may be implemented as a stand-alone subsystem, as a board added to a computing device, and/or as a virtual machine.
[0047] Memory 320 may be used to store software executed by computing device 300 and/or one or more data structures used during operation of computing device 300. Memory 320 may include one or more types of machine readable media. Some common forms of machine readable media may include floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
[0048] Processor 310 and/or memory 320 may be arranged in any suitable physical arrangement. In some embodiments, processor 310 and/or memory 320 may be implemented on a same board, in a same package (e.g., system-in-package), on a same chip (e.g., system-on-chip), and/or the like. In some embodiments, processor 310 and/or memory 320 may include distributed, virtualized, and/or containerized computing resources. Consistent with such embodiments, processor 310 and/or memory 320 may be located in one or more data centers and/or cloud computing facilities.
[0049] In some examples, memory 320 may include non-transitory, tangible, machine readable media that includes executable code that when run by one or more processors (e.g., processor 310) may cause the one or more processors to perform the methods described in further detail herein. For example, as shown, memory 320 includes instructions for a noise-robust contrastive learning module 330 that may be used to implement and/or emulate the systems and models, and/or to implement any of the methods described further herein. In some examples, the noise-robust contrastive learning module 330 may be used to receive and handle the input 340 via a data interface 315. For example, the input 340 may include an image uploaded by a user via a user interface, a dataset of training images received via a communication interface, etc. The noise-robust contrastive learning module 330 may generate an output 350, e.g., such as a class label corresponding to the input image. In some examples, the noise-robust contrastive learning module 330 may also handle the iterative training and/or evaluation of a system or model.
Therefore, in view of Li et al, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of the computing system comprising one or more processors and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, incorporated in the device of Lu et al, in order to use a processor and memory to implement contrastive learning, which can improve the consistency of the neural network performance (as stated in Li et al ¶ [29]).
However, the combination above fails to specifically teach the features of wherein the machine-learned medical image analysis model was pretrained with self-supervised pretraining on unlabeled natural images.
However, this is well known in the art as evidenced by Lisowska. Similar to the primary reference, Lisowska discloses training on labeled and unlabeled data (same field of endeavor or reasonably pertinent to the problem).
Lisowska discloses wherein the machine-learned medical image analysis model was pretrained with self-supervised pretraining on unlabeled natural images (e.g. the system discloses training a model using anatomical images of a person in a labeled dataset or using text data in an unlabeled dataset, which is taught in ¶ [28], [29] and [45]. The captured text data can be considered as a natural image since text can be captured in a real-world scene when scanning a document.).
[0028] As already noted, the training process is performed by the model training circuitry 34 using a combination of labelled datasets 50 and unlabelled datasets 52. The labelled datasets 50 may be obtained in any suitable fashion. In the embodiment of FIG. 3 the labelled datasets 50 are obtained by an expert (for example a radiologist and/or expert in particular anatomical features, conditions or pathologies under consideration) annotating a small subset of the available relevant datasets.
[0029] The labels of the labelled dataset can be of any type suitable for a learning and/or processing task under consideration. For instance if the models are be used for segmentation purposes, the labels may identify which pixels or voxels, or regions of pixels or voxels, correspond to an anatomical feature and/or pathology of interest. Any other suitable labels may be used, for example labels indicating or more properties of subject, for instance a patient, such as presence, absence or severity of a pathology or other condition, age, sex, weight, of conditions, and/or labels indicating one or more properties of an imaging or other procedure performed on the subject. As mentioned further below, embodiments are not limited to using imaging data, and other types of labelled and unlabelled datasets are used, including for example text data.
[0045] Any suitable types of medical imaging data may be used as data sets in the training process or may be the subject of application of the final model following the training. For example, the data sets may comprise one or more of magnetic resonance (MR) data sets, computed tomography (CT) data sets, X-ray data sets, ultrasound data sets, positron emission tomography (PET) data sets, single photon emission computed tomography (SPECT) data sets according to certain embodiments. In some embodiments the data may comprise text data or any other suitable type of data as well as or instead of imaging data. For instance, in some embodiments the data comprises patient record datasets or other medical records.
Therefore, in view of Lisowska, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of wherein the machine-learned medical image analysis model was pretrained with self-supervised pretraining on unlabeled natural images, incorporated in the device of Lu, as modified by Li, in order to train a model using labeled and unlabeled data in several iterations, which can improve the accuracy of the model (as stated in Lisowska ¶ [49]).
Re claim 2: The computing system of claim 1, wherein the machine-learned medical image analysis model comprises a machine-learned diagnostic model that is configured to generate one or more medical diagnostic predictions for an input image (e.g. the contrastive prediction task performs the feature of predicting parts of the image, which is taught on page 3 of section 2 under method the Contrastive Predictive Coding (CPC) section.).
Re claim 3: The computing system of claim 1, wherein the first medical image of the patient and the second medical image of the patient were captured from different viewing angles (e.g. the input image data is separated into patches, which is considered as capturing the image data specimen at different viewing angles, which is seen in Appendix A on page 7.).
Re claim 4: The computing system of claim 1, wherein the first medical image of the patient and the second medical image of the patient were captured under different lighting conditions (e.g. the images within figures 1, 2 and 4 show different tissues photographed under different lighting.).
Re claim 5: The computing system of claim 1, wherein the first medical image of the patient and the second medical image of the patient depict different portions of a body of the patient (e.g. the slide images of the body can be from different parts of the breast or separate tissue from different breasts in the sample images; the images used are described on page 4 in the Conclusions paragraph.).
Re claim 7: The computing system of claim 1, wherein the first medical image of the patient and the second medical image of the patient comprise two different frames of a video that depict a medical procedure (e.g. the images comprise a medical procedure of sampling tissue at different locations that may be associated with cancer or normal, which is taught on page 3, in section 3 Experiments and Results under Dataset.).
Re claim 8: The computing system of claim 1, wherein processing, by the computing system, the first medical image with a machine-learned medical image analysis model comprises augmenting, by the computing system, the first medical image and processing the augmented version of the first medical image with the machine-learned medical image analysis model to generate the first embedding (e.g. the first image can be cut into smaller patches, or cropped, and/or flipped for processing in order to generate a first embedding. The augmentation is explained on page 7 in appendix A and B. The processing of the patch or instance is described on page 3 in the top two paragraphs.).
Re claim 9: The computing system of claim 8, wherein augmenting, by the computing system, the first medical image comprises cropping, by the computing system, the first medical image (e.g. the first image is cropped into patches in order to be further processed, which is taught in appendix A, B on page 7 and on page 3.).
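The augmentation mapped to claims 8 and 9 (cropping and flipping an image to produce the view that is then embedded) can be sketched as follows. This is an illustrative sketch only, not code from the Lu reference; the function name, crop size, and array shapes are hypothetical.

```python
import numpy as np

def augment(image, rng, crop=6):
    # Random crop followed by a random horizontal flip -- the kind of
    # augmentation (cropping into patches, flipping) described as producing
    # the augmented version of the first medical image before embedding.
    h, w = image.shape
    top = int(rng.integers(0, h - crop + 1))
    left = int(rng.integers(0, w - crop + 1))
    patch = image[top:top + crop, left:left + crop]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]  # horizontal flip
    return patch

rng = np.random.default_rng(1)
slide = rng.normal(size=(8, 8))   # stand-in for a region of a whole-slide image
view = augment(slide, rng)        # augmented view to be passed to the encoder
```

In a full pipeline the returned `view` would be fed to the embedding model, as in the contrastive-learning step of claim 1.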
Re claim 10: The computing system of claim 1, wherein the set of medical training images comprise: dermatological images, radiographic images, endoscopic images, ultrasound images, mammographic images, pathology images, posterior eye images, or three- dimensional scan images (e.g. the images reflect breast tissue images, which is taught in the Conclusions section on page 5.).
Re claim 18: Lu et al discloses a computing system comprising one or more computing devices that perform operations, the operations comprising:
obtaining, by the computing system, a set of medical training images that comprises a plurality of attribute-specific image subsets, wherein each attribute-specific image subset contains a plurality of different images that share a common attribute (e.g. the system obtains patch instances from segmented tissue of a patient, which is considered as a plurality of different images of the same patient’s tissue. This is taught on page 3 section Experiments and Results and page 7 in Appendix A.);
wherein the set of medical training images are associated with a target domain (e.g. the medical training images are associated with a target domain since these images are associated with the tissue of a patient or a human body, which is taught on page 3.);
for each of the plurality of attribute-specific image subsets:
obtaining, by the computing system, a first medical image and a second, different medical image that have the common attribute (e.g. the various patches can be considered as different medical images that depict a distinct view of the patient. This is described on page 3 in lines 1-5 and is also introduced in Section 2 titled method in the Contrastive Predictive Coding section.);
processing, by the computing system, the first medical image with a machine- learned medical image analysis model to generate a first embedding for the first medical image (e.g. the invention discloses processing a data sequence with a feature network that encodes an observation considered as a first image, which is taught on page 2 in section 2 under method the Contrastive Predictive Coding (CPC) section. In addition, the Deep Attention-based MIL contains a CNN that encodes instances.);
processing, by the computing system, the second medical image with the machine-learned medical image analysis model to generate a second embedding for the second medical image (e.g. the invention discloses processing a data sequence of a second observation by an encoding into an embedding. This is taught on page 2 in section 2 under method the Contrastive Predictive Coding (CPC) section.); and
modifying, by the computing system, one or more values of one or more parameters of the machine-learned medical image analysis model based at least in part on a loss function that evaluates a difference between the first embedding for the first medical image and the second embedding for the second medical image (e.g. the different patches of the image that are processed by the feature network are then compared to one another as either a positive or negative sample in order to determine a contrastive loss that is used to maximize the common information between context and other observations, which is taught on page 2 in section 2 under method the Contrastive Predictive Coding (CPC) section.); and
performing supervised fine-tuning on the machine-learned medical image analysis model using a set of labeled medical images (e.g. the fine-tuning occurs using a bag including patches of images with labels. This is taught in the MIL Implementation section on page 3 and in Appendix C: MIL Implementation on page 8.).
However, Lu et al fails to specifically teach the features of one or more non-transitory computer-readable media that collectively store instructions that, when executed by a computing system comprising one or more computing devices, cause the computing system to perform operations.
However, this is well known in the art as evidenced by Li et al. Similar to the primary reference, Li et al discloses contrastive loss (same field of endeavor or reasonably pertinent to the problem).
Li et al discloses one or more non-transitory computer-readable media that collectively store instructions that, when executed by a computing system comprising one or more computing devices, cause the computing system to perform operations (e.g. the system discloses a memory with instructions that is executed by a processor to perform the features of the invention. This is taught in ¶ [46]-[49].).
Therefore, in view of Li et al, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of one or more non-transitory computer-readable media that collectively store instructions that, when executed by a computing system comprising one or more computing devices, cause the computing system to perform operations, incorporated in the device of Lu et al, in order to use a processor and memory to implement contrastive learning, which can improve the consistency of the neural network performance (as stated in Li et al ¶ [29]).
However, the combination above fails to specifically teach the features of wherein the machine-learned medical image analysis model was pretrained with self-supervised pretraining on unlabeled natural images.
However, this is well known in the art as evidenced by Lisowska. Similar to the primary reference, Lisowska discloses training on labeled and unlabeled data (same field of endeavor or reasonably pertinent to the problem).
Lisowska discloses wherein the machine-learned medical image analysis model was pretrained with self-supervised pretraining on unlabeled natural images (e.g. the system discloses training a model using anatomical images of a person in a labeled dataset or using text data in an unlabeled dataset, which is taught in ¶ [28], [29] and [45] above. The captured text data can be considered as a natural image since text can be captured in a real-world scene when scanning a document.).
Therefore, in view of Lisowska, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of wherein the machine-learned medical image analysis model was pretrained with self-supervised pretraining on unlabeled natural images, incorporated in the device of Lu, as modified by Li, in order to train a model using labeled and unlabeled data in several iterations, which can improve the accuracy of the model (as stated in Lisowska ¶ [49]).
Re claim 19: The one or more non-transitory computer-readable media of claim 18, wherein at least one of the attribute-specific image subsets contains a plurality of different images that depict a plurality of different patients diagnosed with a common medical condition (e.g. the data sets can present images that represent different carcinoma or non-carcinoma states of tissue from various sources or patients, which is taught in the Dataset section on page 3.).
Re claim 20: The one or more non-transitory computer-readable media of claim 18, wherein at least one of the attribute-specific image subsets contains a plurality of different images that depict a plurality of body parts of a common patient that exhibit a common medical condition (e.g. image with different carcinoma or non-carcinoma can be shown from different tissues within different chest or breast tissue, which the dataset is explained in the Dataset section on page 3 and the Appendix A and B on page 7.).
Re claim 21: (New) However, Lu fails to specifically teach the features of the computing system of claim 1, wherein the unlabeled natural images comprises images which depict common real world scenes, and wherein training on the set of medical training images and fine-tuning on the set of labeled medical images are at least one of task-specific or dataset specific unlike the self-supervised pretraining on unlabeled natural images.
However, this is well known in the art as evidenced by Lisowska. Similar to the primary reference, Lisowska discloses training on labeled and unlabeled data (same field of endeavor or reasonably pertinent to the problem).
Lisowska discloses wherein the unlabeled natural images comprises images which depict common real world scenes, and wherein training on the set of medical training images and fine-tuning on the set of labeled medical images are at least one of task-specific or dataset specific unlike the self-supervised pretraining on unlabeled natural images (e.g. the invention discloses capturing text data that can be text captured, or scanned, in a real-world scene. The text data can be used for unlabeled training, which is taught in ¶ [28], [29] and [45] above. The system uses labeled data that is dataset specific, which may reflect an anatomical feature of the scan, to train a model; this is different from the text data of the unlabeled dataset. The fine-tuning occurs using the labeled data, which is taught in ¶ [29] above and [33].).
[0033] Next, the training of the student model 62a is fine-tuned using the labelled datasets 50. The combination of the training using the labelled datasets 50 and the training (e.g. fine tuning) using the unlabelled datasets may be performed in any suitable fashion, for example with the initial training using the unlabelled datasets 52 being followed by fine tuning using the labelled datasets 50, or with the training using labelled datasets 50 and unlabelled datasets 52 being performed simultaneously or in other combined fashion.
Therefore, in view of Lisowska, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of wherein the unlabeled natural images comprises images which depict common real world scenes, and wherein training on the set of medical training images and fine-tuning on the set of labeled medical images are at least one of task-specific or dataset specific unlike the self-supervised pretraining on unlabeled natural images, incorporated in the device of Lu, as modified by Li, in order to train a model using labeled and unlabeled data in several iterations, which can improve the accuracy of the model (as stated in Lisowska ¶ [49]).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Lu et al, as modified by Li et al and Lisowska, as applied to claim 1 above, and further in view of Common Knowledge of datasets (Official Notice).
Re claim 6: However, Lu et al fails to specifically teach the features of the computing system of claim 1, wherein the first medical image of the patient and the second medical image of the patient were captured at separate medical treatment visits.
However, this is well known in the art as evidenced by Common Knowledge of datasets (Official Notice). Similar to the primary reference, Common Knowledge of datasets discloses various patient images can be utilized (same field of endeavor or reasonably pertinent to the problem).
Common Knowledge of datasets discloses wherein the first medical image of the patient and the second medical image of the patient were captured at separate medical treatment visits (e.g. there are databases, like TUPAC16, that track the speed of proliferation of tumors within images. The detection and storage of a single patient’s tumor proliferation speed can occur in order to aid a doctor in determining better treatment methods for that individual, or other individuals with a similar tumor growth.).
Therefore, in view of Common Knowledge of datasets, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of wherein the first medical image of the patient and the second medical image of the patient were captured at separate medical treatment visits, incorporated in the device of Lu et al, in order to store and process a dataset of a single person’s tumor proliferation speed, which can aid a doctor in treatment methods when approaching a similar tumor growth speed.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Tellez et al discloses contrastive training.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAD S DICKERSON whose telephone number is (571)270-1351. The examiner can normally be reached Monday-Friday 10AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abderrahim Merouan can be reached at 571-270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHAD DICKERSON/ Primary Examiner, Art Unit 2682