DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed August 25, 2025, has been entered. Claims 1-15 are pending in the application. Applicant’s amendments to claims 1-15 have overcome the rejections previously set forth in the Non-Final Office Action mailed March 24, 2025.
Response to Arguments
Applicant’s arguments, see pages 1-7, filed August 25, 2025, with respect to the rejections of claims 1-15 under §102 and §103 have been fully considered and are persuasive. Therefore, the §102 and §103 rejections have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Bhargava (US 20140270457 A1), Batenchuk (US 20200123618 A1), Kincaid (US 20100128988 A1), and Gaiser (NPL: “Automated analysis of protein expression and gene amplification within the same cells of paraffin-embedded tumour tissue”).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 15 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bhargava (US 20140270457 A1).
Bhargava teaches:
A method of generating an inference-based virtually stained image, comprising:
providing a neural network (Bhargava: feed-forward neural network [0049]) image processing software running on one or more processors of a computing device (Bhargava: The computer-executable instructions can be part of, for example, a dedicated software application […] Such software can be executed, for example, on a single local computer [0049]), wherein the one or more neural networks are trained with a plurality of images of chemical stains of one or more first endogenous signals (Bhargava: A principal component analysis (PCA) was used to reduce the dimensionality of the input spectra and train a feed-forward neural network using the PCA-projected spectra as input [0119]; see Note 1A);
obtaining (Bhargava: obtaining a spectroscopic image of the sample, [0007]) and producing an image of the biological sample (Bhargava: multiple stained images can be generated from the sample [0016]);
detecting, using the neural network, the one or more virtual staining patterns in the image of the biological sample using the trained neural network (Bhargava: The spectra in the spectroscopic image (entire or reduced) are related to known staining patterns in the sample. Spectral features that allow prediction of the staining patterns are found using statistical pattern recognition approaches, such as […] artificial neural network [0044]);
parsing, using the neural network, one or more second endogenous signals of the one or more detected virtual staining patterns (Bhargava: train[ing] a feed-forward neural network using the PCA-projected spectra as input and the associated bright-field color data as output, [0119]; see Note 1B and Note 15A); and
outputting two separate virtual images of each corresponding endogenous signal, wherein the two separate virtual images (see Note 15B) are configured to be combined one or more times to selectively produce one or more new multiplexed virtual images (Bhargava: Using a combination of multiple staining results, it is possible to subsequently deduce the cell types and/or molecular transformations present. […] the method allows numerous (perhaps limitless) computed stains to be obtained from a single infrared spectroscopic image for the same sample, [0143]; see Note 15C).
Note 15A: When the feed-forward neural network is trained on the PCA-projected spectra, the network is parsing the PCA-projected spectra.
Note 15B: Bhargava teaches: “an output computed stain image from the reduced spectroscopic image (e.g., reduced IR spectra) is generated, thereby imaging the sample,” [0007]. That is, a sample may be matched with spectra (analogous to endogenous signals, as shown in Note 1A above) to generate a corresponding virtual stain image.
“FIGS. 3A-3G are digital images demonstrating that the same sample can be ‘stained’ with many different computational ‘stains’,” [0016], and that “In some examples, the method includes imaging a single sample multiple times, for example to obtain or generate a plurality of images using the method. […] the current methods permit one to perform multiple analysis on a single sample, such as generating at least 2, […] or more different output computed stain images.” [0060]. That is, two or more separate virtual images can be generated from the spectra of a single sample and are configured to be combined to produce a multiplexed image.
Note 15C: The Examiner notes that Bhargava titles the section of paragraphs [0143-0145] “Multiplexing Computed Stain Images”.
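For illustration only, the following is a minimal Python sketch of the training-and-inference pipeline described in Note 15A and Bhargava [0119]: spectra are reduced with PCA, and a feed-forward network is trained to map the PCA-projected spectra to bright-field color data. Bhargava discloses no source code; all library choices, names, and data shapes below are hypothetical.

```python
# Hypothetical sketch of the pipeline Bhargava describes (no code is
# disclosed in the reference): PCA-reduce spectra, then train a feed-forward
# network mapping PCA-projected spectra to associated bright-field colors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
spectra = rng.random((1000, 400))   # 1000 pixels x 400 spectral bands (synthetic)
stain_rgb = rng.random((1000, 3))   # paired bright-field color data (synthetic)

pca = PCA(n_components=20)          # reduce the dimensionality of the input spectra
projected = pca.fit_transform(spectra)

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
net.fit(projected, stain_rgb)       # PCA-projected spectra in, color data out

# Inference: spectra from an unstained sample yield a computed stain image.
new_spectra = rng.random((256, 400))
virtual_stain_pixels = net.predict(pca.transform(new_spectra))
```

Two such outputs, each trained against a different target stain, could then be combined pixel-wise, consistent with the multiplexing described in Note 15C.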
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 3, 4, 11, 12, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bhargava (US 20140270457 A1) in view of Batenchuk (US 20200123618 A1).
Regarding claim 1:
Bhargava teaches:
A method for generating inference-based virtually stained image annotations, comprising:
providing one or more neural networks (Bhargava: feed-forward neural network [0119]) executed by image processing software running on one or more processors of a computing device (Bhargava: The computer-executable instructions can be part of, for example, a dedicated software application […] Such software can be executed, for example, on a single local computer [0049]), wherein the one or more neural networks are trained with a plurality of images of chemical stains (Bhargava: Thus, a plurality of samples stained with a target stain (such as H&E, Bismark Brown, Nile Blue or antibody specific for a target protein […] can be used to train a network [0045]) of one or more first endogenous signals (Bhargava: A principal component analysis (PCA) was used to reduce the dimensionality of the input spectra and train a feed-forward neural network using the PCA-projected spectra as input [0119]; see Note 1A and Note 1B);
obtaining data corresponding to a biological sample (Bhargava: in process block 110, a spectroscopic image (e.g., IR absorbance data) of an unstained sample (such as one containing a tumor or portion thereof) are acquired, [0052]);
obtaining (Bhargava: obtaining a spectroscopic image of the sample, [0007]) and producing an image of the biological sample (Bhargava: multiple stained images can be generated from the sample [0016]) including one or more second endogenous signals (see Note 1B) identified by annotating techniques;
detecting one or more virtual staining patterns in the image of the biological sample using the one or more neural networks (Bhargava: The spectra in the spectroscopic image (entire or reduced) are related to known staining patterns in the sample. Spectral features that allow prediction of the staining patterns are found using statistical pattern recognition approaches, such as […] artificial neural network [0044]); and
overlaying the virtual staining patterns detected in the image of the biological sample (Bhargava: multiple stained images can be generated from the sample for different regions and at different scales. Images can be overlaid, merged or multiply highlighted, [0016]) using spatial matching techniques (Bhargava: Since the same tissue was imaged in both IR and bright-field, an affine transformation was then applied to bring all pixels into alignment, [0123]) to create the inference-based virtually stained image annotations.
Note 1A: As best understood by the examiner, the “spectra” taught by Bhargava are analogous to the “endogenous signals” of the present application. The specification of the present application recites: “the disclosure utilizes patterns from endogenous signals to digitally generate the tissue staining patterns,” [0017]. Similarly, Bhargava teaches: “The spectra in the spectroscopic image (entire or reduced) are related to known staining patterns in the sample. […] Spectral and spatial features can be used to provide a color coded image that resembles a stained image and its concordance with the control can be verified. An output computed stain image (such as one that is comparable to (e.g., provides the same information) one that would have been obtained if a stain interest, such as H&E stain or BRCA1 protein staining, was used) is generated from the entire or the reduced spectroscopic image.” [0044]. That is, the patterns related to the spectra are used to digitally generate an output stained image.
Note 1B: The “first endogenous signals” are understood to be endogenous signals or spectra utilized for training. Bhargava teaches: “train[ing] a feed-forward neural network using the PCA-projected spectra as input and the associated bright-field color data as output,” [0119].
The “second endogenous signals” are understood to be endogenous signals or spectra utilized for generating output. Bhargava teaches: “an output computed stain image from the reduced spectroscopic image (e.g., reduced IR spectra) is generated,” [0007].
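For illustration only, a minimal sketch of the spatial-matching step cited in the claim 1 mapping above (Bhargava [0123]: an affine transformation bringing the IR and bright-field pixels into alignment). The landmark coordinates and the OpenCV usage are hypothetical; Bhargava does not disclose an implementation.

```python
# Hypothetical sketch: align an IR image to a bright-field image with an
# affine transformation estimated from matched landmark points.
import numpy as np
import cv2

ir_image = np.zeros((512, 512), dtype=np.uint8)        # placeholder IR image
pts_ir = np.float32([[10, 10], [500, 20], [30, 480]])  # landmarks in the IR frame
pts_bf = np.float32([[12, 14], [498, 25], [35, 478]])  # same landmarks in bright-field

M, _ = cv2.estimateAffine2D(pts_ir, pts_bf)            # 2x3 affine matrix
aligned = cv2.warpAffine(ir_image, M, (512, 512))      # bring all pixels into alignment
```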
Bhargava fails to teach:
obtaining and producing an image of the biological sample including one or more second endogenous signals identified by annotating techniques;
Batenchuk teaches:
obtaining and producing an image of the biological sample (Batenchuk: image processing system 120 processes an image to generate one or more feature maps. A feature map can be data structure that includes an indication of the one or more biological features assigned to each patch of the image [0049]) including one or more second endogenous signals (Batenchuk: image processing system to detect biological features to infer PD-L1 status [0022]; see Note 1C) identified by annotating techniques (Batenchuk: An identification of the one or more biological features depicted by a patch may be assigned (e.g., through a label, a data structure, annotation, metadata, image parameter and/or the like) to the patch. [0048]);
Note 1C: Batenchuk teaches: “PD-L1 protein expression can represent or indicate a level of expression of corresponding PD-1 receptor. PD-L1, upon binding to the PD-1 receptor, may cause a transmission of an inhibitory signal that reduces antigen-specific T-cells and apoptosis in regulatory T-cells.” [0053]. That is, inferring PD-L1 status as cited above from [0022] may include detecting an endogenous signal (such as the inhibitory signal in [0053]), and therefore, the biological features taught by Batenchuk are analogous to the endogenous signals of the present application.
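For illustration only, a minimal sketch of the kind of “feature map” data structure described in Batenchuk [0049], in which each patch of an image carries an assignment of one or more biological features. The class and feature names below are hypothetical; Batenchuk describes the structure conceptually, not in code.

```python
# Hypothetical sketch of a per-patch feature map as described in Batenchuk.
from dataclasses import dataclass, field

@dataclass
class PatchAnnotation:
    patch_xy: tuple                                # (row, col) of the patch in the image grid
    features: list = field(default_factory=list)   # biological features assigned to the patch

feature_map = [
    PatchAnnotation((0, 0), ["tumor_cell"]),
    PatchAnnotation((0, 1), ["lymphocyte", "pd_l1_positive"]),
]

# Downstream logic can infer a slide-level status (e.g., PD-L1) from the map.
pd_l1_patches = [p for p in feature_map if "pd_l1_positive" in p.features]
```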
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Batenchuk with Bhargava. Obtaining and producing an image of the biological sample including one or more second endogenous signals identified by annotating techniques, as in Batenchuk, would benefit the Bhargava teachings by ensuring biological features of the sample are clearly identified.
Regarding claim 2:
Bhargava in view of Batenchuk teaches:
The method of claim 1 (as shown above),
Bhargava fails to explicitly teach:
wherein the annotations comprise features used by the one or more neural networks to perform semantic segmentation.
Batenchuk teaches:
wherein the annotations comprise features used by the one or more neural networks to perform semantic segmentation (Batenchuk: In some instances, segmentation can be performed, such that individual pixels and/or detected figures can be associated with a particular structure such as, but not limited to, the biological sample [0034]; see Note 2A).
Note 2A: Batenchuk teaches: “Feature detector may use segmentation, convolutional neural network, object recognition, pattern analysis, machine-learning, a pathologist, and/or the like to detect the one or more biological features,” [0048]. Batenchuk uses “and/or”, indicating that both a convolutional neural network and segmentation may be used to detect biological features. Therefore, it would be obvious to one of ordinary skill in the art to perform semantic segmentation with the one or more neural networks using features.
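For illustration only, a minimal sketch of semantic segmentation in the sense of Note 2A: every pixel is assigned a class label. The per-class scores below are synthetic; in Batenchuk they would come from a convolutional neural network.

```python
# Hypothetical sketch: per-pixel class assignment from class score maps.
import numpy as np

rng = np.random.default_rng(1)
class_scores = rng.random((3, 128, 128))   # synthetic scores for 3 classes
label_map = class_scores.argmax(axis=0)    # semantic label for every pixel
# label_map[y, x] associates each pixel with a particular structure class.
```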
Regarding claim 3:
Bhargava in view of Batenchuk teaches:
The method of claim 1 (as shown above), wherein the obtaining and producing of the image of the biological sample incorporates sequencing or imaging mass spectroscopy (Bhargava: The disclosed methods can include obtaining a spectroscopic image (e.g., infrared (IR) imaging data) of the sample, Abstract).
Regarding claim 4:
Bhargava in view of Batenchuk teaches:
The method of claim 1 (as shown above), wherein the obtaining and producing of the image of the biological sample (Batenchuk: Image collection system 104 can be configured such that each portion of a sample (e.g., slide) is manually loaded onto a stage prior to imaging and/or such that a set of portions of one or more samples (e.g., a set of slides) are automatically and sequentially loaded onto a stage [0029]) incorporates an immunohistochemistry or immunofluorescence technique (Batenchuk: The image may be of an immuno-histochemistry (IHC) slide, [0029]).
Regarding claim 11:
Bhargava in view of Batenchuk teaches:
The method of claim 1 (as shown above), further comprising multiplexing the overlayed virtual staining patterns (Bhargava: Multiplexing Computed Stain Images, [0142]) with one or both of existing virtual stains (Bhargava: Using a combination of multiple staining results, it is possible to subsequently deduce the cell types and/or molecular transformations present, [0143]) or conventional assay readouts.
Regarding claim 12:
Bhargava teaches:
A method of generating virtually-stained image annotations, comprising:
obtaining an image of a biological sample (Bhargava: obtaining a spectroscopic image of the sample, [0007]);
generating, based on the image, a virtually-stained image of the biological sample using a machine learning algorithm (Bhargava: information is applied to a network, such as a neural network, […] to produce an algorithm and parameters for the network needed to generate an output computed stain image for the test sample [0074]) executed via a computer program running on a processor (Bhargava: Any of the methods described herein can be implemented by computer-executable instructions [...] Such instructions can cause a computer to perform the method. The technologies described herein can be implemented in a variety of programming languages [0109]), wherein the machine learning algorithm is configured to detect virtual staining patterns of endogenous signals in the biological sample (Bhargava: The spectra in the spectroscopic image (entire or reduced) are related to known staining patterns in the sample. Spectral features that allow prediction of the staining patterns are found using statistical pattern recognition approaches, such as […] artificial neural network [0044]);
Bhargava fails to teach:
annotating the virtually-stained image of the biological sample with annotations for one or more biomarkers;
parsing the annotations of the virtually-stained image of the biological sample; and
overlaying the virtually-stained image of the biological sample with the parsed annotations.
Batenchuk teaches:
annotating the virtually-stained image of the biological sample with annotations for one or more biomarkers (Batenchuk: Feature detector 144 may use the original image and/or just the processed image to detect one or more biological features that are shown by the image [0048]);
parsing the annotations of the virtually-stained image of the biological sample (Batenchuk: Each patch can be analyzed by the feature detector to determine the one or more biological features shown in the patch [0048]); and
overlaying the virtually-stained image of the biological sample with the parsed annotations (Batenchuk: image annotation device 116 can transform detected annotated biological features as input annotation data (e.g., that indicate which pixel(s) of the image correspond to particular annotation characteristics). For example, image annotation device 116 associates a patient's smoking history with particular pixels in the image that depict possible smoking damage or staining [0032]).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Batenchuk with Bhargava. Annotating the virtually-stained image of the biological sample with annotations for one or more biomarkers; parsing the annotations of the virtually-stained image of the biological sample; and overlaying the virtually-stained image of the biological sample with the parsed annotations, as in Batenchuk, would benefit the Bhargava teachings by visualizing the annotations for an end-user to clearly identify the features of the biological sample.
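For illustration only, a minimal sketch of the annotate-parse-overlay sequence mapped above: parsed annotation data indicates which pixels correspond to a biomarker, and those pixels are highlighted on the virtually stained image (cf. Batenchuk [0032]). The mask and colors are hypothetical.

```python
# Hypothetical sketch: overlay parsed biomarker annotations on a virtually
# stained image by recoloring the annotated pixels.
import numpy as np

stained = np.zeros((128, 128, 3), dtype=np.uint8)   # virtually stained image (placeholder)
biomarker_mask = np.zeros((128, 128), dtype=bool)   # parsed annotation: flagged pixels
biomarker_mask[40:60, 40:60] = True

overlay = stained.copy()
overlay[biomarker_mask] = [255, 0, 0]               # highlight annotated pixels in red
```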
Regarding claim 13:
Bhargava in view of Batenchuk teaches:
The method of claim 12 (as shown above),
Bhargava fails to explicitly teach:
further comprising semantically segmenting the virtually-stained image of the biological sample.
Batenchuk teaches:
further comprising semantically segmenting the virtually-stained image of the biological sample (Batenchuk: In some instances, segmentation can be performed, such that individual pixels and/or detected figures can be associated with a particular structure such as, but not limited to, the biological sample [0034]; see Note 2A above).
Regarding claim 14:
Bhargava in view of Batenchuk teaches:
The method of claim 12 (as shown above), further comprising training the machine learning algorithm with a plurality of virtual staining patterns of endogenous signals (Bhargava: A principal component analysis (PCA) was used to reduce the dimensionality of the input spectra and train a feed-forward neural network using the PCA-projected spectra as input [0119]; see Note 1A above).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Bhargava (US 20140270457 A1) in view of Batenchuk (US 20200123618 A1) and Gaiser (NPL: Automated analysis of protein expression and gene amplification within the same cells of paraffin-embedded tumour tissue).
Bhargava in view of Batenchuk teaches:
The method of claim 4 (as shown above),
Bhargava in view of Batenchuk fails to teach:
wherein the immunohistochemistry or immunofluorescence technique comprises directly comparing virtual staining patterns for two or more antibody clones on a tissue section.
Gaiser teaches:
wherein the immunohistochemistry or immunofluorescence technique comprises directly comparing virtual (see Note 5B) staining patterns for two or more antibody clones on a tissue section (see Note 5A).
Note 5A: Gaiser teaches: “For IHC CD133 detection an anti-CD133 rabbit monoclonal antibody (1:20; clone C24B9, Cell Signaling, Danvers, MA, USA) was used,” (Pg. 2, Section 2.2: Algorithm for combined IHC and FISH, par. 1). Gaiser further teaches: “Slides were washed in 1x PBS and thereafter incubated for 1 h at room temperature (RT) with the secondary Goat Anti-Rabbit IgG-FITC antibody (1:200; clone 4030-02, SouthernBiotech, Birmingham, AL, USA).” (Pg. 2, Section 2.2: Algorithm for combined IHC and FISH, par. 1). Gaiser further teaches: “Performing fluorescence immunophenotyping and FISH on the same tumour tissue slide enables the comparison of protein expression and gene copy numbers detected within the same tumour cells.” (Pg. 3, Section 4: Discussion, par. 1). That is, using two antibody clones, Gaiser was able to compare protein expression and gene copy numbers within the same cells. To the best of the examiner’s knowledge, this method is analogous to comparing staining patterns for two or more antibody clones on a tissue section.
Note 5B: Gaiser teaches: “we developed a protocol for subsequent protein and gene copy number detection and analysis on the same slide using an automated image acquisition and analysis software including image relocation and a function to mark and count in areas of interest (Fig. 1).” (Pg. 2, par. 1). Gaiser further showcases a “Screen capture of image analysis” in Figure 3. It is reasonable to conclude that while Gaiser does not virtually stain images, Gaiser does compare “virtual” staining patterns (i.e., the patterns have been digitized before comparison).
Furthermore, while Gaiser does not explicitly teach virtually staining the tissue sample, Bhargava teaches that a computer may generate a stained image without having to stain the tissue manually: “FIGS. 1C-1E show how the disclosed methods can be used to obtain an image (FIG. 1E) that looks similar to the H&E image (FIG. 1B), without staining the tissue.” [0042]. Therefore, it is reasonable to conclude that it would be obvious to one of ordinary skill in the art to simulate the method of Gaiser using a virtual staining system on a computer.
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Gaiser with Bhargava in view of Batenchuk. Comparing staining patterns for two or more antibody clones on a tissue section, as in Gaiser, would benefit the Bhargava in view of Batenchuk teachings by enabling the user to quickly identify patterns they are looking for without staining the physical tissue sample.
Claims 7, 8, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Bhargava (US 20140270457 A1) in view of Batenchuk (US 20200123618 A1) and Kincaid (US 20100128988 A1).
Regarding claim 7:
Bhargava in view of Batenchuk teaches:
The method of claim 1 (as shown above),
Bhargava in view of Batenchuk fails to explicitly teach:
wherein an opacity level of the virtual staining patterns may be adjusted to enable focusing on specific antibody clones.
Kincaid teaches:
wherein an opacity level of the virtual staining patterns may be adjusted (Kincaid: In at least one embodiment, the adjusting comprises varying the transparency/opacity of the virtual staining [0022]) to enable focusing on specific antibody clones (see Note 7A).
Note 7A: Allowing the user to adjust the opacity level of each virtual staining pattern would inherently enable the user to focus on the pattern of a specific antibody clone by de-emphasizing the remaining patterns.
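For illustration only, a minimal sketch of varying the transparency/opacity of a virtual stain layer as in Kincaid [0022], using standard alpha compositing. The names and values below are hypothetical; Kincaid discloses no code.

```python
# Hypothetical sketch: standard alpha compositing of a stain layer over a
# base image; raising the opacity of one clone's layer focuses attention on it.
import numpy as np

base = np.zeros((128, 128, 3), dtype=np.float32)         # underlying image
clone_layer = np.ones((128, 128, 3), dtype=np.float32)   # one clone's virtual stain

def composite(base, layer, opacity):
    """Blend a stain layer over the base at the given opacity in [0, 1]."""
    return (1.0 - opacity) * base + opacity * layer

faded = composite(base, clone_layer, 0.2)    # de-emphasize this clone
focused = composite(base, clone_layer, 0.9)  # focus on this clone
```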
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Kincaid with Bhargava in view of Batenchuk. Having an opacity level of the virtual staining patterns be adjusted to enable focusing on specific antibody clones, as in Kincaid, would benefit the Bhargava in view of Batenchuk teachings by enabling the user to quickly identify patterns they are looking for and save time that would be otherwise spent searching for them.
Regarding claim 8:
Bhargava in view of Batenchuk teaches:
The method of claim 1 (as shown above),
Bhargava in view of Batenchuk fails to teach:
wherein the virtual staining patterns render pattern differences that allow for quantitative analysis.
Kincaid teaches:
wherein the virtual staining patterns render pattern differences that allow for quantitative analysis (Kincaid: Different combinations of selecting and deselecting can be performed until the user positively identifies the individual attributes that are being represented by a combination color. [0076]; Kincaid: each of the attributes can be quantitated and these quantitated, extracted values (e.g., numbers) 52 can then be displayed in a table 50 that can be displayed on the display 10 of user interface 100 [0068], see Note 8A).
Note 8A: Kincaid teaches: “in FIG. 4, all three attributes 54 have been selected 56 for display on all cells 14 (all cells having been selected by checking attribute box feature 56 for that column) from which the attributes were measured and/or calculated. In image 12 display 10 shows cells 14 characterized by only one of the attributes as shown by a color of red, green or blue. However, when more than one attribute is present in a cell, the colors become mixed,” [0076]. That is, the biological features in the virtual stain may be color coded, enabling a viewer to visualize the differences between various patterns and features.
Kincaid further teaches in [0068] that attributes may be “quantitated” and displayed to a user interface. Kincaid further teaches that: “Attributes can include, but are not limited to, size (e.g., cell size or size of a particular type of sub-cellular component), shape (e.g., cell shape or shape of a sub-cellular component), number of nuclei, etc.” While Kincaid lists biological features of the cell, it would be obvious to one of ordinary skill in the art to expand the attributes to include, for example, the biological features taught by Batenchuk, because Batenchuk teaches: “For each portion of the one or more portions of the microscopic image, a biological feature is identified. The biological feature is of a set of possible biological features that is represented by the portion,” [0007]. Therefore, by “quantitating” the biological features and displaying them to a user, Kincaid enables quantitative analysis.
Quantitating and color coding the biological features, as in Kincaid, is analogous to rendering pattern differences that allow for quantitative analysis, as the user will be able to quickly see what cells have which attributes and how many have a certain attribute in a given area.
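For illustration only, a minimal sketch of “quantitating” per-cell attributes and tabulating them in the sense of Kincaid [0068]. The attribute names and values are hypothetical.

```python
# Hypothetical sketch: quantitated per-cell attributes support simple
# quantitative analysis of pattern differences.
cells = [
    {"cell_id": 1, "size_um2": 110.0, "nuclei": 1},
    {"cell_id": 2, "size_um2": 240.0, "nuclei": 2},
    {"cell_id": 3, "size_um2": 180.0, "nuclei": 1},
]

multinucleated = sum(1 for c in cells if c["nuclei"] > 1)
print(f"{multinucleated} of {len(cells)} cells are multinucleated")
```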
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Kincaid with Bhargava in view of Batenchuk. Having the virtual staining patterns render pattern differences that allow for quantitative analysis, as in Kincaid, would benefit the Bhargava in view of Batenchuk teachings by enabling the user to quickly identify patterns they are looking for and save time that would be otherwise spent searching for them.
Regarding claim 9:
Bhargava in view of Batenchuk teaches:
The method of claim 1 (as shown above),
Bhargava in view of Batenchuk fails to teach:
further comprising individually manipulating a first virtual staining pattern of the overlayed virtual staining patterns.
Kincaid teaches:
further comprising individually manipulating (Kincaid: In at least one embodiment, the user interface includes a user selectable feature for each of the attributes displayed in a table, wherein selection of a feature for an attribute causes the user interface to virtually stain the image with virtual staining for the selected attribute, [0030], Kincaid: selecting, by a user, one of the cells in the image; and selecting, by a user, a feature for finding all other cells in the image having similar attributes, wherein upon the selecting a feature [0019]; see Note 9A) a first virtual staining pattern of the overlayed virtual staining patterns (Kincaid: staining representing classes of cells or sub-cellular components may replace or overlay any existing virtual stain being viewed, [0086]).
Note 9A: Kincaid teaches that individual features of a cell can be selected by the user. It is reasonable to conclude that the selecting of biological features taught by Kincaid is analogous to “individual manipulation” of the stains of the present application. Therefore, when combined with the teachings of Bhargava in view of Batenchuk, it would be obvious to allow the user to individually manipulate a virtual staining pattern of the overlaid virtual staining pattern.
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Kincaid with Bhargava in view of Batenchuk. Individually manipulating a virtual staining pattern of the overlayed virtual staining patterns, as in Kincaid, would benefit the Bhargava in view of Batenchuk teachings by enabling the user to fine-tune the annotations and patterns to best fit how they would analyze the patterns of the biological sample (Kincaid: The user is provided flexibility to combine attributes and thus display any combination of attributes desired on the image 12 [0076]).
Regarding claim 10:
Bhargava in view of Batenchuk and Kincaid teaches:
The method of claim 9 (as shown above), wherein the manipulating includes adjusting an intensity of each virtual staining pattern (Kincaid: For each attribute the value can be encoded by varying the intensity of the attribute's color in proportion to the value. This color gradient provides a visual representation of the attribute values so that variations in attribute values can be visually perceived across the population of cells/sub-cellular components [0074]) in real time (Kincaid: the image is an image produced by an instrument in real time [0015]).
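For illustration only, a minimal sketch of the intensity adjustment described in Kincaid [0074]: an attribute’s value is encoded by scaling the intensity of the attribute’s color in proportion to the value. The names below are hypothetical.

```python
# Hypothetical sketch: encode attribute values as a color-intensity gradient.
import numpy as np

values = np.array([0.1, 0.5, 1.0])       # normalized attribute values per cell
base_color = np.array([0.0, 1.0, 0.0])   # attribute's color (green)
encoded = values[:, None] * base_color   # intensity scales with the value
# Row i of `encoded` is the display color for cell i.
```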
Allowable Subject Matter
Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claim 6 recites: “creating a unique virtual multiclonal antibody stain from multiple monoclonal clones, and titering to a desired expression level.”
Bhargava, Batenchuk, and Kincaid fail to teach monoclonal antibodies or titering to a desired expression level.
Gaiser teaches an “anti-CD133 rabbit monoclonal antibody” that is used to stain a tissue sample. However, Gaiser does not teach creating a unique virtual multiclonal antibody stain from the monoclonal antibody. Furthermore, Gaiser does not teach titering to a desired expression level.
Therefore, none of the other prior art searched or on the record teaches, suggests, or renders obvious the limitations of claim 6.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT ALEXANDER PROVIDENCE whose telephone number is (571)270-5765. The examiner can normally be reached Monday-Thursday 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached on (571)270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VINCENT ALEXANDER PROVIDENCE/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617