DETAILED ACTION
Claims 24-26 have been added.
Claims 1, 3, 6-13, 16-19 and 21-26 are currently pending.
Response to Arguments
Applicant's arguments filed 2/18/26 have been fully considered but they are not persuasive.
The Applicant argues on pages 12-13 of the response in essence that: It is plainly not practical to mentally perform steps such as "applying a second machine learning system to determine a mutational signature ratio vector for the one or more extracted visual features," "defining, based on the mutational signature ratio vector, a new tumor subtype signature by: generating a subtype definition distinct from one or more stored known mutational signatures, when the mutational signature ratio vector does not fall within a known category of mutational signatures," and "updating parameters of the second machine learning system based on the mutational signature ratio vector and the new tumor subtype signature." Accordingly, Applicant respectfully requests that the "Mental Processes" rejection be withdrawn.
Examining medical images for tumor signatures has historically been performed mentally by a doctor. While the claims further include a machine learning system, the limitations are generic computer functions that are well understood, routine and conventional activities previously known in the industry. Claims can recite a mental process even if they are claimed as being performed on a computer. See MPEP § 2106.04(a)(2)(III)(C). While the Applicant contends that it is plainly not practical to mentally perform the recited limitations, the Applicant has not shown that a doctor cannot perform the recited limitations mentally.
The Applicant argues on pages 15-16 of the response in essence that: Hosseini is not identifying mutational signatures, but rather the tissue types of individual patches within an image. Hosseini is thus silent to and does not disclose "defining, based on the mutational signature ratio vector, a new tumor subtype signature by: generating a subtype definition distinct from one or more stored known mutational signatures, when the mutational signature ratio vector does not fall within a known category of mutational signatures" nor "updating parameters of the second machine learning system based on the mutational signature ratio vector and the new tumor subtype signature," as recited in claim 1.
Hosseini discloses identifying abnormal tissue regions that are cancerous tumors (paragraph 179). Cancerous tumors are mutational signatures. Kozlowski discloses updating parameters of the second machine learning system based on the mutational signature ratio vector and the new tumor subtype signature (paragraph 85, The generated classifications can be verified by human agents and, should correction be needed, the digital pathology image processing system 310 (e.g., deep-learning neural network) can be retrained using the new data. This cycle can repeat, with the expectation that fewer interventions will be required to improve the accuracy rate on previously unseen examples. Additionally, once a specified level of accuracy has been reached, the labels generated by the digital pathology image processing system 310 can be used as a ground-truth for training).
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 3, 6-13, 16-19 and 21-26 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventors, at the time the application was filed, had possession of the claimed invention.
Claim 1 recites “updating parameters of the second machine learning system based on the mutational signature ratio vector and the new tumor subtype signature.” The Applicant’s specification discusses saving and/or outputting the trained machine learning system (e.g., learned parameters of a neural network of the trained machine learning system) to electronic storage (paragraph 95). However, the Applicant’s specification does not discuss updating parameters based on the new tumor subtype signature. Claims 13 and 19 contain similar limitations and are likewise rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3, 6-13, 16-19 and 21-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims recite the abstract idea of detecting tumors in images. The claimed invention is similar to other claims which have been found directed to ineligible subject matter. See Dental Monitoring SAS v. Align Tech., Inc., No. C 22-07335 WHA, 2024 BL 169235 (N.D. Cal. May 16, 2024) (Claims directed to computer systems designed to improve the analysis of medical images for diagnosing and monitoring cancer and other diseases were found ineligible).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims simply automate the process traditionally used by physicians to analyze medical images to identify regions of cancer risk. The steps of “identifying, one or more neoplasms in each received digital image”, “extracting one or more visual features from each identified neoplasm, wherein the extracted visual features are neoplasm embeddings”, “identifying a mutational signature ratio vector for the one or more extracted visual features”, “determining, for the mutational signature ratio vector, whether a largest value in the mutational signature ratio vector is below a predetermined certainty threshold”, “upon determining that the largest value in the mutational signature ratio vector is below the predetermined certainty threshold, defining, based on the mutational signature ratio vector, a new tumor subtype signature by generating a subtype definition distinct from one or more stored known mutational signatures, when the mutational signature ratio vector does not fall within a known category of mutational signatures” and “flagging the at least one patient and corresponding neoplasms as having an unknown mutational signature” do not preclude the steps from being performed in the human mind. Furthermore, “patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.” Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025).
The limitations of “receiving one or more digital images into electronic storage for at least one patient”, “applying a trained machine learning system to determine a mutational signature ratio vector for the one or more extracted visual features”, “saving the new tumor subtype signature with corresponding visual features, of the extracted visual features, to storage”, and “updating parameters of the second machine learning system based on the mutational signature ratio vector and the new tumor subtype signature” are generic computer functions that are well understood, routine and conventional activities previously known in the industry. That is, other than reciting “by a processor,” nothing in the claim precludes the steps from practically being performed in the human mind.
The limitations “outputting an indication of the unknown mutational signature as a spatially organized representation to a display and/or storage” and “producing an output overlay of the one or more digital images, the output overlay including an identifier of the unknown mutational signature” are insignificant extra-solution activity. See MPEP § 2106.05(g). The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional.
The dependent claims, such as those including the steps of “segmenting” or “clustering”, likewise do not preclude the steps from being performed in the human mind. The dependent claims recite activity traditionally used by physicians to analyze medical images to identify regions of cancer risk.
The claims do not provide an inventive concept as they do not provide any improvement in machine learning technology. Neither the claims nor the specification details a novel algorithm or otherwise discloses an improvement to machine learning. Thus, even when viewed as a whole, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea.
Claims 24-26 include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims recite improvements to the machine learning models to be applied, and are not considered generic machine learning.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 9, 10, 12, 13, 19 and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Kozlowski et al. US Publication 2023/0162515 (hereafter “Kozlowski”) in view of Hosseini et al. US Publication 2020/0349707 (hereafter “Hosseini”).
Referring to claims 1, 13 and 19, Kozlowski discloses a computer-implemented method for identifying a mutational signature, comprising:
receiving one or more digital images into electronic storage for at least one patient (paragraph 88, The method 700 can begin at step 710, where the digital pathology image processing system 310 receives or otherwise accesses a digital pathology image of a tissue sample);
identifying, by a first machine learning model, one or more neoplasms in each received digital image (paragraph 90, At step 720, the digital pathology image processing system 310 identifies and classifies one or more image features (e.g., histologies, mutations, etc.) in each of the patches);
extracting one or more visual features from each identified neoplasm (paragraph 90, at step 825, generates one or more labels for the one or more image features identified in each patch of the digital pathology image using a machine learning model, where each label may indicate a particular type of condition (e.g., cancer type, type of tumor cell, mutation type, etc.) in the tissue sample); and
applying a second machine learning system to determine a mutational signature ratio vector for the one or more extracted visual features (paragraph 92, At step 735, the digital pathology image processing system 310 computes a heterogeneity metric using the labels generated in step 725. As another example, the heterogeneity metric may indicate, for the various mutations or gene variants identified in FIG. 2A, the percentage of each of the KRAS and FGFR mutations in a given tissue sample of a subject or patient. As another example, the heterogeneity metric may provide a metric quantifying a degree or magnitude of heterogeneity or otherwise contribute to such a metric); and
outputting an indication of the mutational signature as a spatially organized representation to a display and/or storage (paragraph 45, In some embodiments, the output generating module 336 may provide the subject assessment 280 for display to a user); and
updating parameters of the second machine learning system based on the mutational signature ratio vector and the new tumor subtype signature (paragraph 85, The generated classifications can be verified by human agents and, should correction be needed, the digital pathology image processing system 310 (e.g., deep-learning neural network) can be retrained using the new data. This cycle can repeat, with the expectation that fewer interventions will be required to improve the accuracy rate on previously unseen examples. Additionally, once a specified level of accuracy has been reached, the labels generated by the digital pathology image processing system 310 can be used as a ground-truth for training).
While Kozlowski discloses extracting visual features, Kozlowski does not disclose expressly that those visual features are neoplasm embeddings. While Kozlowski discloses identifying a mutational signature ratio vector, Kozlowski does not disclose expressly determining whether a largest value in the mutational signature ratio vector is below a predetermined certainty threshold.
Hosseini discloses extracting one or more visual features from each identified neoplasm, wherein the extracted visual features are neoplasm embeddings (paragraph 189, The feature extractor can generate a feature vector 480 that represents the tissue segments identified in the training image patch at an abstract level);
determining, for the mutational signature ratio vector, whether a largest value in the mutational signature ratio vector is below a predetermined certainty threshold (paragraph 194, However, when the feature classifier determines that the confidence score is below the confidence threshold, the feature classifier can indicate that the identified tissue type is incorrect);
upon determining that the largest value in the mutational signature ratio vector is below the predetermined certainty threshold, defining, based on the mutational signature ratio vector, a new tumor subtype signature by:
generating a subtype definition distinct from one or more stored known mutational signatures, when the mutational signature ratio vector does not fall within a known category of mutational signatures (paragraph 203, when the image diagnostic system 110 determines that the sample confidence score 572 is not within the acceptable range of the class confidence score 570, the image diagnostic system 110 can determine that the sample tissue is abnormal and generate a notification requiring further review of the sample tissue by a pathologist);
flagging the at least one patient and corresponding neoplasms as having an unknown mutational signature (paragraph 203, when the image diagnostic system 110 determines that the sample confidence score 572 is not within the acceptable range of the class confidence score 570, the image diagnostic system 110 can determine that the sample tissue is abnormal and generate a notification requiring further review of the sample tissue by a pathologist); and
outputting an indication of the unknown mutational signature as a spatially organized representation to a display and/or storage (paragraph 216, The class activation map can provide a visual representation of the confidence scores generated by the feature classifier without requiring the tissue type labels);
saving the new tumor subtype signature with corresponding visual features, of the extracted visual features, to storage (paragraph 255, At 1024, the image diagnostic system 110 stores the identified abnormal image patches in the data storage 114, 108. The identified abnormal image patches can be stored in a separate, dedicated database provided by the data storage 114, 108, in some embodiments).
At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to extract neoplasm embeddings and to determine whether a largest value in the mutational signature ratio vector is below a predetermined certainty threshold. The motivation for doing so would have been to increase the accuracy and efficiency of identifying tumors and to inform the user whether the diagnosis is accurate. Therefore, it would have been obvious to combine Hosseini with Kozlowski to obtain the invention as specified in claims 1, 13 and 19.
Referring to claim 3, Kozlowski discloses wherein identifying one or more neoplasms includes segmenting each received digital image into subregions (paragraph 89, At step 715, the digital pathology image processing system 310 subdivides the image into patches).
Referring to claim 9, Kozlowski discloses wherein receiving one or more digital images for at least one patient includes receiving a plurality of digital images for a plurality of patients (paragraph 98, The method 800 can begin at step 910, where the digital pathology image processing system 310 accesses a plurality of digital pathology images that are respectively associated with a plurality of subjects or patients), wherein applying the second machine learning system to identify the mutational signature ratio vector includes identifying a plurality of mutational signature ratio vectors (paragraph 92, At step 735, the digital pathology image processing system 310 computes a heterogeneity metric using the labels generated in step 725. As another example, the heterogeneity metric may indicate, for the various mutations or gene variants identified in FIG. 2A, the percentage of each of the KRAS and FGFR mutations in a given tissue sample of a subject or patient. As another example, the heterogeneity metric may provide a metric quantifying a degree or magnitude of heterogeneity or otherwise contribute to such a metric), and the method further includes:
receiving patient information for each patient among the plurality of patients (paragraph 98, The method 800 can begin at step 910, where the digital pathology image processing system 310 accesses a plurality of digital pathology images that are respectively associated with a plurality of subjects or patients);
determining a set of patients who have similar clinical phenotypes (paragraph 98, In particular embodiments, this includes receiving image data of tissue samples from non-small cell lung cancer (NSCLC) patients); and
determining disease subtypes based on the identified mutational signature ratio vectors of the determined set of patients (paragraph 93, At step 740, the digital pathology image processing system 310 generates a subject assessment based on the computed heterogeneity or the heterogeneity metric. The subject assessment can include, as an example and not limitation, a subject diagnosis, prognosis, treatment recommendation, or other similar assessment based on the heterogeneity of the features in the digital pathology image).
Referring to claim 10, Kozlowski discloses wherein receiving one or more digital images for at least one patient includes receiving a plurality of digital images for a plurality of patients (paragraph 98, The method 800 can begin at step 910, where the digital pathology image processing system 310 accesses a plurality of digital pathology images that are respectively associated with a plurality of subjects or patients), wherein applying the second machine learning system to identify the mutational signature ratio vector includes identifying a plurality of mutational signature ratio vectors (paragraph 92, At step 735, the digital pathology image processing system 310 computes a heterogeneity metric using the labels generated in step 725. As another example, the heterogeneity metric may indicate, for the various mutations or gene variants identified in FIG. 2A, the percentage of each of the KRAS and FGFR mutations in a given tissue sample of a subject or patient. As another example, the heterogeneity metric may provide a metric quantifying a degree or magnitude of heterogeneity or otherwise contribute to such a metric), and the method further includes:
receiving treatment information for each patient among the plurality of patients; and
training a machine learning system that predicts a treatment response based on the identified mutational signature ratio vectors and received treatment information (paragraph 93, At step 740, the digital pathology image processing system 310 generates a subject assessment based on the computed heterogeneity or the heterogeneity metric. The subject assessment can include, as an example and not limitation, a subject diagnosis, prognosis, treatment recommendation, or other similar assessment based on the heterogeneity of the features in the digital pathology image).
Referring to claim 12, Kozlowski discloses wherein receiving one or more digital images for at least one patient includes receiving a plurality of digital images for a plurality of patients (paragraph 98, The method 800 can begin at step 910, where the digital pathology image processing system 310 accesses a plurality of digital pathology images that are respectively associated with a plurality of subjects or patients), wherein applying the second machine learning system to identify the mutational signature ratio vector includes identifying a plurality of mutational signature ratio vectors (paragraph 92, At step 735, the digital pathology image processing system 310 computes a heterogeneity metric using the labels generated in step 725. As another example, the heterogeneity metric may indicate, for the various mutations or gene variants identified in FIG. 2A, the percentage of each of the KRAS and FGFR mutations in a given tissue sample of a subject or patient. As another example, the heterogeneity metric may provide a metric quantifying a degree or magnitude of heterogeneity or otherwise contribute to such a metric), and the method further includes:
clustering extracted visual features for the plurality of patients (paragraph 102, At step 830, the digital pathology image processing system 310 may train a machine-learning model (e.g., deep-learning neural network) based on the labeled set of patches. Once trained, the machine-learning model may be able to classify tissue patches using whole-slide level labels).
Kozlowski does not disclose expressly determining that a mutational signature ratio vector corresponds to an unknown mutagen.
Hosseini discloses determining that a mutational signature ratio vector among the identified mutational signature ratio vectors corresponds to an unknown mutagen (paragraph 194, However, when the feature classifier determines that the confidence score is below the confidence threshold, the feature classifier can indicate that the identified tissue type is incorrect).
At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to identify an unknown mutational signature. The motivation for doing so would have been to inform the user whether the diagnosis is accurate. Therefore, it would have been obvious to combine Hosseini with Kozlowski to obtain the invention as specified in claim 12.
Referring to claims 21-23, Hosseini discloses producing an output overlay of the one or more digital images, the output overlay including an identifier of the unknown mutational signature (paragraph 203, when the image diagnostic system 110 determines that the sample confidence score 572 is not within the acceptable range of the class confidence score 570, the image diagnostic system 110 can determine that the sample tissue is abnormal and generate a notification requiring further review of the sample tissue by a pathologist) (paragraph 216, The class activation map can provide a visual representation of the confidence scores generated by the feature classifier without requiring the tissue type labels).
Claims 6-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kozlowski et al. US Publication 2023/0162515 and Hosseini et al. US Publication 2020/0349707 as applied to claims 1 and 13 above, and further in view of Tomasetti et al. US Publication 2022/0301710 (hereafter “Tomasetti”).
Referring to claims 6 and 16, Kozlowski discloses wherein receiving one or more digital images for at least one patient includes receiving a plurality of digital images for a plurality of patients (paragraph 98, The method 800 can begin at step 910, where the digital pathology image processing system 310 accesses a plurality of digital pathology images that are respectively associated with a plurality of subjects or patients), wherein applying the second machine learning system to identify the mutational signature ratio vector includes identifying a plurality of mutational signature ratio vectors (paragraph 92, At step 735, the digital pathology image processing system 310 computes a heterogeneity metric using the labels generated in step 725. As another example, the heterogeneity metric may indicate, for the various mutations or gene variants identified in FIG. 2A, the percentage of each of the KRAS and FGFR mutations in a given tissue sample of a subject or patient. As another example, the heterogeneity metric may provide a metric quantifying a degree or magnitude of heterogeneity or otherwise contribute to such a metric).
Kozlowski does not disclose expressly identifying a set of patients that have an unknown mutational signature.
Tomasetti discloses wherein the method further comprises identifying a set of patients (paragraph 138, The significance of the signatures can be assessed by their ability to distinguish between groups of patients, i.e. exposed vs unexposed, or younger vs older patients) among the plurality of patients that have an unknown mutational signature (paragraph 113, this approach could lead to the detection of a sizable fraction of mutations that cannot be attributed to any known source, potentially leading to new insights into pathogenesis, and in particular, avoidable pathogenic agents).
At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to identify a set of patients that have an unknown mutational signature. The motivation for doing so would have been to identify potential patterns that can be studied to become known mutational signatures. Therefore, it would have been obvious to combine Tomasetti with Kozlowski to obtain the invention as specified in claims 6 and 16.
Referring to claims 7 and 17, Kozlowski discloses clustering extracted visual features for the identified set of patients (paragraph 102, At step 830, the digital pathology image processing system 310 may train a machine-learning model (e.g., deep-learning neural network) based on the labeled set of patches. Once trained, the machine-learning model may be able to classify tissue patches using whole-slide level labels).
Referring to claims 8 and 18, Kozlowski discloses receiving patient information for each patient among the plurality of patients (paragraph 98, The method 800 can begin at step 910, where the digital pathology image processing system 310 accesses a plurality of digital pathology images that are respectively associated with a plurality of subjects or patients); and
determining, based on the received patient information and clustered extracted visual features, whether any of the signatures are associated with mutagens (paragraph 93, At step 740, the digital pathology image processing system 310 generates a subject assessment based on the computed heterogeneity or the heterogeneity metric. The subject assessment can include, as an example and not limitation, a subject diagnosis, prognosis, treatment recommendation, or other similar assessment based on the heterogeneity of the features in the digital pathology image).
Tomasetti discloses determining, based on the received patient information and clustered extracted visual features, whether any of the unknown mutational signatures of the set of patients are associated with mutagens (paragraph 113, this approach could lead to the detection of a sizable fraction of mutations that cannot be attributed to any known source, potentially leading to new insights into pathogenesis, and in particular, avoidable pathogenic agents).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kozlowski et al. US Publication 2023/0162515 and Hosseini et al. US Publication 2020/0349707 as applied to claim 1 above, and further in view of Hafez et al. US Publication 2022/0148736 (hereafter “Hafez”).
Referring to claim 11, Kozlowski discloses wherein receiving one or more digital images for at least one patient includes receiving a plurality of digital images for a plurality of patients (paragraph 98, The method 800 can begin at step 910, where the digital pathology image processing system 310 accesses a plurality of digital pathology images that are respectively associated with a plurality of subjects or patients), wherein applying the trained machine learning system to identify the mutational signature ratio vector includes identifying a plurality of mutational signature ratio vectors (paragraph 92, At step 735, the digital pathology image processing system 310 computes a heterogeneity metric using the labels generated in step 725. As another example, the heterogeneity metric may indicate, for the various mutations or gene variants identified in FIG. 2A, the percentage of each of the KRAS and FGFR mutations in a given tissue sample of a subject or patient. As another example, the heterogeneity metric may provide a metric quantifying a degree or magnitude of heterogeneity or otherwise contribute to such a metric), and the method further includes:
receiving patient information for each of the plurality of patients (paragraph 98, The method 800 can begin at step 910, where the digital pathology image processing system 310 accesses a plurality of digital pathology images that are respectively associated with a plurality of subjects or patients).
Kozlowski does not disclose expressly determining whether any of the mutational signature ratio vectors are associated with certain geographic locations.
Hafez discloses receiving an indication of a geographic location of each patient (paragraph 56, A clinical module (not shown) may comprise a feature collection associated with information derived from clinical records of a patient, which can include records from family members of the patient. These may be abstracted from unstructured clinical documents, EMR, EHR, or other sources of patient history. Information may include patient symptoms, diagnosis, treatments, medications, therapies, hospice, responses to treatments, laboratory testing results, medical history, geographic locations of each, demographics, or other features of the patient which may be found in the patient's medical record); and
determining, based on the received indications of the geographic locations, whether any of the mutational signature ratio vectors are associated with certain geographic locations (paragraph 104, At step 630, the system may calculate outcome targets for a horizon window and outcome event. Outcome events may be the objectives, and horizon windows may be the time periods such that an objective/target pair is calculated).
At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to determine whether any of the mutational signature ratio vectors are associated with certain geographic locations. The motivation for doing so would have been to improve the accuracy of the patient’s diagnosis. Therefore, it would have been obvious to combine Hafez with Kozlowski to obtain the invention as specified in claim 11.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER K HUNTSINGER whose telephone number is (571)272-7435. The examiner can normally be reached Monday - Friday 8:30 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Q Tieu can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER K HUNTSINGER/Primary Examiner, Art Unit 2682