Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
In the Amendment dated 05 December 2025, the following occurred:
Claims 1, 3-5, 13, 15, and 16 were amended.
Claims 2, 6-8, 14, and 17-19 were canceled.
Claims 1, 3-5, 9-13, 15, 16, and 20 are pending.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(l). The following figure is unsatisfactory for reproduction because it contains text within shaded areas:
Fig. 4
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-5, 9-13, 15, 16, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1 and 13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
The claims recite systems, methods, and instrumentalities for automatic medical report generation and thus fall within the statutory categories of machine and process; therefore, the claims satisfy Step 1.
Step 2A1
The limitations (Claim 13 being representative) of obtaining at least a first type of data associated with a medical procedure and a second type of data associated with the medical procedure; generating… first textual descriptions based on the first type of data, wherein the first type of data includes a video recording of the medical procedure…, and wherein the first textual descriptions are associated with multiple temporal levels, each of the multiple temporal levels corresponding to a respective time period of the medical procedure; generating… second textual descriptions based on the second type of data that is different than the first type of data, wherein the second textual descriptions are also associated with the multiple temporal levels; producing a raw medical report that describes the medical procedure based at least on the first textual descriptions and the second textual descriptions, wherein producing the raw medical report comprises: establishing a correspondence between one or more of the first textual descriptions and one or more of the second textual descriptions that are associated with each of the multiple temporal levels; concatenating the corresponding textual descriptions to form a per-temporal-level textual description for each of the multiple temporal levels; and aggregating the per-temporal-level textual description for each of the multiple temporal levels into the raw medical report; refining the raw medical report…, as drafted, recite a process that, under the broadest reasonable interpretation, falls within the grouping of certain methods of organizing human activity (i.e., managing personal behavior, including following rules or instructions).
That is, other than reciting an apparatus and method implemented by one or more processors (a general-purpose computing device), the claimed invention amounts to managing personal behavior or interaction between people. It represents a series of rules or instructions for the production of a medical report. The Examiner notes that certain “method[s] of organizing human activity” include a person’s interaction with a computer (see MPEP 2106.04(a)(2)(II)). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A2
This judicial exception is not integrated into a practical application. In particular, claim 1 recites the additional element of one or more processors that implement the identified abstract idea, while claim 13 does not recite any particular technological environment. The processor is not described by the Applicant as anything other than a generic component and is recited at a high level of generality (i.e., a generic computer component) such that it amounts to no more than mere instructions to apply the exception using a generic computer or components thereof. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
The claim further recites the additional elements of using a first machine learning model, a second machine learning model, and a large language model to produce a medical report. This represents mere instructions to implement the abstract idea on a generic computer. Implementing an abstract idea using a generic computer or components thereof does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. See, e.g., Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 at 10 (Fed. Cir. April 18, 2025) (finding that claims that do no more than apply established methods of machine learning to a new data environment are ineligible). The Examiner notes that the machine learning models are described in the Specification at Para. 0024 as encompassing natural language processing and computer vision capabilities. The Examiner further notes that an LLM is a trained ML/AI model. Alternatively, or in addition, the application of the machine learning models to the data merely confines the use of the abstract idea (i.e., the trained models) to a particular technological environment or field of use (the noted types of ML) and thus fails to add an inventive concept to the claims. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the noted steps amounts to no more than mere instructions to apply the exception using a generic computer component, which cannot provide an inventive concept (“significantly more”).
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a first machine learning model, a second machine learning model, and a large language model were determined to represent “apply it” on a generic computer. This has been re-evaluated under the “significantly more” analysis and has also been found insufficient to provide significantly more. MPEP 2106.05(I)(A) indicates that merely saying “apply it” or equivalent to the abstract idea cannot provide an inventive concept (“significantly more”). Accordingly, even in combination, these additional elements do not provide significantly more. As such, the claims are not patent eligible.
Claims 3-5, 9-12, 15, 16, and 20 are similarly rejected because they either further define/narrow the abstract idea and/or do not integrate the abstract idea into a practical application or provide an inventive concept, whether the claims are considered individually or as an ordered combination.
Claims 3, 4, and 15 merely describe the second type of data, the second ML model (additional element), and what is generated, which further defines the abstract idea.
Claims 5 and 16 merely describe the data that is determined, which further defines the abstract idea.
Claims 9, 10, 11, and 20 merely describe refining the report and the LLM (additional element), which further defines the abstract idea.
Claim 12 merely describes determining data, which further defines the abstract idea.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 13, 15, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wolf et al. (U.S. 2023/0385946) in view of Farri et al. (U.S. 2024/0282419) and Mahyar et al. (U.S. 10999566), referred to hereinafter as Wolf, Farri, and Mahyar, respectively.
REGARDING CLAIM 1
Wolf teaches the claimed apparatus, comprising: one or more processors configured to: [Para. 0236 teaches one or more processors.]
obtain at least a first type of data associated with a medical procedure and a second type of data associated with the medical procedure; [Para. 0033 teaches obtaining video data (first type) associated with a surgical procedure. Para. 0223 teaches accessing an audio recording (second type) captured during the surgical procedure.]
generate, using a first machine learning (ML) model, first textual descriptions based on the first type of data, wherein the first type of data includes a video recording of the medical procedure, and the first ML model is configured to extract visual features from the video recording and generate the first textual descriptions based on the extracted visual features, and wherein the first textual descriptions are associated with multiple temporal levels, each of the multiple temporal levels corresponding to a respective time period of the medical procedure; [Para. 0033 teaches obtaining video data (first type) associated with a surgical procedure. Para. 0209 teaches generating, using a Large Language Model, a textual summary based on the video footage (first type). Para. 0054 teaches temporal features are extracted from the videos.]
generate, using a second ML model, second textual descriptions based on the second type of data that is different than the first type of data, wherein the second textual descriptions are also associated with the multiple temporal levels; [Para. 0223 teaches generating, using a speech recognition algorithm (a second ML model), textual content based on the audio recording (second type). Para. 0052 teaches temporal features are extracted from the audio data. Audio data is different than video data.]
Wolf may not explicitly teach
produce a raw medical report that describes the medical procedure based at least on the first textual descriptions and the second textual descriptions, wherein the processor being configured to produce the raw medical report comprises the processor being configured to:
and refine the raw medical report based on a large language model (LLM).
However, Farri teaches the following:
produce a raw medical report that describes the medical procedure based at least on the first textual descriptions and the second textual descriptions, wherein the processor being configured to produce the raw medical report comprises the processor being configured to: [Para. 0012 teaches transforming audio data into text. Para. 0064 teaches producing a medical report based on combining the visual data (first type) and audio data (second type) according to extracted time stamps (temporal levels). Para. 0083 teaches a processor.]
refine the raw medical report based on a large language model (LLM). [Para. 0037 teaches refining the generated verbatim report using natural language processing or deep learning (interpreted as the LLM of Wolf).]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of computerized healthcare, before the effective filing date of the invention, to modify the apparatus of Wolf to produce and refine a medical report as taught by Farri, with the motivation of improving patient outcomes (see Farri at Para. 0015).
Wolf in view of Farri may not explicitly teach
establish a correspondence between one or more of the first textual descriptions and one or more of the second textual descriptions that are associated with each of the multiple temporal levels; concatenate the corresponding textual descriptions to form a per-temporal-level textual description for each of the multiple temporal levels; and
aggregate the per-temporal-level textual description for each of the multiple temporal levels into the raw medical report; and
However, Mahyar teaches the following:
establish a correspondence between one or more of the first textual descriptions and one or more of the second textual descriptions that are associated with each of the multiple temporal levels; concatenate the corresponding textual descriptions to form a per-temporal-level textual description for each of the multiple temporal levels; and [Col. 4, Lines 64-67 teaches analyzing the individual textual descriptions, and aggregating and combining (interpreted as concatenating) individual textual descriptions to form a textual description for a scene (temporal level).]
aggregate the per-temporal-level textual description for each of the multiple temporal levels into the raw medical report; and [Col. 11, Lines 25-27 teaches consolidating the descriptions to generate a textual description that is applicable to the entire video segment (all scenes/temporal levels).]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of computerized healthcare, before the effective filing date of the invention, to modify the apparatus of Wolf in view of Farri to consolidate descriptions as taught by Mahyar, with the motivation of reducing file size (see Mahyar at Col. 6, Lines 35-38).
REGARDING CLAIM 3
Wolf in view of Farri and Mahyar teaches the claimed apparatus of claim 1.
Wolf further teaches
wherein the second type of data includes an audio recording of the medical procedure, and the second ML model includes a speech recognition model configured to extract sound features from the audio recording and generate the second textual descriptions based on the extracted sound features. [Para. 0223 teaches an audio recording of the surgical procedure, and a speech recognition algorithm configured to obtain textual content.]
REGARDING CLAIM 4
Wolf in view of Farri and Mahyar teaches the claimed apparatus of claim 1.
Wolf further teaches
wherein the second type of data includes patient vital signs, patient medical records, or logs of a device used during the medical procedure, and wherein the second ML model includes an ML model configured to extract features from the patient vital signs, the patient medical records, or the logs of the device used during the medical procedure, the second ML model further configured to map the extracted features to the second textual descriptions. [Para. 0209 teaches a Large Language Model is used to generate a (second) textual summary based on a medical record related to the patient.]
REGARDING CLAIM 5
Wolf in view of Farri and Mahyar teaches the claimed apparatus of claim 1.
Farri further teaches
wherein the vision-language model is configured to determine, for each frame of the video recording, one or more region-wise tokens each indicative of a person or object detected in a corresponding region, [Para. 0140 teaches the video frames of the medical examination are passed as input into a video encoder that represents the pixels with vectors signifying the objects and persons in the clips. Para. 0141 teaches an input embedding refers to a vector representation of a token. Para. 0142 teaches positional embeddings, e.g., a vector representation of the absolute and/or relative position of a token in a sequence of tokens. The positional embeddings are combined with the input embeddings.]
and wherein, for each frame of the video recording, the vision-language model is further configured to determine a caption that describes the frame. [Para. 0140 teaches a vector representation of the associated caption for each video clip (e.g., including the video frames).]
REGARDING CLAIM 13
Claim 13 is analogous to Claim 1, thus Claim 13 is similarly analyzed and rejected in a manner consistent with the rejection of Claim 1.
REGARDING CLAIM 15
Wolf in view of Farri and Mahyar teaches the claimed method of claim 13.
Wolf further teaches
wherein the second type of data includes an audio recording of the medical procedure, patient vital signs, patient medical records, or logs of a device used during the medical procedure, and wherein the second ML model includes an ML model configured to extract features from audio recording, the patient vital signs, the patient medical records, or the logs of the device used during the medical procedure, the second ML model further configured to map the extracted features to the second textual descriptions. [Para. 0223 teaches an audio recording of the surgical procedure, and a speech recognition algorithm configured to obtain textual content. Para. 0209 teaches a Large Language Model is used to generate a (second) textual summary based on a medical record related to the patient.]
REGARDING CLAIM 16
Claim 16 is analogous to Claim 5, thus Claim 16 is similarly analyzed and rejected in a manner consistent with the rejection of Claim 5.
REGARDING CLAIM 20
Wolf in view of Farri and Mahyar teaches the claimed method of claim 13.
Farri further teaches
wherein the LLM is configured to refine the raw medical report based on a predefined report structure or predefined report language. [Para. 0071 teaches converting the generated verbatim report into a medical examination report based on a trained neural network and/or deep learning.]
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Wolf in view of Farri, Mahyar, and Schalkwyk et al. (U.S. 2023/0281248), referred to hereinafter as Schalkwyk.
REGARDING CLAIM 9
Wolf in view of Farri and Mahyar teaches the claimed apparatus of claim 1.
Farri further teaches
[…], the LLM configured to refine the raw medical report based on a predefined report structure or predefined report language. [Para. 0071 teaches converting the generated verbatim report into a medical examination report based on a trained neural network and/or deep learning.]
Wolf in view of Farri and Mahyar may not explicitly teach
wherein the LLM utilizes a transformer architecture and has over one billion parameters…
However, Schalkwyk teaches the following:
wherein the LLM utilizes a transformer architecture and has over one billion parameters… [Para. 0048 teaches the pre-trained large language model is based on a Transformer model. The pre-trained neural network model includes over one-billion parameters.]
The Examiner notes that there is no indication in the claim as to what the parameters are, or even whether they are different from one another, indicating that the claim limitation is likely obvious in view of Wolf; however, Schalkwyk has been cited for completeness.
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of computerized healthcare, before the effective filing date of the invention, to modify the apparatus of Wolf in view of Farri and Mahyar to utilize a transformer model and include over one-billion parameters as taught by Schalkwyk, with the motivation of generalizing better and achieving significantly better performance (see Schalkwyk at Para. 0048).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wolf in view of Farri, Mahyar, and Vianu et al. (U.S. 2020/0334416), referred to hereinafter as Vianu.
REGARDING CLAIM 10
Wolf in view of Farri and Mahyar teaches the claimed apparatus of claim 1.
Wolf in view of Farri and Mahyar may not explicitly teach
wherein the LLM is pre-trained to detect abnormalities in the raw medical report, and wherein the one or more processors being configured to refine the raw medical report based on the LLM comprises the one or more processors being configured to provide an indication of the abnormalities detected in the raw medical report.
However, Vianu teaches the following:
wherein the LLM is pre-trained to detect abnormalities in the raw medical report, and wherein the one or more processors being configured to refine the raw medical report based on the LLM comprises the one or more processors being configured to provide an indication of the abnormalities detected in the raw medical report. [Para. 0016 teaches detecting errors in the report text. Para. 0058 teaches indicating errors detected in the report.]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of computerized healthcare, before the effective filing date of the invention, to modify the apparatus of Wolf in view of Farri and Mahyar to detect and indicate abnormalities in the medical report as taught by Vianu, with the motivation of improving the ability to catch misspellings and other errors (see Vianu at Para. 0159).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Wolf in view of Farri, Mahyar, and Douglas et al. (U.S. 2020/0126678), referred to hereinafter as Douglas.
REGARDING CLAIM 11
Wolf in view of Farri and Mahyar teaches the claimed apparatus of claim 1.
Wolf in view of Farri and Mahyar may not explicitly teach
wherein the LLM is pre-trained to replace a medical terminology included in the raw medical report with descriptive texts, and wherein the one or more processors being configured to refine the raw medical report based on the LLM comprises the one or more processors being configured to replace the medical terminology with the descriptive texts.
However, Douglas teaches the following:
wherein the LLM is pre-trained to replace a medical terminology included in the raw medical report with descriptive texts, and wherein the one or more processors being configured to refine the raw medical report based on the LLM comprises the one or more processors being configured to replace the medical terminology with the descriptive texts. [Para. 0065 teaches generating a list of (medical) terminology for information fields which require secondary descriptive (texts) terminology. The secondary descriptive terminology is entered into the medical report.]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of computerized healthcare, before the effective filing date of the invention, to modify the apparatus of Wolf in view of Farri and Mahyar to replace medical terminology with descriptive text as taught by Douglas, with the motivation of improving patient care (see Douglas at Para. 0004).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Wolf in view of Farri, Mahyar, and Netravali et al. (U.S. 2020/0126678), referred to hereinafter as Netravali.
REGARDING CLAIM 12
Wolf in view of Farri and Mahyar teaches the claimed apparatus of claim 1.
Wolf in view of Farri and Mahyar may not explicitly teach
wherein the LLM is pre-trained to determine, based on the first type of data or the second type of data, standard operations associated with the medical procedure and actual operations being performed in the medical procedure, and wherein the one or more processors are further configured to detect inconsistency between the actual operations and the standard operations, and provide an indication of the inconsistency.
However, Netravali teaches the following:
wherein the LLM is pre-trained to determine, based on the first type of data or the second type of data, standard operations associated with the medical procedure and actual operations being performed in the medical procedure, and wherein the one or more processors are further configured to detect inconsistency between the actual operations and the standard operations, and provide an indication of the inconsistency. [Para. 0177 teaches using a machine learning model trained using examples comprising sample inputs of image and/or video frames (first type of data) of a sample surgical procedure and labels. Para. 0143 teaches comparing actual activity to the surgical plan (standard operations), and depicting any deviations that occurred.]
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of computerized healthcare, before the effective filing date of the invention, to modify the apparatus of Wolf in view of Farri and Mahyar to detect and provide an indication of inconsistency between the actual operations and the standard operations as taught by Netravali, with the motivation of improving the chance of successful clinical outcomes and lessening the economic burden (see Netravali at Para. 0128).
Response to Arguments
Objections to the Drawings
Regarding the objection to Figure 2A, the Applicant has provided a Replacement Sheet such that the objection is no longer required. The objection has been withdrawn.
Regarding the objection to Figure 4, the provided Replacement Sheet is insufficient. See the objection to the drawings above.
Rejection under 35 U.S.C. § 101
Regarding the rejection of Claims 1, 3-5, 9-13, 15, 16, and 20, the Examiner has considered the Applicant’s arguments; however, the arguments are not persuasive. Applicant argues:
…claim 1 is directed toward automatic report generation, which is acknowledged by the Office Action, and the claim never recites any person-machine interaction during the report generation process.
Regarding (a), the Examiner respectfully disagrees. MPEP 2106.04(a)(2)(II) states that a claimed invention is directed to certain methods of organizing human activity if the identified claim elements contain limitations that encompass fundamental economic principles or practices, commercial or legal interactions, or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The Examiner submits that the identified claim elements represent a series of rules or instructions that a person or persons, with the aid of a computer, would follow to generate a medical report. The Examiner notes that Applicant’s Background describes the generation of medical reports (see Spec. Para. 0002) as a human task. Applicant has not pointed to anything in the claims that falls outside of this characterization. Because the claim elements fall under a series of rules or instructions that a person or persons would follow to generate a medical report, the claimed invention is directed to an abstract idea.
Further, multiple CAFC decisions that the Office has characterized as certain methods of organizing human activity did not actively recite a person or persons performing the steps of the claims (see, e.g., EPG, TLI Communications, Ultramercial). Because actively reciting a human performer is not a prerequisite for a claim to encompass certain methods of organizing human activity, this argument is not persuasive.
Finally, that the claim purportedly performs the steps “automatically” (the word “automatically” not appearing in the claims at all) does not remove it from being directed to certain methods of organizing human activity. Humans perform actions automatically all the time; when you get in your car to go somewhere, you automatically turn the engine on. Even assuming, arguendo, that this were not the case (which it is), performing the steps “automatically” is a consequence of confining the abstract idea to a computer.
…the features of claim 1… provide a significant improvement to existing technologies... These concrete steps improve the process of automatic medical report generation by utilizing ML models configured to process and correlate multimodal data (e.g., from heterogeneous data sources), thus not only reducing the manual efforts traditionally involved in the process, but also enhancing the diversity of contents in the medical report.
Regarding (b), the Examiner respectfully disagrees. The Examiner notes that the Applicant concedes within this argument that the process of medical report generation is a manual effort, and therefore a human activity. MPEP 2106.04(d)(1) states that “the word ‘improvements’ in the context of this consideration is limited to improvements to the functioning of a computer or any other technology/technical field, whether in Step 2A Prong Two or in Step 2B.” Here, there is no improvement to the computer, nor is there an improvement to another technology. Because neither type of improvement is present in the claims, an improvement to technology is not present and there is no practical application.
Applicant’s argument that the field of medical report generation is a technology and the claimed invention improves this field is not reflected in the claimed invention. The entire field of medical report generation is not reasonably understood to be a problem arising in technology, as it is instead a problem arising in healthcare. The claimed invention is using a computer as a tool, and any improvement present is an improvement to the abstract idea of, to paraphrase, generating medical reports. Finally, were Applicant’s line of reasoning correct, the invention in Alice Corp. would have been subject matter eligible because it was an improvement to the technology of settlement risk mitigation.
Rejections under 35 U.S.C. § 103
Regarding the rejection of Claims 1, 3-5, 9-13, 15, 16, and 20, the Examiner has considered Applicant’s arguments; however, the arguments are moot given the new grounds of rejection as necessitated by amendment. The Examiner notes that the features added to Claim 1 are of a different scope than the now-cancelled claims.
Conclusion
Prior art made of record but not relied upon in the present basis of rejection is noted in the attached PTO-892 and includes:
Achara et al. (U.S. 2025/0166747) which discloses a system for automated clinical trial matching.
Wolf et al. (U.S. 10886015) which discloses a system for providing decision support to a surgeon.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAMRYN B LEWIS whose telephone number is (703)756-1807. The examiner can normally be reached Monday - Friday, 11:00 am - 8:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan, can be reached on 571-272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAMRYN B LEWIS/
Examiner, Art Unit 3683
/JASON S TIEDEMAN/Primary Examiner, Art Unit 3683