DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This communication is in response to the Amendment filed on January 22, 2026, in which claims 1, 11 and 20 have been amended. Accordingly, claims 1-20 remain for examination.
Status of Claims
3. Claims 1-20 are pending, of which claims 1-5, 8-15 and 18-20 are rejected under 35 U.S.C. 103.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
7. Claims 1, 2, 8-12 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over SIMARIA et al. (United States Patent Application Publication No. US 2025/0077551 A1), hereinafter “SIMARIA” in view of Das et al. (United States Patent Application Publication No. US 2022/0309250 A1), hereinafter “Das”.
As to claim 11, SIMARIA discloses an apparatus, comprising (environment 200 (See FIG. 2) includes large language model (LLM) system 201 executing within cloud computing system 202, and having computing hardware 203) (SIMARIA, FIG. 2, paragraphs [0089] and [0091]): one or more network interfaces (including networking components 209 (e.g., communication components)) (SIMARIA, FIG. 2, paragraph [0091]); a processor coupled to the one or more network interfaces and configured to execute one or more processes (processors 207 for executing processes) (SIMARIA, FIG. 2, paragraph [0091]); and a memory configured to store a process that is executable by the processor (memory 208 for storing process) (SIMARIA, FIG. 2, paragraph [0091]), the process when executed configured to: receive an output of a language model-based troubleshooting agent to perform an intermediate step of a troubleshooting task with respect to a computer network (particularly executing the process of FIG. 5, one or more process blocks of which may be performed by LLM system (e.g., the large language model system 201). More particularly, process 500 includes providing, to a large language model and via an interface, a request to troubleshoot a network device, and receiving, from the large language model and based on the request, a response identifying one or more issues associated with the network device. That is, a user of the network device may wish to utilize the large language model to request troubleshooting of the network device. In such implementations, the network device may generate a request to troubleshoot the network device, and may provide, to an LLM system and via an API, the request to troubleshoot the network device. The LLM system may receive the request to troubleshoot the network device, and may utilize the large language model to generate, based on the request, instructions that cause the network device to correct one or more issues associated with the network device. 
The LLM system may provide, to the network device, the instructions that cause the network device to correct the one or more issues associated with the network device, and the network device may receive (e.g., via the API) the instructions that cause the network device to correct the one or more issues associated with the network device) (SIMARIA, FIG. 5, paragraphs [0081]-[0082], [0115] and [0122]). SIMARIA does not explicitly disclose to determine a level of quality of the output indicative of how well the output is expected to perform the intermediate step; generate an instruction for the language model-based troubleshooting agent, when the level of quality of the output is below a threshold; and request that the language model-based troubleshooting agent perform the task using the instruction. However, in an analogous art, Das discloses to determine a level of quality of an output indicative of how well the output is expected to perform an intermediate step (wherein a troubleshooting chatbot outputs numerous suggestions/responses to a user during the course of a troubleshooting workflow. In particular, at block 560 (See FIG. 5), the chatbot may initiate an automated, interactive, troubleshooting conversational dialog with the user. During the course of the troubleshooting workflow, a determination is made regarding the quality of the responses (even though Das does not explicitly use the language “quality”), after which it is determined to enlist/escalate to a live human agent) (Das, FIG. 5, paragraphs [0065]-[0066]); generate an instruction for a language model-based troubleshooting agent, when the level of quality of the output is below a threshold (wherein again, if the quality of the workflow is below a certain level (i.e., if the responses are not resolving the customer’s issue), the product support case is escalated to the live human agent (See in particular, FIG. 5, block 550)) (Das, FIG. 5, paragraph [0066]); and request that the language model-based troubleshooting agent perform the task using the instruction (again, the chatbot is instructed to escalate to the human agent) (Das, paragraph [0066]).
SIMARIA is analogous art because SIMARIA is from the same field of endeavor, namely, troubleshooting network devices using large language models (LLMs) (See SIMARIA, paragraph [0081]), while Das is analogous art because Das is reasonably pertinent to the particular problem with which the inventor was concerned, as Das is directed to natural language-based troubleshooting chatbots and their classification models (See Das, paragraphs [0001] and [0011]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of SIMARIA and Das before him or her, to modify the large language model system 201 of SIMARIA to include the additional limitations of to determine a level of quality of the output indicative of how well the output is expected to perform the intermediate step; generate an instruction for the language model-based agent, when the level of quality of the output is below a threshold; and request that the language model-based agent perform the task using the instruction, as disclosed in Das, with reasonable expectation that this would result in a large language model system 201 having the added benefit of reliably knowing when to escalate a troubleshooting issue to a live human agent, thereby ensuring that the concern of the user is properly addressed, while also enabling the underlying classification models on which the troubleshooting agent relies to additionally learn from the escalated cases (See Das, paragraphs [0017] and [0028]). This method of improving the large language model system 201 of SIMARIA was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Das. 
Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of SIMARIA with Das to obtain the invention as specified in claim 11.
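For purposes of illustration only, the threshold-based escalation mapped above from Das may be sketched as follows. Neither SIMARIA nor Das discloses source code; every identifier, the scoring heuristic, and the numeric threshold below are hypothetical stand-ins for the disclosed concepts (a quality determination over an agent output, and generation of an escalation instruction when that quality falls below a threshold):

```python
# Hypothetical sketch of the quality-threshold escalation described in Das.
# The scoring heuristic and threshold value are assumptions, not disclosed values.

QUALITY_THRESHOLD = 0.5  # assumed; Das gives no numeric value


def assess_quality(agent_output: str) -> float:
    """Stand-in for a quality measure over the agent's output.
    Trivial heuristic for illustration: non-empty, longer outputs score higher."""
    if not agent_output.strip():
        return 0.0
    return min(1.0, len(agent_output.split()) / 20.0)


def review_step(agent_output: str) -> str:
    """Return 'proceed' when quality is adequate; otherwise return an
    instruction for the agent (here, escalation to a live human agent)."""
    if assess_quality(agent_output) < QUALITY_THRESHOLD:
        return "escalate: hand off this troubleshooting case to a live human agent"
    return "proceed"
```

The sketch only illustrates the control flow at issue in claim 11: a quality determination on an intermediate output, followed by a conditional instruction when the quality is below a threshold.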
Claim 20 is directed to a “tangible, non-transitory, computer-readable medium storing program instructions that cause a device to execute a process” substantially as recited in “apparatus” claim 11 above, and does not appear to contain any additional features with regard to novelty and/or nonobviousness; therefore, as SIMARIA-Das discloses such a “tangible, non-transitory, computer-readable medium” (non-transitory computer-readable medium that stores a set of instructions) (SIMARIA, paragraphs [0004] and [0101]), claim 20 is rejected under the same rationale.
In addition, “method” claim 1 recites limitations substantially as recited in “apparatus” claim 11, and does not appear to contain any additional features with regard to novelty and/or nonobviousness; therefore, it is rejected under the same rationale.
Regarding claim 12, SIMARIA-Das discloses the apparatus as in claim 11, wherein the language model-based troubleshooting agent uses a large language model to generate the output (again, using large language models (LLMs)) (SIMARIA, paragraphs [0011]-[0012]). The motivation set forth with respect to claim 11 applies equally to claim 12.
Regarding claim 18, SIMARIA-Das discloses the apparatus as in claim 11, wherein the apparatus determines the level of quality of the output by: determining a confidence measure regarding whether the output will complete the task (wherein training the intermediate classification models includes determining whether the intermediate classification model’s confidence in its predicted labels is less than a desired probability threshold) (Das, paragraph [0057]). As discussed and shown above, SIMARIA is analogous art because SIMARIA is from the same field of endeavor, namely, troubleshooting network devices using large language models (LLMs) (See SIMARIA, paragraph [0081]), while Das is analogous art because Das is reasonably pertinent to the particular problem with which the inventor was concerned, as Das is directed to natural language-based troubleshooting chatbots and their classification models (See Das, paragraphs [0001] and [0011]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of SIMARIA and Das before him or her, to modify the large language model system 201 of SIMARIA to include the additional limitation of wherein the apparatus determines the level of quality of the output by: determining a confidence measure regarding whether the output will complete the task, as disclosed in Das, with reasonable expectation that this would result in a large language model system 201 having the added benefit of efficiently training the classification models used by the troubleshooting chatbot/agent to a desired degree of accuracy (See Das, paragraphs [0057] and [0028]). This method of improving the large language model system 201 of SIMARIA was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Das. 
Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of SIMARIA with Das to obtain the invention as specified in claim 18.
Regarding claim 19, SIMARIA-Das discloses the apparatus as in claim 11, wherein the process when executed is further configured to: provide the output of the language model-based troubleshooting agent and the instruction to a user interface for review by a user (wherein historical records are reviewed by subject matter experts (SMEs)) (Das, paragraphs [0047] and [0058]). Again, SIMARIA is analogous art because SIMARIA is from the same field of endeavor, namely, troubleshooting network devices using large language models (LLMs) (See SIMARIA, paragraph [0081]), while Das is analogous art because Das is reasonably pertinent to the particular problem with which the inventor was concerned, as Das is directed to natural language-based troubleshooting chatbots and their classification models (See Das, paragraphs [0001] and [0011]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of SIMARIA and Das before him or her, to modify the large language model system 201 of SIMARIA to include the additional limitation of providing the output of the language model-based troubleshooting agent and the instruction to a user interface for review by a user, as disclosed in Das, with reasonable expectation that this would result in a large language model system 201 with the added benefit of enabling SMEs to contribute their input and expertise to the models (See Das, paragraph [0047]). This method of improving the large language model system 201 of SIMARIA was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Das. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of SIMARIA with Das to obtain the invention as specified in claim 19.
“Method” claims 2, 8 and 9 recite limitations substantially as recited in “apparatus” claims 12, 18 and 19, respectively, and do not appear to contain any additional features with regard to novelty and/or nonobviousness; therefore, they are rejected under the same rationale.
As to claim 10, SIMARIA-Das discloses the method as in claim 9, further comprising: providing, by the device, an outcome of the task to the user interface for review by the user (wherein again, historical records are reviewed by subject matter experts (SMEs)) (Das, paragraphs [0047] and [0058]). Again, SIMARIA is analogous art because SIMARIA is from the same field of endeavor, namely, troubleshooting network devices using large language models (LLMs) (See SIMARIA, paragraph [0081]), while Das is analogous art because Das is reasonably pertinent to the particular problem with which the inventor was concerned, as Das is directed to natural language-based troubleshooting chatbots and their classification models (See Das, paragraphs [0001] and [0011]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of SIMARIA and Das before him or her, to modify the large language model system 201 of SIMARIA to include the additional limitation of providing, by the device, an outcome of the task to the user interface for review by the user, as disclosed in Das, with reasonable expectation that this would result in a large language model system 201 with the added benefit of enabling SMEs to contribute their input and expertise to the models (See Das, paragraph [0047]). This method of improving the large language model system 201 of SIMARIA was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Das. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of SIMARIA with Das to obtain the invention as specified in claim 10.
8. Claims 3, 4, 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over SIMARIA-Das, and further in view of Sundaram et al. (United States Patent Application Publication No. US 2025/0217224 A1), hereinafter “Sundaram”.
Regarding claim 13, SIMARIA-Das discloses the apparatus as in claim 11, but does not expressly disclose wherein the output comprises code for execution to perform the task, and wherein the instruction comprises alternate code for execution. In an analogous art, however, Sundaram discloses wherein an output comprises code for execution to perform a task, and wherein an instruction comprises alternate code for execution (wherein Sundaram discloses providing for systems and techniques that leverage capabilities of language models (LMs), including large language models (LLMs), to provide real-time guidance, instructions, and recommendations to non-expert users for complex systems installation (including assembly), troubleshooting, and/or maintenance. In particular, a user performing one of such (or similar) operations can describe a problem to an LM in a natural language, e.g., “an ethernet card needs a replacement update” and the LM can respond with instructions how to replace the card and run a diagnostic (e.g., hardware, software, and/or firmware) tool, which can be a part of a system’s installation and/or maintenance kit, a built-in tool, and/or the like. The diagnostic tool can perform testing of the system, determine whether the new card is working properly by generating a success code or one or more error codes indicative of a malfunction of the system, e.g., “error 2576 - driver conflict” or simply “error 2576.” Sundaram teaches that the codes can be provided to the user, e.g., via a suitable user interface such as a general computer display or a dedicated diagnostic tool display. More particularly, in instances when one or more error codes are outputted by the diagnostic tool, Sundaram teaches that the user may enter the displayed codes as yet another prompt into the LM, e.g., “system is showing error 2576”. 
The LM may process the prompt and generate instructions for the user, e.g., “download and run the latest driver update from the system’s support center.” After the user fulfills the instructions, the user may re-run the diagnostic tool, which may identify any remaining system malfunctions and output additional error codes. Sundaram teaches that the user may use such additional error codes in further prompts into the LM to receive further instructions in plain natural language that would be understood by a non-expert. The process may conclude when the diagnostic tool outputs a success code or any other indication that the system is malfunction-free) (Sundaram, paragraph [0015]). SIMARIA-Das and Sundaram are analogous art because they are from the same field of endeavor, namely, troubleshooting network devices using large language models (LLMs). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of SIMARIA-Das and Sundaram before him or her, to modify the large language model system 201 of SIMARIA-Das to include the additional limitation of wherein the output comprises code for execution to perform the task, and wherein the instruction comprises alternate code for execution, as disclosed in Sundaram, with reasonable expectation that this would result in a large language model system 201 having the added benefit of leveraging the capabilities of language models (LMs), including LLMs, to provide real-time guidance, instructions, and recommendations to non-expert users for complex systems installation (including assembly), troubleshooting, and/or maintenance, particularly by using a recursive/iterative approach, in a way that provided the most accurate, precise, and/or helpful answers to queries or prompts to the LM, without the users having to wait for expert help (See Sundaram, paragraphs [0015] and [0017]). 
This method of improving the large language model system 201 of SIMARIA-Das was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Sundaram. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of SIMARIA-Das with Sundaram to obtain the invention as specified in claim 13.
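For purposes of illustration only, the iterative diagnose-and-correct loop described in Sundaram above (diagnostic tool emits error codes, the LM turns each code into an instruction, and the loop repeats until a success indication) may be sketched as follows. Sundaram discloses no source code; every identifier below is a hypothetical stand-in:

```python
# Hypothetical sketch of Sundaram's iterative troubleshooting loop.
# 'state' models the system under test: error code -> resolved flag.

def run_diagnostics(state: dict) -> list:
    """Stand-in for the diagnostic tool: return the unresolved error codes."""
    return [code for code, resolved in state.items() if not resolved]


def lm_instruction(error_code: str) -> str:
    """Stand-in for the LM converting an error code into a plain-language instruction."""
    return f"apply fix for {error_code}"


def troubleshoot(state: dict, max_rounds: int = 5) -> list:
    """Loop: run diagnostics, feed each error code back as a prompt,
    apply the returned instruction, and stop on a success indication."""
    transcript = []
    for _ in range(max_rounds):
        errors = run_diagnostics(state)
        if not errors:
            transcript.append("success")
            break
        for code in errors:
            transcript.append(lm_instruction(code))
            state[code] = True  # assume the user's fix resolves the code
    return transcript
```

The sketch captures only the recursive/iterative structure the rejection relies on, in which each round's error codes become the next round's prompts.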
As to claim 14, SIMARIA-Das-Sundaram discloses the apparatus as in claim 13, wherein the alternate code when executed retrieves telemetry from networking equipment in the computer network (wherein the diagnostic tool 160 may include hardware diagnostics 162 capable of testing one, multiple, or all hardware components of target system 180. For example, hardware diagnostics 162 may use any number of sensors capable of measuring any relevant environmental conditions (e.g., temperature, pressure, etc.) and any number of metrics associated with performance of target system 180, such as speed and accuracy of operations of target system 180, network (e.g., Ethernet and/or wireless network) bandwidth, processor speed/utilization, available disk space, energy usage, memory/disk health status, signal strength, signal range, signal quality, response delays, fan speeds, voltages, load and clock speeds, and/or the like. In addition, software diagnostics 164 may use any number of test programs/scripts sensors capable of measuring effectiveness of software execution on testing system 180, e.g., time in queue, network throughput, bit error rate, latency, memory usage, processor speed/utilization, and/or the like) (Sundaram, paragraph [0020]). As discussed and shown above, SIMARIA-Das and Sundaram are analogous art because they are from the same field of endeavor, namely, troubleshooting network devices using large language models (LLMs). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of SIMARIA-Das and Sundaram before him or her, to modify the large language model system 201 of SIMARIA-Das to include the additional limitation of wherein the alternate code when executed retrieves telemetry from networking equipment in the computer network, as disclosed in Sundaram, with reasonable expectation that this would result in a large language model system 201 having the added benefit of the ability to diagnose a multitude of different issues, by incorporating a vast amount of different metrics and telemetry data (See Sundaram, paragraph [0020]). This method of improving the large language model system 201 of SIMARIA-Das was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Sundaram. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of SIMARIA-Das with Sundaram to obtain the invention as specified in claim 14.
“Method” claims 3 and 4 recite limitations substantially as recited in “apparatus” claims 13 and 14, respectively, and do not appear to contain any additional features with regard to novelty and/or nonobviousness; therefore, they are rejected under the same rationale.
9. Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over SIMARIA-Das, and further in view of TONG et al. (United States Patent Application Publication No. US 2024/0419950 A1), hereinafter “TONG”.
Regarding claim 15, SIMARIA-Das discloses the apparatus as in claim 11, but does not explicitly disclose wherein the apparatus uses a different language model than that of the language model-based troubleshooting agent to determine the level of quality of the output. In an analogous art, however, TONG discloses wherein an apparatus uses a different language model than that of a language model-based agent to determine a level of quality of an output (wherein TONG discloses that an AI platform 102 (See FIG. 1A) includes a hallucination detector 126 configured to check the outputs generated by one or more machine learning models (e.g., included in ML agents 120 and/or experts 122) for consistency (e.g., with an input) and truth to mitigate hallucination issues sometimes associated with LLMs. TONG teaches that hallucination detection (e.g., truth checking) may be done by means of dynamic collaborative consensus, in which a group of LLMs (for instance, a group of ML agents 120 using their respective LLMs) deliberate on a given question and answer set (e.g., an input received by a user and/or ML agent or expert and an output generated by an LLM of an ML agent or expert), using a combination of inherent pretrained knowledge and external data sources to determine the consistency and accuracy of the output. An output may be determined to be consistent and/or accurate if consensus is reached. In some examples, consensus may be reached when a sufficient number of LLMs of the hallucination detector 126 agree that the response is consistent and accurate. In some examples, all LLMs of the hallucination detector 126 must agree to reach consensus. External data sources used during the collaborative consensus protocol may include, but are not limited to, digital systems as well as human induced feedback) (TONG, paragraph [0073]). 
SIMARIA-Das is analogous art because SIMARIA-Das is from the same field of endeavor, namely, troubleshooting network devices using large language models (LLMs) (See SIMARIA, paragraph [0081]), while TONG is analogous art, because TONG is reasonably pertinent to the particular problem with which the inventor was concerned, as TONG is directed to detecting and reducing hallucinated data returned from LLMs (See TONG, paragraphs [0005], [0061] and [0062]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of SIMARIA-Das and TONG before him or her, to modify the large language model system 201 of SIMARIA-Das to include the additional limitation of wherein an apparatus uses a different language model than that of a language model-based agent to determine a level of quality of an output, as disclosed in TONG, with reasonable expectation that this would result in a large language model system 201 having the added benefit of utilizing a dynamic collaborative consensus protocol, in which a group of LLMs (for instance, a group of ML agents 120 using their respective LLMs) deliberated on a given question and answer set (e.g., an input received by a user and/or ML agent or expert and an output generated by an LLM of an ML agent or expert), using a combination of inherent pretrained knowledge and external data sources to determine the consistency and accuracy of the output, and to return more reliable troubleshooting instructions that were free of hallucinated data, thereby ensuring the integrity of the data generated by the LLMs and the overall reliability of the integration process (See TONG, paragraphs [0061] and [0073]). This method of improving the large language model system 201 of SIMARIA-Das was well within the ordinary ability of one of ordinary skill in the art based on the teachings of TONG. 
Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of SIMARIA-Das with TONG to obtain the invention as specified in claim 15. Examiner notes that the ML agents 120 of TONG are not expressly disclosed as troubleshooting agents. However, the proposed combination would provide for the LLM troubleshooting system (large language model system 201) of SIMARIA-Das to use a combination of inherent pretrained knowledge and external data sources to determine the consistency and accuracy of the output, thus improving the integrity of the data generated by the LLMs and the overall reliability of the integration process by providing more accurate, reliable data that is hallucination-free. That is, SIMARIA-Das does not explicitly disclose a large language model system 201 that uses a combination of inherent pretrained knowledge and external data sources to determine the consistency and accuracy of the output. TONG, however, teaches using a combination of inherent pretrained knowledge and external data sources to determine the consistency and accuracy of the output, but does not use LLM modules to troubleshoot network devices. All of the component parts are known in SIMARIA-Das and TONG. The only difference is the combination of the “old elements” into a single system by incorporating the feature of a consensus protocol into hallucination detection and mitigation. 
Thus, it would have been obvious to one having ordinary skill in the art to include the limitation of wherein an apparatus uses a different language model than that of a language model-based agent to determine a level of quality of an output, taught by TONG, into the large language model system 201 as shown in SIMARIA-Das, since the operation of the ML agents 120 of TONG is in no way dependent on the operation of the LLM system 201 of SIMARIA-Das (as both provide queries to a large language model to produce prompts and responses thereto), and an ML agent 120 such as that disclosed by TONG could be used in combination with a standard troubleshooting LLM system 201 to achieve the predictable results of providing accurate, hallucination-free data returned from large language models. KSR Int’l Co. v. Teleflex Inc., 127 S. Ct. 1727, 1740-41, 82 USPQ2d 1385, 1396 (2007).
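For purposes of illustration only, the collaborative-consensus check attributed to TONG above may be sketched as follows. TONG discloses no source code; the judge verdicts and both consensus variants below are hypothetical stand-ins for the disclosed "sufficient number" and "all must agree" examples:

```python
# Hypothetical sketch of TONG's collaborative-consensus hallucination check.
# Each boolean verdict stands in for one judge LLM's assessment that the
# response under review is consistent and accurate.

def consensus_check(verdicts, unanimous=False):
    """Return True when the group of judge LLMs reaches consensus.

    unanimous=False: a simple majority stands in for TONG's
    'sufficient number' variant (an assumed choice of threshold).
    unanimous=True: all judges must agree, per TONG's other example.
    """
    if not verdicts:
        return False  # no judges, no consensus
    if unanimous:
        return all(verdicts)
    return sum(verdicts) > len(verdicts) / 2
```

The sketch illustrates only the decision rule at issue in claim 15: language models other than the generating agent's model assess the quality of the agent's output.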
“Method” claim 5 recites limitations substantially as recited in “apparatus” claim 15, and does not appear to contain any additional features with regard to novelty and/or nonobviousness; therefore, it is rejected under the same rationale.
Allowable Subject Matter
10. Claims 6, 7, 16 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
11. Applicant’s arguments with respect to claims 1, 11 and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
12. Further references of interest are cited on Form PTO-892, which is an attachment to this Office Action. For instance, Reichl (USPGPUB 2023/0417439) discloses a method for troubleshooting a building automation system that includes selecting an artificial intelligence tool from a set of available artificial intelligence tools based on the geographic region of the building automation system, ranking troubleshooting options for the building automation system by applying the artificial intelligence tool to data associated with the building automation system, and implementing at least a first troubleshooting option, the first troubleshooting option ranked higher than a remainder of the troubleshooting options by the artificial intelligence tool (See Abstract). Karapantelakis (USPGPUB 2022/0172054) discloses an intermediate network node configured to operate in a communication network. The communication network comprises a requesting node and an executing network node comprising a computational graph model. The intermediate network node is configured with an imitation model. The imitation model is a limited version of the computational graph model, and the imitation model is a model requiring less computational resources to converge when compared to the computational graph model (See Abstract). Qian (USPGPUB 2024/0163313) discloses concepts and techniques directed to software-defined wide-area network (“SD-WAN”) self-service for service assurance. The proposed SD-WAN self-service solution can be used for any policy-driven system that automatically troubleshoots the problems resulting from hybrid SD-WAN network activities, including virtual private network (“VPN”), IP tunnel, IPSec, and security policies. According to one aspect, a method can check network configurations, analyze switch responses, and locate network problems quickly. 
Moreover, the method can test the functionality of a rules-based troubleshooting software effectively without employing expensive testing equipment and with minimal human intervention (See Abstract).
13. Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KOSTAS J. KATSIKIS whose telephone number is (571)270-5434. The examiner can normally be reached Monday-Friday, 9:00am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian J. Gillis can be reached at 571-272-7952. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KOSTAS J KATSIKIS/Primary Examiner, Art Unit 2441