DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicant
This communication is in response to the Request for Continued Examination (RCE) filed 3/16/26. Claims 1, 4, 11, 14, 21, and 24 have been amended. Claims 1-30 are pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/16/26 has been entered.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-30 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-9, 11-19, 21-24, and 26-29 of copending Application No. 17/874,693 in view of Woo et al. (US 2019/0066680 A1) and further in view of Wright et al. (US 2018/0165723 A1). For example, note that the limitations of claim 1 of this application are substantially similar to the limitations of claim 1 of 17/874,693.
Claim 1 of ‘693 lacks the limitations of “wherein processing at least a portion of the diction to identify the task to be performed within the medical management system includes: processing at least a portion of the diction to identify one or more task-indicative trigger words; and processing at least a portion of the diction to identify one or more task-indicative conversational structures.” However, Woo discloses wherein processing at least a portion of the speech to identify the task to be performed within the management system includes: processing at least a portion of the speech to identify one or more task-indicative trigger words (para. 66, 69, and 71 of Woo); and processing at least a portion of the speech to identify one or more task-indicative conversational structures (para. 70, 71, 122, and 159 of Woo). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the aforementioned features in claim 1 of ‘693 in order to improve the user's convenience by continuously providing a voice recognition service without the need to continuously make a voice command (para. 11 of Woo). Claim 1 of ‘693 also lacks the limitations of “wherein the one or more task-indicative trigger words and the one or more task-indicative conversational structures are generated using an artificial intelligence (AI) process that processes initial seed data associated with task-indicative trigger words and task-indicative conversational structures.” However, Wright discloses wherein the one or more task-indicative trigger words and the one or more task-indicative conversational structures are generated using an artificial intelligence (AI) process that processes initial seed data associated with task-indicative trigger words and task-indicative conversational structures (para. 57 of Wright).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the aforementioned features in claim 1 of ‘693 in order to learn by examining patterns (para. 22 of Wright).
This is a provisional nonstatutory double patenting rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-9, 11, 13-19, 21, and 23-29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gallopyn et al. (US 2014/0249830 A1) in view of Trehan (US 2021/0334473 A1), in view of Woo et al. (US 2019/0066680 A1), and further in view of Wright et al. (US 2018/0165723 A1).
(A) Referring to claim 1, Gallopyn discloses a computer-implemented method, executed on a computing device, comprising (para. 25 & 26 of Gallopyn):
monitoring speech of a medical specialist using a virtual assistant (para. 38-40 & 43 of Gallopyn; virtual medical assistant application 110 may receive input (e.g., free-form instruction from a medical professional) in the form of speech, and ASR component 130 may be utilized to identify the content of the speech (e.g., recognize the constituent words in the speech input). In particular, a medical professional may speak free-form instruction to the host device (e.g., mobile device 101) and/or an interface component capable of accessing, directly or indirectly, the host device.);
processing at least a portion of the speech to identify a task to be performed within a medical management system (para. 47-50 of Gallopyn; A knowledge representation model utilized by NLU/CLU component 140 may also include (or alternatively include) one or more rule-based models that provide a set of rules as to how to map words or phrases in free-form instruction to corresponding medical tasks and/or that map words or phrases in free-form instruction to parameters of an identified medical task. For example, NLU/CLU component 140 may include a rule-based natural language processing system to extract medical facts, link facts to concepts or otherwise assist in identifying one or more medical tasks specified in free-form instruction received from a medical professional. Some rules may be quite specific, so that a firing of the rule indicates with high probability that the determination expressed thereby is accurate (e.g., detection of the word "order" in combination with identifying the name of a medication in the free-form instruction may indicate with high probability that the medical professional would like the virtual medical assistant to order the medication).);
identifying the task from the at least a portion of the speech (para. 48 of Gallopyn; NLU/CLU component 140 may include a rule-based natural language processing system to extract medical facts, link facts to concepts or otherwise assist in identifying one or more medical tasks specified in free-form instruction received from a medical professional. In a rule-based system, a linguist and/or other individual may create a plurality of rules that can specify what words or combinations or words evidence that free-form speech specifies a particular medical task (e.g., an instruction to order an orderable item).); and
in response to identifying the task, effectuating the task on the medical management system, wherein effectuating the task on the medical management system includes commandeering a local user interface, normally used by the medical specialist, of the medical management system to effectuate the task on the medical management system (see Fig. 2, para. 65, 14, 17, 18, 31, 1, and 79-82 of Gallopyn; the virtual medical assistant may be configured to access order fulfillment system 190 to facilitate the dispatch of orders received by the virtual medical assistant from the medical professional. To facilitate electronic ordering, the virtual medical assistant may be configured to utilize any industry standard electronic interfaces to connect with and communicate with order fulfillment system 190, or may use any specialized or proprietary interfaces or formats needed to utilize one or more order fulfillment systems. Virtual medical assistant application 110 is configured to provide a graphical user interface (GUI) on display 105 of mobile device 101 that presents to the medical professional the mechanisms by which the medical professional may provide input to the virtual medical assistant. The virtual medical assistant may interact with the medical professional in real-time to assist the medical professional in performing medical tasks and/or in performing medical tasks on behalf of the medical professional.), wherein commandeering the local user interface of the medical management system includes displaying remote manipulation of the local user interface by the virtual assistant during the effectuating of the task on the medical management system (see Figures 1 & 4A-4H and para. 31-35, 39, 108-111 of Gallopyn; FIG. 4A illustrates an exemplary introductory presentation, for example, that may be presented when the medical professional activates the virtual medical assistant (e.g., by launching the virtual medical assistant application on the mobile device). 
In this example, the virtual medical assistant application displays dialog (and may also audibly render the dialog) indicating that virtual medical assistant is ready to assist and provides guidance to the medical professional by inviting the medical professional to tap the microphone icon and tell the virtual medical assistant what the medical professional would like done. Initially, a medical professional may be unfamiliar with functionality provided by the virtual medical assistant and the virtual medical assistant application may present a menu of exemplary tasks that the virtual medical assistant can assist the medical professional with or perform on behalf of the medical professional. To this end, the medical professional may say "What can you do for me?" and in response the virtual medical assistant application may present a number of example tasks the virtual medical assistant can perform by providing representative free-form instruction suitable to instruct the virtual medical assistant to perform the associated task, or begin the process of performing such a task.).
Gallopyn does not expressly disclose that the speech includes diction; wherein processing at least a portion of the diction to identify the task to be performed within the medical management system includes: processing at least the portion of the diction to identify one or more task-indicative trigger words; and processing at least the portion of the diction to identify one or more task-indicative conversational structures; wherein the one or more task-indicative trigger words and the one or more task-indicative conversational structures are generated using an artificial intelligence (AI) process that processes initial seed data associated with task-indicative trigger words and task-indicative conversational structures.
Trehan discloses various attributes associated with user speech such as diction (see paragraph 22 of Trehan).
Woo discloses wherein processing at least a portion of the speech to identify the task to be performed within the management system includes: processing at least the portion of the speech to identify one or more task-indicative trigger words (para. 66, 69, and 71 of Woo; the voice recognition module 213 may generate (or extract) text information by processing the received voice, transmit the generated text information to the voice-processing server, and receive task information corresponding to the text information from the voice-processing server. The task information may refer to a function (or a service) that the electronic device 200 should perform in response to a user's speech. The voice recognition module 213 may transfer at least one piece of the voice information, the text information, and the task information to the wake word control module 215, the speaker identification module 217, and the condition determination module 219. The wake word control module 215 may register a user wake word from text information related to the processed task. The wake word control module 215 may determine a keyword or a word, which can be registered as the user wake word, by analyzing the voice information. The wake word control module 215 may determine the words “today” and “weather” in the sentence “What's the weather like today?” as the user wake words.); and processing at least the portion of the speech to identify one or more task-indicative conversational structures (para. 70, 71, 122, and 159 of Woo; The wake word control module 215 may determine a keyword or a word, which can be registered as the user wake word, by analyzing the voice information. The wake word control module 215 may determine the words “today” and “weather” in the sentence “What's the weather like today?” as the user wake words. 
When the user's speech “I want to eat steak for lunch, so is there a restaurant around the office you recommend?” is received, the processor may process voice recognition for the user's speech and output a message such as “A steak restaurant around the office is XXX” (e.g., display the message on the display or output the message through the speaker).).
Wright discloses wherein the one or more task-indicative trigger words and the one or more task-indicative conversational structures are generated using an artificial intelligence (AI) process that processes initial seed data associated with task-indicative trigger words and task-indicative conversational structures (para. 57 of Wright; A randomly selected subset of the Truth Set data (referred to as the Seed Set) is used to generate one or more statistical or neural network based Conversational Models. For example, a statistical model may be created in which Category Tags in the truth set are correlated with the appearance of certain keywords or phrases in the corresponding natural language data with a weight assigned to each keyword or phrase. For example, the system may see that 100% of all conversations that include the word “cancel” and 50% of conversations that include the word “dissatisfied” have been tagged with the topic tag “Cancel my subscription”, and accordingly weight the word “cancel” higher than the word “dissatisfied” when deciding if a conversation should be tagged with the “Cancel my subscription” tag. In another example, the system may see that a particular phrase such as “Where is my order” is in 50% of all the conversations that have been tagged with the topic “Order Status” while only occurring in 10% of all other conversations. The system may then choose to weight a conversation with the “Where is my order” phrase higher for the “Order Status” topic than other topics. The model may use any number of statistical or other “machine learning” techniques (such as neural networks) in order to apply itself to a broader data set. In one embodiment, the application of the model to a given item results in the generation of a set of scores, each score corresponding to a different topic. In some embodiments, the system may generate different Truth Sets for different Communication Platforms.
Different Truth Sets may be used, for example, to accommodate the use of different patterns of natural language in different channels (e.g., conversations patterns found in email as compared to SMS messages).).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Gallopyn’s virtual medical assistant to include the aforementioned features of Trehan, Woo, and Wright. The motivation for doing so would have been to analyze various attributes associated with the user speech and proactively identify assistance requirements (para. 54 of Trehan), to improve the user's convenience by continuously providing a voice recognition service without the need to continuously make a voice command (para. 11 of Woo), and to learn by examining patterns (para. 22 of Wright).
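For context on the teaching relied upon from Wright's paragraph 57, the described statistical Conversational Model (correlating Category Tags in a Seed Set with the fraction of tagged conversations containing each keyword, then scoring new conversations against those weights) can be sketched as follows. This sketch is illustrative only; the function names and toy data below do not appear in Wright and are assumptions for purposes of illustration:

```python
from collections import defaultdict

def train_keyword_weights(seed_set):
    """Derive per-topic keyword weights from a labeled Seed Set.

    Each weight is the fraction of a topic's conversations containing the
    keyword, mirroring Wright's example: a word in 100% of "Cancel my
    subscription" conversations is weighted above one appearing in 50%.
    seed_set: list of (list_of_words, topic_tag) pairs.
    """
    topic_counts = defaultdict(int)                        # conversations per topic
    keyword_topic = defaultdict(lambda: defaultdict(int))  # keyword -> topic -> count
    for words, topic in seed_set:
        topic_counts[topic] += 1
        for w in set(words):                               # count each word once per conversation
            keyword_topic[w][topic] += 1
    weights = defaultdict(dict)
    for w, per_topic in keyword_topic.items():
        for topic, n in per_topic.items():
            weights[topic][w] = n / topic_counts[topic]
    return weights

def score(conversation_words, weights):
    """Score a conversation against each topic; the highest score suggests the tag."""
    return {topic: sum(kw.get(w, 0.0) for w in set(conversation_words))
            for topic, kw in weights.items()}
```

A conversation containing "cancel" would thus score highest for a hypothetical "Cancel my subscription" tag whose seed conversations all contain that word, consistent with the weighting rationale quoted above.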
(B) Referring to claim 3, Gallopyn discloses wherein monitoring the speech of the medical specialist using the virtual assistant includes monitoring the speech of an ordering specialist using the virtual assistant (para. 38 & 90 of Gallopyn).
Gallopyn does not expressly disclose that the speech includes diction.
Trehan discloses various attributes associated with user speech such as diction (see paragraph 22 of Trehan).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Gallopyn’s virtual medical assistant to include the aforementioned feature of Trehan. The motivation for doing so would have been to analyze various attributes associated with the user speech and proactively identify assistance requirements (para. 54 of Trehan).
Insofar as the claim recites “one or more of,” it is immaterial whether or not the other elements are disclosed.
(C) Referring to claim 4, Gallopyn discloses wherein processing at least a portion of the speech to identify the task to be performed within the medical management system further includes processing at least the portion of the speech using natural language processing (para. 36, 46, & 48 of Gallopyn).
Gallopyn does not expressly disclose that the speech includes diction.
Trehan discloses various attributes associated with user speech such as diction (see paragraph 22 of Trehan).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Gallopyn’s virtual medical assistant to include the aforementioned feature of Trehan. The motivation for doing so would have been to analyze various attributes associated with the user speech and proactively identify assistance requirements (para. 54 of Trehan).
(D) Referring to claim 5, Gallopyn does not expressly disclose wherein processing at least a portion of the diction to identify the task to be performed within the medical management system includes: processing at least a portion of the diction on a cloud-based computing resource to identify the task to be performed within the medical management system.
Trehan discloses wherein processing at least a portion of the diction to identify the task to be performed within the medical management system includes: processing at least a portion of the diction on a cloud-based computing resource to identify the task to be performed within the medical management system (para. 22, 23, 69, 43, 44, 74, and 54 of Trehan).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Trehan within Gallopyn. The motivation for doing so would have been to analyze various attributes associated with the user speech and proactively identify assistance requirements (para. 54 of Trehan).
(E) Referring to claim 6, Gallopyn discloses wherein the medical management system includes one or more of: a medical office management system; a medical office billing system; and a pharmacy management system (para. 64 & 65 of Gallopyn).
Insofar as the claim recites “one or more of,” it is immaterial whether or not all the elements are disclosed.
(F) Referring to claims 7, 17, and 27, Gallopyn discloses wherein effectuating the task on the medical management system includes: parsing the task into a plurality of subtasks; and effectuating the plurality of subtasks on the medical management system (para. 99 and 89-92 of Gallopyn).
(G) Referring to claims 8, 18, and 28, Gallopyn discloses wherein effectuating the task on the medical management system includes: accessing the medical management system using an application program interface of the medical management system to effectuate the task on the medical management system (para. 107, 86, and 87 of Gallopyn).
(H) Referring to claim 9, Gallopyn discloses further comprising: interfacing the virtual assistant with the medical management system (para. 64 & 65 of Gallopyn).
(I) Claim 11 differs from Claim 1 by reciting “A computer program product residing on a non-transitory computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations comprising:” (para. 3 of Gallopyn) and Claim 21 differs from Claim 1 by reciting “A computing system including: a memory; and a processor configured to perform operations comprising:” (para. 125 & 126 of Gallopyn).
The remainder of claims 11 and 21 repeat the same limitations as claim 1, and are therefore rejected for the same reasons given above.
(J) Claims 13-16, 19, 23-26, and 29 repeat the same limitations as claims 3-6 & 9, and are therefore rejected for the same reasons given above.
Claim(s) 2, 10, 12, 20, 22, and 30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gallopyn et al. (US 2014/0249830 A1) in view of Trehan (US 2021/0334473 A1), in view of Woo et al. (US 2019/0066680 A1), in view of Wright et al. (US 2018/0165723 A1), and further in view of Jessen (US 2018/0285595 A1).
(A) Referring to claims 2, 12, and 22, Gallopyn, Trehan, Woo, and Wright do not disclose wherein monitoring the diction of the medical specialist using a virtual assistant includes: monitoring the diction of the medical specialist using the virtual assistant to listen for an utterance of a wake-up word.
Jessen discloses wherein monitoring the diction of the medical specialist using a virtual assistant includes: monitoring the diction of the medical specialist using the virtual assistant to listen for an utterance of a wake-up word (para. 13-16, 26, & 58 of Jessen).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of Jessen within Gallopyn, Trehan, Woo, and Wright. The motivation for doing so would have been for hands-free initiation of an interactive communication session (para. 95 & 26 of Jessen).
(B) Referring to claims 10, 20, and 30, Gallopyn discloses wherein interfacing the virtual assistant with the medical management system includes: enabling functionality on the virtual assistant to effectuate communication between the virtual assistant and the medical management system (Fig. 1 and para. 24-26 & 37 of Gallopyn).
Gallopyn, Trehan, Woo, and Wright do not expressly disclose that the communication is cloud-based. However, cloud-based communication is old and well-known, as evidenced by Jessen (see para. 24 & 113-115 of Jessen).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of Jessen within Gallopyn, Trehan, Woo, and Wright. The motivation for doing so would have been so that functionality may be implemented through the use of a distributed system (para. 113 of Jessen).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 11, and 21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LENA NAJARIAN whose telephone number is (571)272-7072. The examiner can normally be reached Monday - Friday 9:30 am-6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid can be reached at (571)270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LENA NAJARIAN/Primary Examiner, Art Unit 3687