The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Notice to Applicant
In response to the communication received on 11/18/2024, the following is a Non-Final Office Action for Application No. 18950980.
Status of Claims
Claims 1-20 are pending.
Drawings
The applicant’s drawings submitted on 11/18/2024 are acceptable for examination purposes.
Information Disclosure Statement
The information disclosure statement (IDS) filed 11/19/2024 has been acknowledged. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Priority
As required by M.P.E.P. 201.14(c), acknowledgement is made of applicant's claim for priority as follows: Application 18950980, filed 11/18/2024, is a continuation of Application 17872713, filed 07/25/2022, now U.S. Patent No. 12,175,399, and having 1 RCE-type filing therein; Application 17872713 is a continuation of Application 17146404, filed 01/11/2021, now abandoned; and Application 17146404 claims priority from Provisional Application 62959518, filed 01/10/2020.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,175,399 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims recite substantially similar limitations as follows: identifying, by a computer comprising a processor and memory, one or more roles in a target environment, wherein the one or more roles correspond to one or more human experts; defining, by the computer, one or more tasks and one or more domains associated with each of the one or more roles; creating, by the computer, a virtual assistant persona for each of the one or more roles; assigning, by the computer, the one or more defined tasks and the one or more defined domains to the one or more virtual assistant personas; and in response to receiving an input from a worker, assigning, by the computer, one of the virtual assistant personas to interact with the worker, wherein a role of the assigned virtual assistant persona is supervisory to the worker.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Under the 2019 PEG, Step 1 determines whether the claims fall within a statutory class. Here, the claims fall within the statutory classes of process, machine, and manufacture; hence, the claims qualify as potentially eligible subject matter under 35 U.S.C. § 101. With Step 1 satisfied, the 2019 PEG flowchart proceeds to Step 2, the two-part analysis from Alice Corp. (also called the Mayo test). The 2019 PEG sets forth a new procedure for Step 2A (called "revised Step 2A") under which a claim is not "directed to" a judicial exception unless the claim satisfies a two-prong inquiry. The two-prong inquiry is as follows: Prong One, evaluate whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon); and, if the claim recites an exception, Prong Two, evaluate whether the claim recites additional elements that integrate the exception into a practical application of the exception. The claim(s) recite(s) the following abstract idea, indicated by non-boldface font, and additional limitations, indicated by boldface font:
A method for creating and assigning one or more virtual assistants, comprising: identifying, by a computer comprising a processor and memory, one or more roles in a target environment, wherein the one or more roles correspond to one or more human experts; defining, by the computer, one or more tasks and one or more domains associated with each of the one or more roles; creating, by the computer, a virtual assistant persona for each of the one or more roles; assigning, by the computer, the one or more defined tasks and the one or more defined domains to the one or more virtual assistant personas; and in response to receiving an input from a worker, assigning, by the computer, one of the virtual assistant personas to interact with the worker, wherein a role of the assigned virtual assistant persona is supervisory to the worker.
[or]
A non-transitory computer-readable storage media embodied with software for creating and assigning one or more virtual assistants, the software when executed using one or more computers is configured to: identifying one or more roles in a target environment, wherein the one or more roles correspond to one or more human experts; defining one or more tasks and one or more domains associated with each of the one or more roles; creating a virtual assistant persona for each of the one or more roles; assigning the one or more defined tasks and the one or more defined domains to the one or more virtual assistant personas; and in response to receiving an input from a worker, assigning one of the virtual assistant personas to interact with the worker, wherein a role of the assigned virtual assistant persona is supervisory to the worker.
[or]
A system for creating and assigning one or more virtual assistants, comprising: a computer comprising a memory and a processor and configured to: identifying one or more roles in a target environment, wherein the one or more roles correspond to one or more human experts; defining one or more tasks and one or more domains associated with each of the one or more roles; creating a virtual assistant persona for each of the one or more roles; assigning the one or more defined tasks and the one or more defined domains to the one or more virtual assistant personas; and in response to receiving an input from a worker, assigning one of the virtual assistant personas to interact with the worker, wherein a role of the assigned virtual assistant persona is supervisory to the worker.
Per Prong One of Step 2A, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of: Mathematical Concepts, Mental Processes, or Certain Methods of Organizing Human Activity. Particularly, the identified recitation falls within the Mental Processes grouping, including concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), and/or the Certain Methods of Organizing Human Activity grouping, including managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). Per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The computer, processor, and/or memory media is recited at a high level of generality, i.e., as a generic processor performing the generic computer function of processing/transmitting data. This generic computer, processor, and/or memory media limitation is no more than mere instructions to apply the exception using a generic computer component, which cannot integrate a judicial exception into a practical application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, since the claims are directed to the determined judicial exception in view of the two prongs of Step 2A, the 2019 PEG flowchart proceeds to Step 2B.
Therein, the additional elements and combinations therewith are examined to determine whether the claims as a whole amount to significantly more than the judicial exception. It is noted that the additional elements are to be considered both individually and as an ordered combination. In this case, the claims each at most comprise the additional elements of: a computer, a processor, and memory media. Taken individually, each additional limitation is generically recited and thus does not add significantly more to the respective limitations, and mere instructions to apply an exception using a generic computer component cannot provide an inventive concept in Step 2B (or, looking back to Step 2A, cannot integrate a judicial exception into a practical application). For further support, the Applicant's specification confirms that the claims are directed to use of a generic computer/memory type structure at ¶0026, wherein "one or more interactive display devices 120 comprise one or more processors 202, memory 204, one or more sensors 206, and may include any suitable input device 208, output device 210, fixed or removable computer-readable storage media, or the like". Taken as an ordered combination, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the limitations are among those referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, including, as non-limiting and non-exclusive examples: i.
Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));
ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 134 S. Ct. at 2359-60, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));
iii. Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or
iv. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook, 437 U.S. 584 (1978).
The courts have recognized the following computer functions, inter alia, to be well-understood, routine, and conventional functions when they are claimed in a merely generic manner: performing repetitive calculations; receiving, processing, and storing data (e.g., the present claims); electronically scanning or extracting data; electronic recordkeeping; automating mental tasks (e.g., the process/machine/manufacture for performing the present claims); and receiving or transmitting data (e.g., the present claims).
The dependent claims do not cure the above-stated deficiencies; in particular, the dependent claims further narrow the abstract idea without reciting additional elements that integrate the exception into a practical application of the exception or that provide significantly more than the abstract idea. Since there are no elements, or ordered combination of elements, that amount to significantly more than the judicial exception, the claims are not eligible subject matter under 35 U.S.C. § 101. Thus, viewed as a whole, the additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Therefore, claims 1-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Erhart et al. (US 20170282362 A1) hereinafter referred to as Erhart in view of Yao et al. (US 20130346339 A1) hereinafter referred to as Yao.
Erhart teaches:
Claim 1. A method for creating and assigning one or more virtual assistants, comprising:
identifying, by a computer comprising a processor and memory, one or more roles in a target environment, wherein the one or more roles correspond to one or more human experts (¶0120 In one embodiment, processor 304 may be a portion of a contact center, robot application store, local device, service provider, and/or other entity or enterprise. User 302 may provide processor 304 with a specific instruction, such as one of instructions 308A-n, requested or identify a task or issue to solve, whereby processor 304 determines the appropriate instruction. ¶0124 Should configured robot 314, or un-configured robot 310 once configured, be determined to be unsuitable or unable to performance the task, processor 304 may attempt to provision a different configured robot 314, dispatch a human agent, or notify user 302 that the task is not able to be performed by a robot and/or seek a different means to perform the task);
defining, by the computer, one or more tasks and one or more domains associated with each of the one or more roles (¶0120 In one embodiment, processor 304 may be a portion of a contact center, robot application store, local device, service provider, and/or other entity or enterprise. User 302 may provide processor 304 with a specific instruction, such as one of instructions 308A-n, requested or identify a task or issue to solve, whereby processor 304 determines the appropriate instruction. ¶0122 User 302 may own or otherwise control unconfigured robot 310. User 302 may install hardware accessory 312 and or instruction C (308C) directly or via request to processor 304. Processor 304 may determine the request requires unconfigured robot 310 to have (e.g., attached, installed, paired, available, in communication with, etc.) hardware accessory 312 to enable configured robot 314 to perform the task);
creating, by the computer, a virtual assistant persona for each of the one or more roles (¶0120 FIG. 3 depicts system 300 in accordance with embodiments of the present disclosure. Configured robot 314 may be configured to perform a task or set of tasks from un-configured robot 310. Transformation of un-configured robot 310 to configured robot 314 is variously embodied. ¶0121 In one embodiment, un-configured robot 310 requires software instructions and a hardware accessory in order to perform a particular task);
assigning, by the computer, the one or more defined tasks and the one or more defined domains to the one or more virtual assistant personas (¶¶0120-0121 Transformation of un-configured robot 310 to configured robot 314 is variously embodied. In one embodiment, user 302, alone or with benefit of an electronic communication device (e.g., smart phone, Internet connected computer, dedicated robot configuring device, etc.) provides processor 304 with an issue or task. In one embodiment, processor 304 may be a portion of a contact center, robot application store, local device, service provider, and/or other entity or enterprise. User 302 may provide processor 304 with a specific instruction, such as one of instructions 308A-n, requested or identify a task or issue to solve, whereby processor 304 determines the appropriate instruction. In one embodiment, un-configured robot 310 requires software instructions and a hardware accessory in order to perform a particular task. User 302 provides the issue or task or specifies the instructions causing processor 304 to access data repository 306 comprising instructions 308. Processor 304 determines that instruction C (308C) is the appropriate instruction. Processor 304 then causes un-configured robot 310 to load instruction set 308C thereby causing robot 310 to become configured robot 314. Configured robot 314 may then be deployed for the performance of the task); and
in response to receiving an input from a worker, assigning, by the computer, one of the virtual assistant personas to interact with the worker, wherein a role of the assigned virtual assistant persona is supervisory to the worker (¶0133 Supervisor robot 502 may monitor a plurality of robots 102 such as to ensure proper coordination of physical activities. For example, robot 102 and a number of peer robots may be attempting to lift four corners of an object uniformly. Supervisor robot 502 may monitor each of robots 102 and the number of peer robots and/or the object being lifted to ensure coordination or take appropriate action if coordination is not evident to the degree expected.).
Although not explicitly taught by Erhart, Yao teaches in the analogous art of configuring process variants for on-boarding customers for information technology (IT) outsourcing:
wherein the one or more roles correspond to one or more human experts (¶0062 Organizational perspective involves roles and responsibilities in the process. Functional perspective captures what activities are performed. Behavioral perspective describes the execution sequence of activities. Data perspective records the data flow among activities. Business goals, KPIs, and metrics are available in the goal perspective. For example, the following rule VCR.sub.--1 shows an additional role for subject-matter expertise (ProcessSME) is needed to be responsible for a certain process when dealing with large deals. Rules that reflect other perspectives are formalized in a similar way. ¶0020 The data collection phase 110 may include input from case studies in on-boarding projects and interviews with on-boarding managers and team members. In an example, process guides based on best practices stated in the Information Technology Infrastructure Library (ITIL) and collected process logs of past projects were used (steps 1-2). Deviations in these process logs may be analyzed with the help of domain experts to link these with individual steps in the standard process guides and associate these deviations with their causes (step 3).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the configuring of process variants for on-boarding customers for information technology (IT) outsourcing of Yao with the command and control of a user-provided robot by a contact center of Erhart for the following reasons:
(1) a finding that there was some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine reference teachings, e.g. Erhart ¶0002 teaches that it is desirable to have low-cost, general purpose personal robots;
(2) a finding that there was a reasonable expectation of success, since the only difference between the claimed invention and the prior art is the lack of actual combination of the elements in a single prior art reference, e.g. Erhart Abstract teaches instructions to transform an unconfigured robot, such as a generic robot, into a configured robot operable to perform the task, and Yao Abstract teaches systems and methods of configuring process variants for on-boarding customers for information technology (IT) outsourcing; and
(3) whatever additional findings based on the Graham factual inquiries may be necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness, e.g. Erhart at least the above cited paragraphs, and Yao at least the inclusively cited paragraphs.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the configuring of process variants for on-boarding customers for information technology (IT) outsourcing of Yao with the command and control of a user-provided robot by a contact center of Erhart. The rationale to support a conclusion that the claim would have been obvious is that "a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and whether there would have been a reasonable expectation of success in doing so." DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006). See MPEP 2143(G).
Erhart teaches:
Claim 2. The method of Claim 1, wherein the one or more roles comprise one or more of: providing information to one or more workers, authorizing one or more activities of one or more workers and performing an activity required by one or more workers to perform one or more tasks (¶0080 In another embodiment, robot 102 may interact with one or more human and/or automated resources. For example, robot 102 may utilize an on-board speaker or display to communicate with a human, such as to request assistance in locating the object of the task. In another embodiment, robot 102 may communicate with an off-site resource (human and/or automated) to access data, instructions, etc. or to provide data, which may include, but is not limited to, images, videos, sound, and/or other data (e.g., serial numbers, settings, display values, etc.). ¶0156 I/O device 230, when embodied as a camera, may see customer 302 gesture to issue commands. Gestured commands, natively, may include, but are not limited to, pointing, looking (e.g., eye position), facial expression, waving, sign language, body position, interacting (e.g., pointing, touching, holding, looking, moving, etc.) with an object other than robot 102, physical activity (e.g., running, walking, sitting, standing, etc.), location/orientation (e.g., present in a location having a meaning different than if present in another location, arm position, leg position, etc.), attire, proximity to another human, physical interaction with another human, identity of a proximate other human, role of a proximate other human, and/or other observable human trait, attribute, or behavior.).
Erhart teaches:
Claim 3. The method of Claim 1, further comprising: transmitting, by the computer, the one or more defined tasks to a workforce management system (¶0121 In one embodiment, un-configured robot 310 requires software instructions and a hardware accessory in order to perform a particular task. User 302 provides the issue or task or specifies the instructions causing processor 304 to access data repository 306 comprising instructions 308. Processor 304 determines that instruction C (308C) is the appropriate instruction. Processor 304 then causes un-configured robot 310 to load instruction set 308C thereby causing robot 310 to become configured robot 314. Configured robot 314 may then be deployed for the performance of the task).
Erhart teaches:
Claim 4. The method of Claim 1, wherein each of the one or more defined domains has an associated group of tasks (¶0120 FIG. 3 depicts system 300 in accordance with embodiments of the present disclosure. Configured robot 314 may be configured to perform a task or set of tasks from un-configured robot 310. Transformation of un-configured robot 310 to configured robot 314 is variously embodied.).
Erhart teaches:
Claim 5. The method of Claim 1, wherein the virtual assistant persona is assigned based on an intent determined from the received input (¶0078 In another embodiment, a portion of robot 102 is owned by a customer and another portion is owned by the contact center. The portion of robot 102 owned by the contact center may be hardware and/or software. For example, a customer may wish to deploy robot 102 to perform a specialized task, such as to repair a washing machine. Robot 102 may require specialized tools to perform such a task. Accordingly, the customer may contact the contact center or other entity and schedule delivery of a component to coordinate with robot 102 and provide the tools necessary to complete the task.).
Erhart teaches:
Claim 6. The method of Claim 1, further comprising: in response to determining that the received input requires an on-duty expert, routing, by the computer, the received input to the on-duty expert (¶0097 In another embodiment, third-party monitoring agent 116 monitors and/or audits robot 102. Third-party monitoring agent 116 may receive signals from robot 102 and/or any other agents (e.g., agents 108, 110, 112, 114), human-submitted observations, other automated systems (e.g., robot delivery/servicing, financial, credit reporting, etc.) to determine if robot 102 is operating within a previously determined protocol. Third-party monitoring agent 116 may determine if a violation of protocol has occurred and signal personnel associated with at least one of service location 104, robot 102, and/or other human or automated agent that such a violation has occurred. In another embodiment, third-party monitoring agent 116 may cause, directly or via other agent (e.g., agents 108, 110, 112, 114), and/or personnel to perform corrective action to cause robot 102 to return to operating within the previously determined protocol.).
Erhart teaches:
Claim 7. The method of Claim 6, wherein the determining that the received input requires an on-duty expert is based on one or more policies and one or more rules (¶0096 In another embodiment, privacy agent 114 may monitor robot 102, and components thereof, to ensure compliance with a privacy policy. In one embodiment, privacy agent 114 detects and reports compliance, or lack of compliance, with a privacy policy. In another embodiment, privacy agent 114 enforces the privacy policy. For example, a robot 102 may be moving throughout service location 104 to access the location for a task. Persons, photographs, and other images may be encountered. Accordingly, privacy agent 114 may cause the camera(s) of robot 102 to degrade (e.g., lower resolution, fewer frames, altered contrast or other image setting, introduced noise, redaction, etc.) until such time as the cameras are required to perform the task.).
As per claims 8-14 and 15-20, the non-transitory computer-readable storage media and the system track the method of claims 1-7 and 1-6, respectively, resulting in substantially similar limitations. The same cited prior art and rationale applied to claims 1-7 and 1-6 are applied to claims 8-14 and 15-20, respectively. Erhart discloses that the embodiment may be realized as a system and as non-transitory computer-readable storage media (Fig. 1 and ¶0030).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20200184540 A1, D'Souza, Shaun Cyprian et al., "Artificial Intelligence and Machine Learning Based Conversational Agent"
US 20200106881 A1, Beaver, Ian, "Partial Automation of Text Chat Conversations"
US 20200104777 A1, Bouhini, Chahrazed et al., "Adaptive Artificial Intelligence for User Training and Task Management"
WO 2019211713 A1, Shirazipour, Meral et al., "Automated Augmented Reality Rendering Platform for Providing Remote Expert Assistance"
US 20190236516 A1, Ponnusamy, Rajkumar, "Method for Determining Amount of Time Spent on a Task and Estimating Amount of Time Required to Complete the Task"
US 9912810 B2, Segre, Paul et al., "System and Method for Chat Automation"
US 20180060789 A1, Stefik, Mark J. et al., "System and Method for Providing Conditional Autonomous Messaging to Parking Enforcement Officers with the Aid of a Digital Computer"
WO 2017192684 A1, Tamblyn, Eric et al., "System and Method for Managing and Transitioning Automated Chat Conversations"
US 20160162566 A1, Hanis, Thomas T. et al., "Model Navigation Constrained by Classification"
AU 2014236686 A1, Breazeal, Cynthia, "Apparatus and Methods for Providing a Persistent Companion Device"
WO 2010138962 A2, Coulter, R. et al., "Method of controlling patient care logistics in clinical environment, involves determining amendment to previously determined unique priority sorted list or new unique priority sorted list based on change in preset criterions"
NPL: McKinsey & Company, "A Future That Works: Automation, Employment, and Productivity"
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KURTIS GILLS, whose telephone number is (571) 270-3315. The examiner can normally be reached Monday-Friday, 8:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jerry O'Connor, can be reached at 571-272-3955. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KURTIS GILLS/Primary Examiner, Art Unit 3624