DETAILED ACTION
This office action is in response to Applicant’s submission filed on 6/10/2024. Claims 1-51 are pending in the application, of which claims 1, 18, and 19 are independent; all pending claims have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 7/4/2024, 8/16/2024, 9/20/2024, 10/28/2024, 1/2/2025, 1/31/2025, 2/20/2025, 3/18/2025, 4/4/2025, 5/14/2025, 6/23/2025, 7/1/2025, 9/10/2025, 9/30/2025, 1/2/2026, and 1/21/2026 have been considered by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Note: The mapping below demonstrates nonstatutory double patenting of the instant application over issued U.S. Patent No. 11,769,497. However, issued U.S. Patent Nos. 12,033,636 and 11,837,232 contain claims similar to those of the instant application; accordingly, timely filed terminal disclaimers directed to those patents will also be required to overcome the nonstatutory double patenting rejection.
Claims 1-51 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-51 of U.S. Patent No. 11,769,497. Although the claims at issue are not identical, they are not patentably distinct from each other because they are obvious variants of one another. Note that instant claims 1-17, 18 and 20-35, and 19 and 36-51 are rejected over patent claims 1-17, 18-34, and 35-51, respectively.
Please see the prior art rejections for claim mappings and motivation to combine.
Claim
Instant Application 18739167
Parent: Issued patent US11769497B2
1
A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first user device with a display, cause the first user device to:
1
A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first user device with a display, cause the first user device to:
1.aa
initiate a video communication session between the first user device and at least a second user device;
1.a
during a video communication session between the first user device and at least a second user device:
1.bb
receive a first user input;
1.b
receive a first user input;
1.cc
obtain a first digital assistant response based on the first user input;
1.c
obtain a first digital assistant response based on the first user input;
1.dd
provide, to the second user device, the first digital assistant response and context information associated with the first user input;
1.d
provide, to the second user device, the first digital assistant response and context information associated with the first user input;
1.ee
output the first digital assistant response;
1.e
output the first digital assistant response;
1.ff
receive context information associated with a second user input, wherein the second user input is received at the second user device;
1.f
receive a second digital assistant response and context information associated with a second user input,
1.gg
obtain a second digital assistant response, wherein the second digital assistant response is determined based on the second user input and the context information associated with the first user input; and
1.g
wherein the second user input is received at the second user device, and wherein the second digital assistant response is determined based on the second user input and the context information associated with the first user input; and
1.hh
output the second digital assistant response.
1.h
output the second digital assistant response.
2
The non-transitory computer-readable storage medium of claim 1, wherein the context information associated with the first user input comprises
2
The non-transitory computer-readable storage medium of claim 1, wherein the context information associated with the first user input comprises
2.aa
dialog history information associated with a user of the first user device during the video communication session.
2.a
a dialog history between a digital assistant of the first user device and a user of the first user device during the video communication session.
3
The non-transitory computer-readable storage medium of claim 1, wherein the context information associated with the first user input comprises
3
The non-transitory computer-readable storage medium of claim 1, wherein the context information associated with the first user input comprises
3.aa
data corresponding to a current location of the first user device when the first user device received the first user input.
3.a
data corresponding to a current location of the first user device when the first user device received the first user input.
4
The non-transitory computer-readable storage medium of claim 1, wherein the first digital assistant response comprises at least one of:
4
The non-transitory computer-readable storage medium of claim 1, wherein the first digital assistant response comprises at least one of:
4.aa
a natural-language expression corresponding to a task performed by a digital assistant of the first user device based on the first user input; and data retrieved by the digital assistant of the first user device based on the first user input.
4.a
a natural-language expression corresponding to a task performed by a digital assistant of the first user device based on the first user input; and data retrieved by the digital assistant of the first user device based on the first user input.
5
The non-transitory computer-readable storage medium of claim 1, wherein obtaining the first digital assistant response comprises:
5
The non-transitory computer-readable storage medium of claim 1, wherein obtaining the first digital assistant response comprises:
5.aa
performing one or more tasks based on the first user input; and
5.a
performing one or more tasks based on the first user input; and
5.bb
determining the first digital assistant response based on results of the performance of the one or more tasks.
5.b
determining the first digital assistant response based on results of the performance of the one or more tasks.
6
The non-transitory computer-readable storage medium of claim 1,
6
The non-transitory computer-readable storage medium of claim 1,
6.aa
wherein providing the first digital assistant response and the context information associated with the first user input to the second user device comprises
6.a
wherein providing the first digital assistant response and the context information associated with the first user input to the second user device comprises
6.bb
transmitting the first digital assistant response and the context information associated with the first user input to the second user device.
6.b
transmitting the first digital assistant response and the context information associated with the first user input to the second user device.
7
The non-transitory computer-readable storage medium of claim 6,
7
The non-transitory computer-readable storage medium of claim 6,
7.aa
wherein the first user device receives the context information associated with the second user input from the second user device.
7.a
wherein the first user device receives the second digital assistant response and the context information associated with the second user input from the second user device.
8
The non-transitory computer-readable storage medium of claim 6, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the first user device to:
8
The non-transitory computer-readable storage medium of claim 6, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the first user device to:
8.aa
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: transmit the first user input to the second user device using a first audio stream between the first user device and the second user device.
8.a
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: transmit the first user input to the second user device using a first audio stream between the first user device and the second user device.
9
The non-transitory computer-readable storage medium of claim 8,
9
The non-transitory computer-readable storage medium of claim 8,
9.aa
wherein the first digital assistant response is transmitted to the second user device using the first audio stream.
9.a
wherein the first digital assistant response is transmitted to the second user device using the first audio stream.
10
The non-transitory computer-readable storage medium of claim 6, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the first user device to:
10
The non-transitory computer-readable storage medium of claim 6, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the first user device to:
10.aa
prior to obtaining the second digital assistant response: receive the second user input; and output the second user input.
10.a
prior to receiving the second digital assistant response: receive the second user input; and output the second user input.
11
The non-transitory computer-readable storage medium of claim 10,
11
The non-transitory computer-readable storage medium of claim 10,
11.aa
wherein the first user device receives the second user input using a first audio stream between the first user device and the second user device.
11.a
wherein the first user device receives the second user input and the second digital assistant response using a first audio stream between the first user device and the second user device.
12
The non-transitory computer-readable storage medium of claim 1,
12
The non-transitory computer-readable storage medium of claim 1,
12.aa
wherein a digital assistant of the second user device uses the context information associated with the first user input to disambiguate the second user input.
12.a
wherein a digital assistant of the second user device uses the context information associated with the first user input to disambiguate the second user input.
13
The non-transitory computer-readable storage medium of claim 1,
13
The non-transitory computer-readable storage medium of claim 1,
13.aa
wherein the first digital assistant response and the second digital assistant response are also output by the second user device.
13.a
wherein the first digital assistant response and the second digital assistant response are also output by the second user device.
14
The non-transitory computer-readable storage medium of claim 13,
14
The non-transitory computer-readable storage medium of claim 13,
14.aa
wherein the second user device outputs the first digital assistant response prior to obtaining the second user input, and
14.a
wherein the second user device outputs the first digital assistant response prior to receiving the second user input, and
14.bb
wherein the second user device outputs the second digital assistant response after the first user device obtains the second digital assistant response.
14.b
wherein the second user device outputs the second digital assistant response after the first user device receives the second digital assistant response.
15
The non-transitory computer-readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the first user device to:
15
The non-transitory computer-readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the first user device to:
15.aa
prior to obtaining the second digital assistant response: receive an indication that a digital assistant of the second user device has been invoked; receive a third user input;
15.a
prior to receiving the second digital assistant response: receive an indication that a digital assistant of the second user device has been invoked; receive a third user input;
15.bb
determine, based on the received indication, whether the digital assistant of the second user device was invoked prior to receiving the third user input; and
15.b
determine, based on the received indication, whether the digital assistant of the second user device was invoked prior to receiving the third user input; and
15.cc
in accordance with a determination that the digital assistant of the second user device was invoked prior to receiving the third user input, forgo obtaining a third digital assistant response based on the third user input.
15.c
in accordance with a determination that the digital assistant of the second user device was invoked prior to receiving the third user input, forgo obtaining a third digital assistant response based on the third user input.
16
The non-transitory computer-readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the first user device to:
16
The non-transitory computer-readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the first user device to:
16.aa
after outputting the second digital assistant response: receive a fourth user input;
16.a
after outputting the second digital assistant response: receive a fourth user input;
16.bb
in accordance with a determination that the fourth user input represents a user intent to provide a private digital assistant request: receive a fifth user input;
16.b
in accordance with a determination that the fourth user input represents a user intent to provide a private digital assistant request: receive a fifth user input;
16.cc
obtain a fourth digital assistant response based on the fifth user input;
16.c
obtain a fourth digital assistant response based on the fifth user input;
16.dd
forgo providing the fourth digital assistant response and context information associated with the fifth user input to the second user device; and
16.d
forgo providing the fourth digital assistant response and context information associated with the fifth user input to the second user device; and
16.ee
output the fourth digital assistant response.
16.e
output the fourth digital assistant response.
17
The non-transitory computer-readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the first user device to:
17
The non-transitory computer-readable storage medium of claim 1, wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the first user device to:
17.aa
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: determine whether the context information includes private information stored on the first user device; and
17.a
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: determine whether the context information includes private information stored on the first user device; and
17.bb
in accordance with a determination that the context information includes private information stored on the first user device: remove at least a portion of the private information from the context information; and
17.b
in accordance with a determination that the context information includes private information stored on the first user device: remove at least a portion of the private information from the context information; and
17.cc
provide the first digital assistant response and a remaining context information associated with the first user input to the second user device.
17.c
provide the first digital assistant response and the remaining context information associated with the first user input to the second user device.
18
A method, comprising:
18
A method, comprising:
18.aa
initiating a video communication session between at least two user devices, and at a first user device of the at least two user devices;
18.a
during a video communication session between at least two user devices, and at a first user device of the at least two user devices:
18.bb
receiving a first user input;
18.b
receiving a first user input;
18.cc
obtaining a first digital assistant response based on the first user input;
18.c
obtaining a first digital assistant response based on the first user input;
18.dd
providing, to a second user device of the at least two user devices, the first digital assistant response and context information associated with the first user input;
18.d
providing, to a second user device of the at least two user devices, the first digital assistant response and context information associated with the first user input;
18.ee
outputting the first digital assistant response;
18.e
outputting the first digital assistant response;
18.ff
receiving context information associated with a second user input, wherein the second user input is received at the second user device;
18.f
receiving a second digital assistant response and context information associated with a second user input,
18.gg
obtaining a second digital assistant response, wherein the second digital assistant response is determined based on the second user input and the context information associated with the first user input; and
18.g
wherein the second user input is received at the second user device, and wherein the second digital assistant response is determined based on the second user input and the context information associated with the first user input; and
18.hh
outputting the second digital assistant response.
18.h
outputting the second digital assistant response.
20
The method of claim 18, wherein the context information associated with the first user input comprises
19
The method of claim 18, wherein the context information associated with the first user input comprises
20.aa
dialog history information associated with a user of the first user device during the video communication session.
19.a
a dialog history between a digital assistant of the first user device and a user of the first user device during the video communication session.
21
The method of claim 18, wherein the context information associated with the first user input comprises
20
The method of claim 18, wherein the context information associated with the first user input comprises
21.aa
data corresponding to a current location of the first user device when the first user device received the first user input.
20.a
data corresponding to a current location of the first user device when the first user device received the first user input.
22
The method of claim 18, wherein the first digital assistant response comprises at least one of:
21
The method of claim 18, wherein the first digital assistant response comprises at least one of:
22.aa
a natural-language expression corresponding to a task performed by a digital assistant of the first user device based on the first user input; and data retrieved by the digital assistant of the first user device based on the first user input.
21.a
a natural-language expression corresponding to a task performed by a digital assistant of the first user device based on the first user input; and data retrieved by the digital assistant of the first user device based on the first user input.
23
The method of claim 18, wherein obtaining the first digital assistant response comprises:
22
The method of claim 18, wherein obtaining the first digital assistant response comprises:
23.aa
performing one or more tasks based on the first user input; and
22.a
performing one or more tasks based on the first user input; and
23.bb
determining the first digital assistant response based on results of the performance of the one or more tasks.
22.b
determining the first digital assistant response based on results of the performance of the one or more tasks.
24
The method of claim 18, wherein providing the first digital assistant response and the context information associated with the first user input to the second user device comprises
23
The method of claim 18, wherein providing the first digital assistant response and the context information associated with the first user input to the second user device comprises
24.aa
transmitting the first digital assistant response and the context information associated with the first user input to the second user device.
23.a
transmitting the first digital assistant response and the context information associated with the first user input to the second user device.
25
The method of claim 24,
24
The method of claim 23,
25.aa
wherein the first user device receives the context information associated with the second user input from the second user device.
24.a
wherein the first user device receives the second digital assistant response and the context information associated with the second user input from the second user device.
26
The method of claim 24, further comprising:
25
The method of claim 23, further comprising:
26.aa
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: transmitting the first user input to the second user device using a first audio stream between the first user device and the second user device.
25.a
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: transmitting the first user input to the second user device using a first audio stream between the first user device and the second user device.
27
The method of claim 26,
26
The method of claim 25,
27.aa
wherein the first digital assistant response is transmitted to the second user device using the first audio stream.
26.a
wherein the first digital assistant response is transmitted to the second user device using the first audio stream.
28
The method of claim 24, further comprising:
27
The method of claim 23, further comprising:
28.aa
prior to obtaining the second digital assistant response: receiving the second user input; and
27.a
prior to receiving the second digital assistant response: receiving the second user input; and
28.bb
outputting the second user input.
27.b
outputting the second user input.
29
The method of claim 28,
28
The method of claim 27,
29.aa
wherein the first user device receives the second user input using a first audio stream between the first user device and the second user device.
28.a
wherein the first user device receives the second user input and the second digital assistant response using a first audio stream between the first user device and the second user device.
30
The method of claim 18,
29
The method of claim 18,
30.aa
wherein a digital assistant of the second user device uses the context information associated with the first user input to disambiguate the second user input.
29.a
wherein a digital assistant of the second user device uses the context information associated with the first user input to disambiguate the second user input.
31
The method of claim 18,
30
The method of claim 18,
31.aa
wherein the first digital assistant response and the second digital assistant response are also output by the second user device.
30.a
wherein the first digital assistant response and the second digital assistant response are also output by the second user device.
32
The method of claim 31,
31
The method of claim 30,
32.aa
wherein the second user device outputs the first digital assistant response prior to obtaining the second user input, and
31.a
wherein the second user device outputs the first digital assistant response prior to receiving the second user input, and
32.bb
wherein the second user device outputs the second digital assistant response after the first user device obtains the second digital assistant response.
31.b
wherein the second user device outputs the second digital assistant response after the first user device receives the second digital assistant response.
33
The method of claim 18, further comprising:
32
The method of claim 18, further comprising:
33.aa
prior to obtaining the second digital assistant response: receiving an indication that a digital assistant of the second user device has been invoked;
32.a
prior to receiving the second digital assistant response: receiving an indication that a digital assistant of the second user device has been invoked;
33.bb
receiving a third user input;
32.b
receiving a third user input;
33.cc
determining, based on the received indication, whether the digital assistant of the second user device was invoked prior to receiving the third user input; and
32.c
determining, based on the received indication, whether the digital assistant of the second user device was invoked prior to receiving the third user input; and
33.dd
in accordance with a determination that the digital assistant of the second user device was invoked prior to receiving the third user input, forgoing obtaining a third digital assistant response based on the third user input.
32.d
in accordance with a determination that the digital assistant of the second user device was invoked prior to receiving the third user input, forgoing obtaining a third digital assistant response based on the third user input.
34
The method of claim 18, further comprising:
33
The method of claim 18, further comprising:
34.aa
after outputting the second digital assistant response: receiving a fourth user input;
33.a
after outputting the second digital assistant response: receiving a fourth user input;
34.bb
in accordance with a determination that the fourth user input represents a user intent to provide a private digital assistant request: receiving a fifth user input;
33.b
in accordance with a determination that the fourth user input represents a user intent to provide a private digital assistant request: receiving a fifth user input;
34.cc
obtaining a fourth digital assistant response based on the fifth user input;
33.c
obtaining a fourth digital assistant response based on the fifth user input;
34.dd
forgoing providing the fourth digital assistant response and context information associated with the fifth user input to the second user device; and
33.d
forgoing providing the fourth digital assistant response and context information associated with the fifth user input to the second user device; and
34.ee
outputting the fourth digital assistant response.
33.e
outputting the fourth digital assistant response.
35
The method of claim 18, further comprising:
34
The method of claim 18, further comprising:
35.aa
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: determining whether the context information includes private information stored on the first user device; and
34.a
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: determining whether the context information includes private information stored on the first user device; and
35.bb
in accordance with a determination that the context information includes private information stored on the first user device: removing at least a portion of the private information from the context information; and
34.b
in accordance with a determination that the context information includes private information stored on the first user device: removing at least a portion of the private information from the context information; and
35.cc
providing the first digital assistant response and a remaining context information associated with the first user input to the second user device.
34.c
providing the first digital assistant response and the remaining context information associated with the first user input to the second user device.
19
A first user device, comprising: a display; one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, wherein the one or more programs include instructions for:
35
A first user device, comprising: a display; one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, wherein the one or more programs include instructions for:
19.aa
initiating a video communication session between the first user device and at least a second user device;
35.a
during a video communication session between the first user device and at least a second user device:
19.bb
receiving a first user input;
35.b
receiving a first user input;
19.cc
obtaining a first digital assistant response based on the first user input;
35.c
obtaining a first digital assistant response based on the first user input;
19.dd
providing, to the second user device, the first digital assistant response and context information associated with the first user input;
35.d
providing, to the second user device, the first digital assistant response and context information associated with the first user input;
19.ee
outputting the first digital assistant response;
35.e
outputting the first digital assistant response;
19.ff
receiving context information associated with a second user input, wherein the second user input is received at the second user device;
35.f
receiving a second digital assistant response and context information associated with a second user input,
19.gg
obtaining a second digital assistant response, wherein the second digital assistant response is determined based on the second user input and the context information associated with the first user input; and
35.g
wherein the second user input is received at the second user device, and wherein the second digital assistant response is determined based on the second user input and the context information associated with the first user input; and
19.hh
outputting the second digital assistant response.
35.h
outputting the second digital assistant response.
36
The first user device of claim 19, wherein the context information associated with the first user input comprises
36
The first user device of claim 35, wherein the context information associated with the first user input comprises
36.aa
dialog history information associated with a user of the first user device during the video communication session.
36.a
a dialog history between a digital assistant of the first user device and a user of the first user device during the video communication session.
37
The first user device of claim 19, wherein the context information associated with the first user input comprises
37
The first user device of claim 35, wherein the context information associated with the first user input comprises
37.aa
data corresponding to a current location of the first user device when the first user device received the first user input.
37.a
data corresponding to a current location of the first user device when the first user device received the first user input.
38
The first user device of claim 19, wherein the first digital assistant response comprises at least one of:
38
The first user device of claim 35, wherein the first digital assistant response comprises at least one of:
38.aa
a natural-language expression corresponding to a task performed by a digital assistant of the first user device based on the first user input; and data retrieved by the digital assistant of the first user device based on the first user input.
38.a
a natural-language expression corresponding to a task performed by a digital assistant of the first user device based on the first user input; and data retrieved by the digital assistant of the first user device based on the first user input.
39
The first user device of claim 19, wherein obtaining the first digital assistant response comprises:
39
The first user device of claim 35, wherein obtaining the first digital assistant response comprises:
39.aa
performing one or more tasks based on the first user input; and
39.a
performing one or more tasks based on the first user input; and
39.bb
determining the first digital assistant response based on results of the performance of the one or more tasks.
39.b
determining the first digital assistant response based on results of the performance of the one or more tasks.
40
The first user device of claim 19, wherein providing the first digital assistant response and the context information associated with the first user input to the second user device comprises
40
The first user device of claim 35, wherein providing the first digital assistant response and the context information associated with the first user input to the second user device comprises
40.aa
transmitting the first digital assistant response and the context information associated with the first user input to the second user device.
40.a
transmitting the first digital assistant response and the context information associated with the first user input to the second user device.
41
The first user device of claim 40,
41
The first user device of claim 40,
41.aa
wherein the first user device receives the context information associated with the second user input from the second user device.
41.a
wherein the first user device receives the second digital assistant response and the context information associated with the second user input from the second user device.
42
The first user device of claim 40, wherein the one or more programs further include instructions for:
42
The first user device of claim 40, wherein the one or more programs further include instructions for:
42.aa
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: transmitting the first user input to the second user device using a first audio stream between the first user device and the second user device.
42.a
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: transmitting the first user input to the second user device using a first audio stream between the first user device and the second user device.
43
The first user device of claim 42,
43
The first user device of claim 42,
43.aa
wherein the first digital assistant response is transmitted to the second user device using the first audio stream.
43.a
wherein the first digital assistant response is transmitted to the second user device using the first audio stream.
44
The first user device of claim 40, wherein the one or more programs further include instructions for:
44
The first user device of claim 40, wherein the one or more programs further include instructions for:
44.aa
prior to obtaining the second digital assistant response: receiving the second user input; and
44.a
prior to receiving the second digital assistant response: receiving the second user input; and
44.bb
outputting the second user input.
44.b
outputting the second user input.
45
The first user device of claim 44,
45
The first user device of claim 44,
45.aa
wherein the first user device receives the second user input using a first audio stream between the first user device and the second user device.
45.a
wherein the first user device receives the second user input and the second digital assistant response using a first audio stream between the first user device and the second user device.
46
The first user device of claim 19,
46
The first user device of claim 35,
46.aa
wherein a digital assistant of the second user device uses the context information associated with the first user input to disambiguate the second user input.
46.a
wherein a digital assistant of the second user device uses the context information associated with the first user input to disambiguate the second user input.
47
The first user device of claim 19,
47
The first user device of claim 35,
47.aa
wherein the first digital assistant response and the second digital assistant response are also output by the second user device.
47.a
wherein the first digital assistant response and the second digital assistant response are also output by the second user device.
48
The first user device of claim 47,
48
The first user device of claim 47,
48.aa
wherein the second user device outputs the first digital assistant response prior to obtaining the second user input, and
48.a
wherein the second user device outputs the first digital assistant response prior to receiving the second user input, and
48.bb
wherein the second user device outputs the second digital assistant response after the first user device obtains the second digital assistant response.
48.b
wherein the second user device outputs the second digital assistant response after the first user device receives the second digital assistant response.
49
The first user device of claim 19, wherein the one or more programs further include instructions for:
49
The first user device of claim 35, wherein the one or more programs further include instructions for:
49.aa
prior to obtaining the second digital assistant response: receiving an indication that a digital assistant of the second user device has been invoked;
49.a
prior to receiving the second digital assistant response: receiving an indication that a digital assistant of the second user device has been invoked;
49.bb
receiving a third user input;
49.b
receiving a third user input;
49.cc
determining, based on the received indication, whether the digital assistant of the second user device was invoked prior to receiving the third user input; and
49.c
determining, based on the received indication, whether the digital assistant of the second user device was invoked prior to receiving the third user input; and
49.dd
in accordance with a determination that the digital assistant of the second user device was invoked prior to receiving the third user input, forgoing obtaining a third digital assistant response based on the third user input.
49.d
in accordance with a determination that the digital assistant of the second user device was invoked prior to receiving the third user input, forgoing obtaining a third digital assistant response based on the third user input.
50
The first user device of claim 19, wherein the one or more programs further include instructions for:
50
The first user device of claim 35, wherein the one or more programs further include instructions for:
50.aa
after outputting the second digital assistant response: receiving a fourth user input;
50.a
after outputting the second digital assistant response: receiving a fourth user input;
50.bb
in accordance with a determination that the fourth user input represents a user intent to provide a private digital assistant request: receiving a fifth user input;
50.b
in accordance with a determination that the fourth user input represents a user intent to provide a private digital assistant request: receiving a fifth user input;
50.cc
obtaining a fourth digital assistant response based on the fifth user input;
50.c
obtaining a fourth digital assistant response based on the fifth user input;
50.dd
forgoing providing the fourth digital assistant response and context information associated with the fifth user input to the second user device; and
50.d
forgoing providing the fourth digital assistant response and context information associated with the fifth user input to the second user device; and
50.ee
outputting the fourth digital assistant response.
50.e
outputting the fourth digital assistant response.
51
The first user device of claim 19, wherein the one or more programs further include instructions for:
51
The first user device of claim 35, wherein the one or more programs further include instructions for:
51.aa
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: determining whether the context information includes private information stored on the first user device; and
51.a
prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: determining whether the context information includes private information stored on the first user device; and
51.bb
in accordance with a determination that the context information includes private information stored on the first user device: removing at least a portion of the private information from the context information; and
51.b
in accordance with a determination that the context information includes private information stored on the first user device: removing at least a portion of the private information from the context information; and
51.cc
providing the first digital assistant response and a remaining context information associated with the first user input to the second user device.
51.c
providing the first digital assistant response and the remaining context information associated with the first user input to the second user device.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-16, 18-34, and 36-50 are rejected under 35 U.S.C. 103 as being unpatentable over Woolsey et al. (US20160373571A1) (herein "Woolsey").
Regarding claims 1, 18, and 19, Woolsey teaches [A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first user device with a display, cause the first user device to: - claim 1], [A method, comprising: - claim 18], and [A first user device, comprising: a display; one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, wherein the one or more programs include instructions for: - claim 19] (Woolsey, Par. 0087:” … a digital assistant in communications may be implemented. Computer system 3300 includes a processor 3305, a system memory 3311, … The drives and their associated computer-readable storage media provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for the computer system 3300. … In addition, as used herein, the term computer-readable storage media includes one or more instances of a media type (e.g., one or more magnetic disks, one or more CDs, etc.). For purposes of this specification and the claims, the phrase “computer-readable storage media” and variations thereof, does not include waves, signals, and/or other transitory and/or intangible communication media. …”, and Par. 0088:” … A monitor 3373 or other type of display device is also connected to the system bus 3314 via an interface, such as a video adapter 3375.”)
initiate/initiating a video communication session between the first user device and at least a second user device; (Woolsey, Par. 0036:” The devices 110 and communications network 115 may be configured to enable device-to-device communication. As shown in FIG. 2, such device-to-device communication 200 can include, for example, voice calls 205, messaging conversations 210, and video calls 215.”, and Par. 0071:” … the digital assistant can set up a conference bridge using voice or video and invite the meeting participants to join the bridge with the appropriate instructions. When the meeting is scheduled to start, the digital assistant can place a call into the conference bridge on the user's behalf.”, and Par. 0084:”FIG. 32 shows a flowchart of an illustrative method 3200 in which a digital assistant participates in a messaging session between local and remote parties. In step 3205 a messaging session is established between devices used by local and remote parties. The digital assistant sets up a listener so that during the messaging session the local user can invoke the digital assistant by saying a key word or phrase in step 3210. As the user speaks, the digital assistant listens in, as shown in step 3215.”)
receive/receiving a first user input; (Woolsey, Par. 0084:” … The digital assistant sets up a listener so that during the messaging session the local [first] user can invoke the digital assistant by saying [input] a key word or phrase in step 3210.“)
obtain/obtaining a first digital assistant response based on the first user input; (Woolsey, Par. 0084:” … As the user speaks, the digital assistant listens in, as shown in step 3215.”, and Par. 0085:” In step 3220, the digital assistant announces a request from the local [first] user using text messages that are sent to both the local [first] and remote [second] users which can be shown on the UI of the messaging app. In step 3225, the digital assistant determines an action it can take that is responsive to the user's speech.”)
provide/providing, to the second user device, the first digital assistant response and context information associated with the first user input; (Woolsey, Par. 0085:” … In typical implementations, applicable context is located and utilized when making the determination as is the case with the example of the voice and video calls described above. In step 3230, the digital assistant acknowledges the user's request and announces the action it is taking in response using text messages that are sent to both the local [first] and remote [second] users which can be shown on the UI of the messaging app.”).
output/outputting the first digital assistant response; (Woolsey, Par. 0086:” The digital assistant performs the action in step 3235.”)
Woolsey does not explicitly teach: receive context information associated with a second user input, wherein the second user input is received at the second user device; obtain a second digital assistant response, wherein the second digital assistant response is determined based on the second user input and the context information associated with the first user input; and output the second digital assistant response. However, Woolsey teaches in Figures 20, 21, 28, and 29 that, during a phone call, interactions occur between two digital assistants (for example) to schedule a meeting. A person skilled in the art would know that, in a typical meeting-scheduling exchange, the user receives an invitation for the meeting and is prompted to accept or decline it, thereby generating a second user input and a second digital assistant response. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add this feature because it allows users to communicate with each other in compatible ways.
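Note: For illustration only, the following minimal sketch models the claimed two-device exchange (the first device obtains a response, provides the response and context information to the second device, and the second device's assistant determines its own response from that context). This is hypothetical code, not drawn from Woolsey or the instant application; every name in it is invented.

```python
# Hypothetical sketch of the claimed exchange (claims 1, 18, and 19);
# class, field, and method names are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class ContextInfo:
    dialog_history: list[str] = field(default_factory=list)

@dataclass
class Device:
    name: str
    peer: "Device | None" = None
    peer_context: "ContextInfo | None" = None

    def handle_user_input(self, user_input: str) -> str:
        # Disambiguate using any context previously received from the
        # peer device (cf. claim 12), e.g., its dialog history.
        history = self.peer_context.dialog_history if self.peer_context else []
        response = f"[{self.name}] reply to {user_input!r} ({len(history)} peer turns)"
        # Provide the response and local context to the second device
        # (cf. claim elements 1.dd/1.ff), then output it (cf. 1.ee).
        if self.peer is not None:
            self.peer.peer_context = ContextInfo(dialog_history=[user_input, response])
        return response

# Usage: the second device's assistant reuses the first device's context.
first, second = Device("first"), Device("second")
first.peer, second.peer = second, first
print(first.handle_user_input("What's the weather in Palo Alto?"))
print(second.handle_user_input("How about this weekend there?"))
```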
Regarding claims 2, 20, and 36, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 1, 18, and 19, respectively.
Woolsey further teaches wherein the context information associated with the first user input comprises dialog history information associated with a user of the first user device during the video communication session. (Woolsey, Par. 0042:” FIG. 6 shows an illustrative taxonomy of functions 600 that may typically be supported by the digital assistant 350. Inputs to the digital assistant 350 typically can include user input 605 (in which such user input can include input from either or both the local and remote parties to a given communication), … data from internal sources 610 could include the current geolocation of the device 110 that is reported by a GPS (Global Positioning System) component on the device, or some other location-aware component. … to enable the digital assistant 350 to utilize contextual data 620 when it operates. Contextual data can include, for example, time/date, the user's location, language, …stored contacts (including, in some cases, links to a local user's or remote user's social graph such as those maintained by external social networking services), call history, messaging history, browsing history, device type, device capabilities, ...”).
Regarding claims 3, 21, and 37, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 1, 18 and 19 respectively.
Woolsey further teaches wherein the context information associated with the first user input comprises data corresponding to a current location of the first user device when the first user device received the first user input. (Woolsey, Par. 0042:” FIG. 6 shows an illustrative taxonomy of functions 600 that may typically be supported by the digital assistant 350. Inputs to the digital assistant 350 typically can include user input 605 (in which such user input can include input from either or both the local and remote parties to a given communication), … data from internal sources 610 could include the current geolocation of the device 110 that is reported by a GPS (Global Positioning System) component on the device, or some other location-aware component. … to enable the digital assistant 350 to utilize contextual data 620 when it operates. Contextual data can include, for example, time/date, the user's location, language, …stored contacts (including, in some cases, links to a local user's or remote user's social graph such as those maintained by external social networking services), call history, messaging history, browsing history, device type, device capabilities, ...”).
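Note: As a purely illustrative aside, the contextual data categories quoted above from Woolsey (Par. 0042) can be pictured as a simple record; the field names below are hypothetical and merely mirror the quoted categories.

```python
# Hypothetical record mirroring the contextual data categories quoted
# from Woolsey, Par. 0042 (time/date, location, language, histories,
# device type); field names are illustrative, not from the reference.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ContextualData:
    timestamp: datetime                    # time/date
    geolocation: tuple[float, float]       # from a GPS or location-aware component
    language: str
    call_history: list[str] = field(default_factory=list)
    messaging_history: list[str] = field(default_factory=list)
    device_type: str = "phone"

# Example: context captured when the first user input is received,
# including the device's current location (cf. claims 3, 21, and 37).
ctx = ContextualData(datetime.now(), (37.33, -122.03), "en-US")
print(ctx.geolocation)
```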
Regarding claims 4, 22, and 38, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 1, 18 and 19 respectively.
Woolsey further teaches wherein the first digital assistant response comprises at least one of: a natural-language expression corresponding to a task performed by a digital assistant of the first user device based on the first user input; and data retrieved by the digital assistant of the first user device based on the first user input. (Woolsey, Par. 0040:” As shown in FIG. 4, the digital assistant 350 can employ a natural language user interface (UI) 405 that can take voice commands 410 as inputs from the user 105.”)
Regarding claims 5, 23, and 39, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 1, 18 and 19 respectively.
Woolsey further teaches wherein obtaining the first digital assistant response comprises: performing one or more tasks based on the first user input; and (Woolsey, Par. 0046:” … the digital assistant performs tasks, provides information, interacts with the user, etc.”).
determining the first digital assistant response based on results of the performance of the one or more tasks. (Woolsey, Par. 0055:” … the local user requests that the digital assistant send the user's location information [response] to the remote user. The user initiates the digital assistant by using the key phrase (“Hey Cortana” in this example). A text string “Listening” is again displayed on the phone app's UI 1600 as indicated by reference numeral 1615 in FIG. 16 to visually confirm to the local user that the digital assistant is listening in on the call and is ready to work on tasks, provide information, and the like.”)
Regarding claims 6, 24, and 40, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 1, 18 and 19 respectively.
Woolsey further teaches wherein providing the first digital assistant response and the context information associated with the first user input to the second user device comprises transmitting the first digital assistant response and the context information associated with the first user input to the second user device. (Woolsey, Par. 0052:” After the local user initiates the digital assistant with the key phrase in this example, the user requests that the digital assistant send contact information for a restaurant to the remote user. The digital assistant responds at point 2 in the call at block 1210 in FIG. 12 by saying that the contact information will be sent to the remote user as a message. The generated audio in the digital assistant's response to the user's request can be heard by both the local and remote parties. The digital assistant can also refer to the remote user by name. Use of the name is an example of how the digital assistant can apply contextual data that is available to it so that its interactions with the parties are more natural and the overall user experience supported by the digital assistant is enhanced. That is, the digital assistant maintains an awareness of the call context and thus knows the identity of the remote user as well as other call parameters.”)
Regarding claims 7, 25, and 41, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 6, 24 and 40 respectively.
Woolsey further teaches wherein the first user device receives the context information associated with the second user input from the second user device. (Woolsey, Figure 20 discloses a phone call interaction occurring between two parties, e.g., to schedule a meeting. When scheduling a meeting, the recipient (second user) of the invitation is typically prompted to accept or reject it (second user input), thereby generating a second user input. Therefore, during an interaction between two parties, a user action (input) may be required; for example, when scheduling a meeting, the remote (second) user device may trigger a reply/input that includes the appropriate contextual information (such as the second user's calendar availability). In this example, the context information is mapped to the invitation to a given restaurant and the subsequent remote user's acceptance/confirmation along with the second user's availability. Therefore, the first user device receiving the second user input is an implicit feature when exchanging information between two callers for something like scheduling a meeting.)
Regarding claims 8, 26, and 42, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 6, 24 and 40 respectively.
Woolsey further teaches prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: transmit the first user input to the second user device using a first audio stream between the first user device and the second user device. (Woolsey, Par. 0048:” As shown in FIG. 10, the audio from the microphone 320 is split into two streams at a split point 1005 so that both the phone and video call apps 335 and 345 as well as the digital assistant 350 can receive audio signals from the user 105. Audio from the apps is combined with audio generated by the digital assistant to create a combined audio stream 1010 so that the remote user at the far end of the communication can hear what both the local user and the digital assistant say.”)
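For illustration only: a minimal sketch of the split/combine audio routing described in Par. 0048, where microphone frames are duplicated for both the call app and the digital assistant, and outgoing audio combines the user's and the assistant's audio. The naive byte concatenation stands in for a real PCM mixer.

```python
from typing import Iterable, Iterator, Tuple

def split(mic_frames: Iterable[bytes]) -> Iterator[Tuple[bytes, bytes]]:
    """Duplicate each mic frame: one copy for the call app, one for the assistant."""
    for frame in mic_frames:
        yield frame, frame

def combine(user_frame: bytes, assistant_frame: bytes) -> bytes:
    """Combine user and assistant audio into one outgoing stream (naive mix)."""
    return user_frame + assistant_frame

for app_frame, assistant_frame in split([b"hello"]):
    outgoing = combine(app_frame, b"[assistant audio]")
```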
Regarding claims 9, 27, and 43, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 8, 26 and 42 respectively.
Woolsey further teaches wherein the first digital assistant response is transmitted to the second user device using the first audio stream. (Woolsey, Par. 0048:” … Audio from the apps is combined with audio generated by the digital assistant to create a combined audio stream 1010 so that the remote user at the far end of the communication can hear what both the local user and the digital assistant say.”, and Par. 0052:” … The generated audio in the digital assistant's response to the user's request can be heard by both the local and remote parties.”)
Regarding claims 10, 28, and 44, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 6, 24 and 40 respectively.
Woolsey further teaches prior to obtaining the second digital assistant response: receive the second user input; and output the second user input. (Woolsey, Par. 0084:” … As the user speaks, the digital assistant listens in, as shown in step 3215.”, and Par. 0085:” In step 3220, the digital assistant announces a request from the local user using text messages that are sent to both the local and remote users which can be shown on the UI of the messaging app. In step 3225, the digital assistant determines an action it can take that is responsive to the user's speech.”, and Par. 0052:” … the user requests that the digital assistant send contact information for a restaurant to the remote user. The digital assistant responds at point 2 in the call at block 1210 in FIG. 12 by saying that the contact information will be sent to the remote user as a message.”) Note: In a multi-turn/high-interaction dialog, the implied and normal feature is for the digital assistant response/action to be provided in an orderly fashion, i.e., the response is provided after receiving the user input.
Regarding claims 11, 29, and 45, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 10, 28 and 44 respectively.
Woolsey further teaches wherein the first user device receives the second user input using a first audio stream between the first user device and the second user device. (Woolsey, Par. 0048:” … as shown in FIG. 11, incoming audio 1110 from the remote party at the far end is split into two streams at a split point 1105 so that both the digital assistant 350 and the phone and video call apps 335 and 345 can receive the incoming audio. It is noted that the terms “user” and “party” may be used interchangeably in the discussion that follows.”) Note: In a multi-turn/high-interaction dialog, the implied and normal feature is for the participants to exchange a plurality of inputs/audio streams among themselves.
Regarding claims 12, 30, and 46, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 1, 18 and 19 respectively.
Woolsey further teaches wherein a digital assistant of the second user device uses the context information associated with the first user input to disambiguate the second user input. (Woolsey, Par. 0078:” In step 3025, the digital assistant determines an action it can take that is responsive to the user's speech. In typical implementations, applicable context is located and utilized when making the determination. That is, the digital assistant can take different actions, in some cases, depending on context including call state. In addition, the digital assistant can be configured to ask questions of the user, for example, to clarify the request, or perform some follow-up interaction with the user as may be needed when completing a task.”; Figs. 20-21 show that context information can be used to disambiguate a user input when exchanging information with regard to scheduling a meeting.) Note: Usage of context information to disambiguate a user input is an implicit feature when exchanging information between participants to carry out a specific action.
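For illustration only: a minimal sketch, with hypothetical names, of how received context information could disambiguate a follow-up input such as “send it to him,” in the manner mapped for claims 12, 30, and 46.

```python
def disambiguate(utterance: str, context: dict) -> str:
    """Resolve ambiguous references in the second user input from shared context."""
    resolved = utterance
    if "him" in resolved and context.get("remote_user"):
        resolved = resolved.replace("him", context["remote_user"])
    if "there" in resolved and context.get("location"):
        resolved = resolved.replace("there", context["location"])
    return resolved

print(disambiguate("send it to him", {"remote_user": "Mark Howard"}))
# -> "send it to Mark Howard"
```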
Regarding claims 13, 31, and 47, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 1, 18 and 19 respectively.
Woolsey further teaches wherein the first digital assistant response and the second digital assistant response are also output by the second user device. (Woolsey, Par. 0078:” … Audio is injected into the stream [response] of the call so that the local and remote users can hear the digital assistant acknowledge the user's request and announce the action it is taking in response to the request (i.e., whether it be sharing contact information, taking a note, adding someone to the call, etc.) in step 3030.”) Note: acknowledging the user’s request by injecting into the audio stream reads on the first digital assistant response, and announcing the action is taking reads on the second digital assistant response.
Regarding claims 14, 32, and 48, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 13, 31 and 47 respectively.
Woolsey further teaches wherein the second user device outputs the first digital assistant response prior to obtaining the second user input, and (Woolsey, Par. 0059:” FIGS. 19 and 20 illustratively show how the digital assistant can be utilized in the course of a messaging conversation 210 between local and remote parties. UIs 1905 and 1910 are respectively exposed by messaging apps on the local and remote devices. Chains of text messages are shown in each UI with outgoing messages being shown on the right side and incoming messages from the other party being shown on the left side.”) Note: As depicted in Fig. 19, the local (first) user device outputs “are you coming to dinner with us?”, which appears on the left side of the second (remote) user device’s UI. Subsequently, the second user provides the second user input “yes, I’m really looking forward to it. Where is the restaurant?”
wherein the second user device outputs the second digital assistant response after the first user device obtains the second digital assistant response. (Woolsey, Fig. 19 depicts the second user device outputting the second digital assistant response after the first user device obtains it. The second user device obtains “Ron asked Cortana: send Asian Fusion contact information to Mark Howard,” whereas the first user device obtains the second user input “yes, I’m really looking forward to it. Where is the restaurant?” from the second user device.) Note: Generally, a response to a question is provided only after the question is received, which is considered an implicit feature of a multi-turn transaction.
Regarding claims 15, 33, and 49, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 1, 18 and 19 respectively.
Woolsey further teaches prior to obtaining the second digital assistant response: receive an indication that a digital assistant of the second user device has been invoked; receive a third user input; (Woolsey, Par. 0060:” At some point during the exchange of text messages, the local user launches the digital assistant by saying the key phrase “Hey Cortana” as indicated by reference numeral 1915. The local user then verbally requests the digital assistant to send contact information to the remote user.”) Note: Using a voice input to invoke a digital assistant and subsequently receiving a response/action from it is known to a PHOSITA. Here, the invocation phrase “Hey Cortana” awakens/invokes the given digital assistant, which subsequently passes the requested information to the other device, thereby indicating that the digital assistant has been invoked.
determine, based on the received indication, whether the digital assistant of the second user device was invoked prior to receiving the third user input; and (Woolsey, Par. 0060:” At some point during the exchange of text messages, the local user launches the digital assistant by saying the key phrase “Hey Cortana” as indicated by reference numeral 1915. The local user then verbally requests the digital assistant to send contact information to the remote user.”) Note: Using a voice input to invoke a digital assistant and subsequently receiving a response/action from it is known to a PHOSITA. When Ron Smith receives the message from Mark Howard shown in Fig. 19, “the place is called ‘Asian Fusion.’ I’ll send location information to you,” it is an indication that Mark Howard is about to invoke, or has already invoked, the digital assistant on his side, where he asks the digital assistant to provide the address of the restaurant.
in accordance with a determination that the digital assistant of the second user device was invoked prior to receiving the third user input, forgo obtaining a third digital assistant response based on the third user input. (Woolsey, In the context of Fig. 19, the required information is the “Asian Fusion” address, which is provided by the digital assistant from Mark Howard’s side; subsequently, Ron Smith, having received the needed information, does not require obtaining a third digital assistant response. When no requirement exists to invoke a digital assistant, a PHOSITA knows to forgo obtaining such a response.)
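For illustration only: the conditional logic mapped for claims 15, 33, and 49 — forgoing a local (third) response when the second device's assistant was invoked first — might look like this; the timestamp parameters are hypothetical.

```python
from typing import Optional

def handle_third_input(third_input: str,
                       remote_invoked_at: float,
                       input_received_at: float) -> Optional[str]:
    """Return a local response, or None when the local device forgoes one."""
    if remote_invoked_at < input_received_at:
        # The second device's assistant was invoked prior to the third
        # user input, so forgo obtaining a third digital assistant response.
        return None
    return f"local assistant response to {third_input!r}"
```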
Regarding claims 16, 34, and 50, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 1, 18 and 19 respectively.
Woolsey further teaches after outputting the second digital assistant response: receive a fourth user input; in accordance with a determination that the fourth user input represents a user intent to provide a private digital assistant request: receive a fifth user input; obtain a fourth digital assistant response based on the fifth user input; forgo providing the fourth digital assistant response and context information associated with the fifth user input to the second user device; and output the fourth digital assistant response. (Woolsey does not explicitly teach that a third, fourth, and fifth user input can be used to determine a digital assistant response. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add more user inputs as needed because doing so allows a plurality of users to communicate with each other in compatible ways and is a matter of design choice.)
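For illustration only: a sketch of the private-request branch mapped for claims 16, 34, and 50, in which the response to a private request is output locally but withheld from the second device; the keyword-based intent check is purely hypothetical.

```python
from typing import Optional, Tuple

def handle_follow_up(fourth_input: str,
                     fifth_input: str) -> Tuple[str, Optional[str]]:
    """Return (local_output, payload_shared_with_second_device)."""
    is_private = "private" in fourth_input.lower()  # hypothetical intent check
    response = f"assistant response to {fifth_input!r}"
    if is_private:
        # Forgo providing the response and its context to the second device.
        return response, None
    return response, response
```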
Claims 17, 35 and 51 are rejected under 35 U.S.C. 103 as being unpatentable over Woolsey, and further in view of Lovitt et al. (US20180366118A1) (herein "Lovitt").
Regarding claims 17, 35, and 51, Woolsey teaches the non-transitory computer-readable storage medium, the method, and the device of claims 1, 18 and 19 respectively.
Woolsey does not explicitly teach that private or confidential data can be protected when exchanged (prior to providing the first digital assistant response and the context information associated with the first user input to the second user device: determine whether the context information includes private information stored on the first user device; and in accordance with a determination that the context information includes private information stored on the first user device: remove at least a portion of the private information from the context information; and provide the first digital assistant response and a remaining context information associated with the first user input to the second user device). However, Lovitt teaches a system that protects private information in exchanged data (Lovitt, Par. 0056).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement data security within the system, as doing so enhances the security of the digital assistant.
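For illustration only: combining Woolsey's shared context with Lovitt's protection of private information might amount to redacting flagged fields before the context is provided to the second device; the field names here are invented for this example.

```python
PRIVATE_FIELDS = {"stored_contacts", "call_history", "messaging_history"}

def redact_context(context: dict) -> dict:
    """Strip fields flagged as private; share only the remaining context."""
    return {k: v for k, v in context.items() if k not in PRIVATE_FIELDS}

shared = redact_context({"location": "home", "call_history": ["..."]})
assert shared == {"location": "home"}
```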
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Hansen et al. (US11038934) show a system, method, and medium for a digital assistant that can send context information to other devices and cause a second device to perform tasks.
Examiner's Note: Examiner has cited particular columns and line numbers and/or paragraph numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the Examiner.
In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARIOUSH AGAHI whose telephone number is (408)918-7689. The examiner can normally be reached Monday - Thursday and alternate Fridays, 7:30-4:30 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached on 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
DARIOUSH AGAHI, P.E.
Primary Examiner
/DARIOUSH AGAHI/Primary Examiner, Art Unit 2656