Prosecution Insights
Last updated: April 18, 2026
Application No. 18/597,733

DIGITAL ASSISTANT USER INTERFACES AND RESPONSE MODES

Status: Non-Final Office Action (§103), OA Round 1
Filed: Mar 06, 2024
Examiner: TRAN, TUYETLIEN T
Art Unit: 2179
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.

Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 3y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (429 granted / 637 resolved; +12.3% vs TC avg; above average)
Interview Lift: +33.0% (strong; among resolved cases with interview)
Avg Prosecution: 3y 10m; 22 applications currently pending
Total Applications: 659 (across all art units)

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 637 resolved cases.
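The headline metrics above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using only the counts reported on this page (the Tech Center average is back-derived from the reported +12.3% delta, not taken from USPTO data):

```python
# Career allowance rate and implied Tech Center average,
# recomputed from the counts shown on this page.
granted = 429           # applications allowed by this examiner
resolved = 637          # resolved dispositions (granted + abandoned)

allow_rate = granted / resolved * 100    # career allow rate, in percent
delta_vs_tc = 12.3                       # reported delta vs TC avg (pct. points)
tc_avg = allow_rate - delta_vs_tc        # implied TC average (an estimate)

print(f"Career allow rate: {allow_rate:.1f}%")   # ≈ 67.3%, shown as 67%
print(f"Implied TC average: {tc_avg:.1f}%")      # ≈ 55.0%
```

The same subtraction explains the statute-specific deltas: each statute's allowance-after-rejection rate is compared against the estimated Tech Center average for that statute.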

Office Action (Non-Final, §103)
DETAILED ACTION

This action is responsive to the following communication: the claims filed on 03/06/2024. This action is made non-final. Claims 1-29 are pending in the case. Claims 1, 28, and 29 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in the Kingdom of Denmark on 08/24/2020. It is noted, however, that applicant has not filed a certified copy of the DKPA202070547 and DKPA202070548 applications as required by 37 CFR 1.55. It is noted that the notices of unsuccessful retrieval of the documents were mailed out on 04/10/2024.

Information Disclosure Statement

The information disclosure statement filed 03/15/2024 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 28, and 29 are rejected under 35 U.S.C.
103 as being unpatentable over Kudurshian et al. (US 2017/0358305 A1; hereinafter as Kudurshian) in view of Wolverton et al. (US 2014/0136187 A1; hereafter Wolverton). As to claim 1, Kudurshian discloses: An electronic device (see Fig. 2A and ¶ 0049), comprising: a display (see Fig. 2A and ¶ 0055); one or more processors (see Fig. 2A and ¶ 0061); a memory (see Fig. 2A and ¶ 0059); and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors (see Fig. 2A and ¶ 0060), the one or more programs including instructions for: receiving a natural language input (see Fig. 8A and ¶ 0255; the digital assistant is instantiated in response to receiving a pre-determined phrase; For example, the digital assistant is invoked in response to receiving a phrase such as “Hey, Assistant,” “Wake up, Assistant,” “Listen up, Assistant,” “OK, Assistant,” or the like. ¶ 0256; speech inputs); initiating a digital assistant (see Fig. 8A and ¶ 0260; In response to the speech input, the user device instantiates the digital assistant represented by affordance 840 or 841 such that the digital assistant is actively monitoring subsequent speech inputs); in accordance with initiating the digital assistant, obtaining a response package responsive to the natural language input (see ¶ 0257; the digital assistant identifies context information associated with the user device. ¶ 0092; the digital assistant can also use the contextual information to determine how to prepare and deliver outputs to the user. Contextual information can be referred to as context data. ¶ 0045-0046; generating output responses to the user in an audible (e.g., speech) and/or visual form; In addition to providing verbal responses and taking programmed actions, the digital assistant can also provide responses in other visual or audio forms, e.g., as text, alerts, music, videos, animations, etc). 
Kudurshian does not appear to teach, but Wolverton is relied upon for teaching, the limitations: after receiving the natural language input, selecting, based on context information associated with the electronic device, a first response mode of the digital assistant from a plurality of digital assistant response modes (¶ 0019; a vehicle personal assistant to receive human-generated conversational spoken natural language input. ¶ 0020; The vehicle personal assistant may determine whether to search another data source based on the current vehicle-related context. ¶ 0021, 0073, 0074, 0114; The vehicle personal assistant may select a presentation mode from a plurality of possible presentation modes based on the current vehicle-related context and present the reply using the selected presentation mode. The plurality of possible presentation modes may include machine-generated conversational spoken natural language, text, recorded audio, recorded video, and/or digital images); and in response to selecting the first response mode, presenting, by the digital assistant, the response package according to the first response mode (¶ 0021, 0122; The vehicle personal assistant may select a presentation mode from a plurality of possible presentation modes based on the current vehicle-related context and present the reply using the selected presentation mode). Both references are directed to digital assistant user interfaces; therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the digital assistant user interface disclosed in Kudurshian to include the specific feature of selecting a response mode based on context information, as taught by Wolverton, such that the user can interact with the digital assistant to get the best response in an appropriate format, as claimed.
One of ordinary skill in the art would have been motivated to make such a combination because of the overlapping subject matter (i.e., a digital assistant) and the advantage described in Wolverton of allowing the user to view the response in the format best suited to the device context, thus enhancing the user experience with the user interface (Wolverton: see ¶ 0021).

As to claim 28, claim 28 is directed to a non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device with a display, cause the electronic device to perform similar features as recited in independent claim 1; it is therefore rejected under a similar rationale (Kudurshian: see Fig. 2A and ¶ 0059-0060).

As to claim 29, claim 29 is directed to a method for operating a digital assistant, the method comprising steps for performing similar features as claimed in claim 1; it is therefore rejected under a similar rationale (Kudurshian: see ¶ 0008-0009).

Claims 2-5, 13, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Kudurshian et al. (US 2017/0358305 A1; hereinafter Kudurshian) in view of Wolverton et al. (US 2014/0136187 A1; hereinafter Wolverton) further in view of VAN OS et al. (US 2015/0382047 A1; hereinafter VAN OS).

As to claim 2, the rejection of claim 1 is incorporated. Kudurshian and Wolverton further teach wherein the response package includes: first text associated with a digital assistant response affordance (see Fig. 8C and ¶ 0265; the text in 822); and second text associated with the digital assistant response affordance (see Fig. 8B and ¶ 0264; text corresponding to speech input 854). Additionally, VAN OS is relied upon for teaching the limitations of claim 2. Specifically, VAN OS teaches a virtual assistant user interface (see Fig. 4E, 5 and ¶ 0099) comprising a response package that includes: first text associated with a digital assistant response affordance (see Fig. 5-6B and ¶ 0102, 0106; selection of text link 514 can provide additional detailed information about the media content or other virtual assistant query result); and second text associated with the digital assistant response affordance (see Fig. 5 and ¶ 0101-0102; selectable text links 514). The references are each directed to a digital assistant user interface; therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the digital assistant user interface with a text response affordance disclosed in Kudurshian/Wolverton to include the specific feature of expanding the response affordance to display a detailed response, as taught by VAN OS, such that the user can interact with the digital assistant to get a more detailed response, as claimed. One of ordinary skill in the art would have been motivated to make such a combination because of the overlapping subject matter (i.e., a digital assistant) and the advantage described in VAN OS of allowing the user to get a relevant output from the virtual assistant, thus enhancing the user experience with the virtual assistant user interface (VAN OS: see ¶ 0004-0005).

As to claim 3, the rejection of claim 2 is incorporated. Kudurshian/Wolverton/VAN OS further teach wherein the second text has fewer words than the first text (VAN OS: see Fig. 5-6B and ¶ 0101-0102, 0106). Combining Kudurshian/Wolverton/VAN OS would meet the claimed limitations for the same reasons as set forth in claim 2.

As to claim 4, the rejection of claim 2 is incorporated.
Kudurshian/Wolverton/VAN OS further teach: wherein selecting the first response mode includes determining whether to: display the second text without providing audio output representing the second text; or provide the audio output representing the second text without displaying the second text (Wolverton: ¶ 0021, 0073, 0074, 0114; The vehicle personal assistant may select a presentation mode from a plurality of possible presentation modes based on the current vehicle-related context and present the reply using the selected presentation mode. The plurality of possible presentation modes may include machine-generated conversational spoken natural language [~audio output], text [~visual output], recorded audio, recorded video, and/or digital images). Combining Kudurshian/Wolverton/VAN OS would meet the claimed limitations for the same reasons as set forth in claim 1. As to claim 5, the rejection of claim 2 is incorporated. Kudurshian/Wolverton/VAN OS further teach: wherein selecting the first response mode includes determining whether to provide audio output representing the first text (Wolverton: ¶ 0021, 0073, 0074, 0114; The vehicle personal assistant may select a presentation mode from a plurality of possible presentation modes based on the current vehicle-related context and present the reply using the selected presentation mode. The plurality of possible presentation modes may include machine-generated conversational spoken natural language [~audio output representing the first text]). Combining Kudurshian/Wolverton/VAN OS would meet the claimed limitations for the same reasons as set forth in claim 1. As to claim 13, the rejection of claim 2 is incorporated. 
Kudurshian/Wolverton/VAN OS further teach: wherein: the first response mode is a mixed response mode; and presenting, by the digital assistant, the response package according to the first response mode includes displaying the digital assistant response affordance and providing second audio output representing the second text without displaying the second text (Kudurshian: ¶ 0045-0046; generating output responses to the user in an audible (e.g., speech) and/or visual form; In addition to providing verbal responses and taking programmed actions, the digital assistant can also provide responses in other visual or audio forms, e.g., as text, alerts, music, videos, animations, etc). As to claim 20, the rejection of claim 2 is incorporated. Kudurshian/Wolverton/VAN OS further teach: wherein: the first response mode is a voice response mode; and presenting, by the digital assistant, the response package according to the first response mode includes providing audio output representing the first text (Kudurshian: ¶ 0045-0046; generating output responses to the user in an audible (e.g., speech) and/or visual form; In addition to providing verbal responses and taking programmed actions, the digital assistant can also provide responses in other visual or audio forms, e.g., as text, alerts, music, videos, animations, etc). As to claim 21, the rejection of claim 20 is incorporated. 
Kudurshian/Wolverton/VAN OS further teach: wherein: the context information includes a determination that the electronic device is in a vehicle; and selecting the first response mode is based on the determination that the electronic device is in the vehicle (Wolverton: ¶ 0055; certain inputs 102 may have a different meaning depending on whether the vehicle 104 is turned on or off, whether the user is situated in the driver's seat or a passenger seat, whether the user is inside the vehicle, standing outside the vehicle, or simply accessing the vehicle personal assistant 112 from a computer located inside a home or office, or even whether the user is driving the vehicle 104 at relatively high speed on a freeway as opposed to being stuck in traffic or on a country road. For instance, if the vehicle personal assistant 112 detects that it is connected to the vehicle 104 and the vehicle 104 is powered on, the reasoner 136 can obtain specific information about the vehicle 104 (e.g., the particular make, model, and options that are associated with the vehicle's VIN or Vehicle Identification Number). Such information can be obtained from the vehicle manufacturer and stored in a vehicle-specific configuration 122 portion of the vehicle context model 116. ¶ 0021, 0073, 0074, 0114; The vehicle personal assistant may select a presentation mode from a plurality of possible presentation modes based on the current vehicle-related context and present the reply using the selected presentation mode. The plurality of possible presentation modes may include machine-generated conversational spoken natural language, text, recorded audio, recorded video, and/or digital images). Combining Kudurshian/Wolverton/VAN OS would meet the claimed limitations for the same reasons as set forth in claim 1. Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Kudurshian et al. (US 2017/0358305 A1; hereinafter as Kudurshian) in view of Wolverton et al. 
(US 2014/0136187 A1; hereinafter Wolverton) further in view of VAN OS et al. (US 2015/0382047 A1; hereinafter VAN OS) and Cohen et al. (US 2020/0310742 A1; hereinafter Cohen).

As to claim 6, the rejection of claim 2 is incorporated. Kudurshian/Wolverton/VAN OS do not expressly teach, but Cohen is relied upon for teaching, the limitations of claim 6: wherein: the first response mode is a silent response mode (Cohen: see ¶ 0017; whispering or muted volume level); and presenting, by the digital assistant, the response package according to the first response mode includes: displaying the digital assistant response affordance; and displaying the second text without providing second audio output representing the second text (Cohen: see ¶ 0017; the output volume level may be a muted volume level, and other output modalities may be utilized instead, such as text output instead of audio output. ¶ 0091; the determined output volume level may be a mute level, indicating that no audio output should be provided. Additionally or alternatively, other output modalities may be utilized together with or instead of the audio modality, such as text-based output, graphical output, haptic output, or the like. In some exemplary embodiments, the Output Volume Level Determinator of FIG. 2 may be utilized to determine the output volume level based on the interaction context). The references are each directed to a digital assistant user interface; therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the digital assistant user interface with a response affordance disclosed in Kudurshian/Wolverton/VAN OS to include the specific feature of determining the output format based on context, as taught by Cohen, such that the user can interact with the digital assistant to get an appropriately formatted response, as claimed.
One of ordinary skill in the art would have been motivated to make such a combination because of the overlapping subject matter (i.e., a digital assistant) and the advantage described in Cohen of allowing the user to receive the response in a format that suits his/her context, so that it does not disturb other people and/or protects the user's privacy (Cohen: see ¶ 0017).

As to claim 7, the rejection of claim 6 is incorporated. Kudurshian/Wolverton/VAN OS/Cohen further teach: wherein: the context information includes a digital assistant voice feedback setting; and selecting the first response mode is based on determining that the digital assistant voice feedback setting indicates to not provide voice feedback (Cohen: see ¶ 0019; the feedback of the user. ¶ 0025, 0038, 0070; the determination of the output volume level may improve over time, using user feedback). Combining Kudurshian/Wolverton/VAN OS/Cohen would meet the claimed limitations for the same reasons as set forth in claim 6.

As to claim 8, the rejection of claim 6 is incorporated. Kudurshian/Wolverton/VAN OS/Cohen further teach: wherein: the context information includes a detection of physical contact of the electronic device to initiate the digital assistant (Kudurshian: see ¶ 0090; the digital assistant client module 229 can be capable of accepting voice input (e.g., speech input), text input, touch input, and/or gestural input through various user interfaces (e.g., microphone 213, accelerometer(s) 268, touch-sensitive display system 212, optical sensor(s) 264, other input control devices 216, etc.) of portable multifunction device 200); and selecting the first response mode is based on the detection of the physical contact (Kudurshian: see ¶ 0045-0046; provide responses in visual or audio forms.
Wolverton: ¶ 0021, 0073, 0074, 0114; The vehicle personal assistant may select a presentation mode from a plurality of possible presentation modes based on the current vehicle-related context and present the reply using the selected presentation mode. The plurality of possible presentation modes may include machine-generated conversational spoken natural language, text, recorded audio, recorded video, and/or digital images). Combining Kudurshian/Wolverton/VAN OS/Cohen would meet the claimed limitations for the same reasons as set forth in claim 1. As to claim 14, the rejection of claim 13 is incorporated. Kudurshian/Wolverton/VAN OS/Cohen further teach: wherein: the context information includes a digital assistant voice feedback setting; and selecting the first response mode is based on determining that the digital assistant voice feedback setting indicates to provide voice feedback (Cohen: see ¶ 0019; the feedback of the user. ¶ 0025, 0038, 0070; the determination of the output volume level may improve over time, using user feedback). Combining Kudurshian/Wolverton/VAN OS/Cohen would meet the claimed limitations for the same reasons as set forth in claim 6. As to claim 15, the rejection of claim 13 is incorporated. Kudurshian/Wolverton/VAN OS/Cohen further teach: wherein: the context information includes a detection of physical contact of the electronic device to initiate the digital assistant; (Kudurshian: see ¶ 0090; the digital assistant client module 229 can be capable of accepting voice input (e.g., speech input), text input, touch input, and/or gestural input through various user interfaces (e.g., microphone 213, accelerometer(s) 268, touch-sensitive display system 212, optical sensor(s) 264, other input control devices 216, etc.) of portable multifunction device 200); and selecting the first response mode is based on the detection of the physical contact (Kudurshian: see ¶ 0045-0046; provide responses in visual or audio forms. 
Wolverton: ¶ 0021, 0073, 0074, 0114; The vehicle personal assistant may select a presentation mode from a plurality of possible presentation modes based on the current vehicle-related context and present the reply using the selected presentation mode. The plurality of possible presentation modes may include machine-generated conversational spoken natural language, text, recorded audio, recorded video, and/or digital images). Combining Kudurshian/Wolverton/VAN OS/Cohen would meet the claimed limitations for the same reasons as set forth in claim 1. Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Kudurshian et al. (US 2017/0358305 A1; hereinafter as Kudurshian) in view of Wolverton et al. (US 2014/0136187 A1; hereafter Wolverton) further in view of VAN OS et al. (US 2015/0382047 A1; hereinafter VAN OS), Cohen et al. (US 2020/0310742 A1; hereinafter Cohen), and Van Os et al. (US 10284812 B1; hereinafter Van Os_II). As to claim 9, the rejection of claim 6 is incorporated. Kudurshian/Wolverton/VAN OS/Cohen do not expressly teach, but Van Os_II is relied upon for teaching the limitations of claim 9: wherein: the context information includes whether the electronic device is in a locked state; and selecting the first response mode is based on determining that the electronic device is not in the locked state (Van Os_II: see Col. 56, lines 6-49, Col. 47, lines 36-67; In some embodiments, a relevant context of the device is whether the device is in a locked or unlocked state. In some embodiments, when the device is locked, an alert of a first type (e.g., an audio or haptic output) is issued; when the device is unlocked (e.g., when a user is actively operating/interacting with the device), an alert is not issued (e.g., suppressed)). 
The references are each directed to a digital assistant user interface; therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the digital assistant user interface with a response affordance disclosed in Kudurshian/Wolverton/VAN OS/Cohen to include the specific feature of determining the output format based on context data that includes whether the device is in a locked state, as taught by Van Os_II, such that the user can interact with the digital assistant to get an appropriate response, as claimed. One of ordinary skill in the art would have been motivated to make such a combination because of the overlapping subject matter (i.e., a digital assistant) and the advantage described in Van Os_II of allowing the user to receive feedback even when the device is in a locked state, thus enhancing the user experience with the mobile device (Van Os_II: see Col. 56, lines 6-49).

As to claim 10, the rejection of claim 6 is incorporated. Kudurshian/Wolverton/VAN OS/Cohen/Van Os_II further teach: wherein: the context information includes whether a display of the electronic device was displaying before initiating the digital assistant; and selecting the first response mode is based on determining that the display was displaying before initiating the digital assistant (Van Os_II: see Col. 56, lines 6-49, Col. 47, lines 36-67; In some embodiments, a relevant context of the device is whether the device is in a locked or unlocked state. In some embodiments, when the device is locked, an alert of a first type (e.g., an audio or haptic output) is issued; when the device is unlocked (e.g., when a user is actively operating/interacting with the device), an alert is not issued (e.g., suppressed) [when the device is in a locked state, the display was not displaying before initiating the assistant]).
Combining Kudurshian/Wolverton/VAN OS/Cohen/Van Os_II would meet the claimed limitations for the same reasons as set forth in claim 9.

Claims 4-6, 8, 11, 12, 15, 18-19, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Kudurshian et al. (US 2017/0358305 A1; hereinafter Kudurshian) in view of Wolverton et al. (US 2014/0136187 A1; hereinafter Wolverton) further in view of VAN OS et al. (US 2015/0382047 A1; hereinafter VAN OS) and Min et al. (US 2018/0286392 A1; hereinafter Min).

As to claim 4, the rejection of claim 2 is incorporated. Min is additionally relied upon for teaching the limitations of claim 4. Specifically, Min teaches wherein selecting the first response mode includes determining whether to: display the second text without providing audio output representing the second text (see Figs. 3, 4, 6 and ¶ 0055-0056; visual output mode; i.e., in the gesture input mode, the user may be able to select "gesture/display only"); or provide the audio output representing the second text without displaying the second text (Min: see Figs. 3, 4, 6 and ¶ 0049; responses to subsequently received voice commands/queries are presented audibly, e.g., on the speaker(s) 302 shown in FIG. 3). The references are each directed to a digital assistant user interface; therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the digital assistant user interface with a text response affordance disclosed in Kudurshian/Wolverton/VAN OS to include the specific feature of presenting the responses in a selected format, as taught by Min, such that the user views the response in a desired format, as claimed. One of ordinary skill in the art would have been motivated to make such a combination because of the overlapping subject matter (i.e., a digital assistant) and the advantage described in Min of making it easier for the user to interact with the virtual assistant (Min: see ¶ 0002).
As to claim 5, the rejection of claim 2 is incorporated. Kudurshian/Wolverton/VAN OS/Min further teach: wherein selecting the first response mode includes determining whether to provide audio output representing the first text (Min: see Figs. 3, 4, 6 and ¶ 0049, 0055-0056). Combining Kudurshian/Wolverton/VAN OS/Min would meet the claimed limitations for the same reasons as set forth in claim 4.

As to claim 6, the rejection of claim 2 is incorporated. Claim 6 is additionally rejected based on the combination of Kudurshian/Wolverton/VAN OS/Min, which teaches: wherein: the first response mode is a silent response mode (Min: see Figs. 3, 4, 6, 8 and ¶ 0055-0056; in the gesture input mode, for example, the user may be able to select "gesture/display only" as one selection; visual output mode [~silent response mode]); and presenting, by the digital assistant, the response package according to the first response mode includes: displaying the digital assistant response affordance; and displaying the second text without providing second audio output representing the second text (Min: see ¶ 0055-0056; the user may instead decide to select visual output mode, in which case responses to gestures are presented visually, such as on any of the displays). Combining Kudurshian/Wolverton/VAN OS/Min would meet the claimed limitations for the same reasons as set forth in claim 4.

As to claim 8, the rejection of claim 6 is incorporated. Claim 8 is additionally rejected based on the combination of Kudurshian/Wolverton/VAN OS/Min, which teaches: wherein: the context information includes a detection of physical contact of the electronic device to initiate the digital assistant (Min: see Fig. 8 and ¶ 0069; initiating gesture input mode with a button includes touch 808 of a button); and selecting the first response mode is based on the detection of the physical contact (Min: see Fig. 8 and ¶ 0069, 0059-0060; initiate gesture input mode).
Combining Kudurshian/Wolverton/VAN OS/Min would meet the claimed limitations for the same reasons as set forth in claim 4.

As to claims 11 and 18, the rejection of claim 6 (13) is incorporated. Kudurshian/Wolverton/VAN OS/Min further teach: wherein: the context information includes a detection of a touch performed on the electronic device within a predetermined duration before selecting the first response mode; and selecting the first response mode is based on the detection of the touch (Min: see Fig. 8 and ¶ 0069, 0059-0060; initiating gesture input mode with a button includes touch 808 of a button). Combining Kudurshian/Wolverton/VAN OS/Min would meet the claimed limitations for the same reasons as set forth in claim 4.

As to claims 12 and 19, the rejection of claim 6 (13) is incorporated. Kudurshian/Wolverton/VAN OS/Min further teach: wherein: the context information includes a detection of a predetermined gesture of the electronic device within a second predetermined duration before selecting the first response mode; and selecting the first response mode is based on the detection of the predetermined gesture (Min: see Fig. 8 and ¶ 0008, 0069, 0059-0060; initiate gesture input mode with detected gesture). Combining Kudurshian/Wolverton/VAN OS/Min would meet the claimed limitations for the same reasons as set forth in claim 4.

As to claim 15, the rejection of claim 13 is incorporated. Claim 15 is additionally rejected based on the combination of Kudurshian/Wolverton/VAN OS/Min, which teaches: wherein: the context information includes a detection of physical contact of the electronic device to initiate the digital assistant (Min: see Fig. 8 and ¶ 0069; initiating gesture input mode with a button includes touch 808 of a button); and selecting the first response mode is based on the detection of the physical contact (Min: see Fig. 8 and ¶ 0069, 0059-0060; initiate gesture input mode).
Combining Kudurshian/Wolverton/VAN OS/Min would meet the claimed limitations for the same reasons as set forth in claim 4.

As to claim 23, the rejection of claim 20 is incorporated. Kudurshian/Wolverton/VAN OS/Min further teach: wherein: the context information includes a detection of a voice input to initiate the digital assistant; and selecting the first response mode is based on the detection of the voice input (Min: see Figs. 3, 4, 6 and ¶ 0048-0049, 0052; responses to subsequently received voice commands/queries are presented audibly, e.g., on the speaker(s) 302 shown in FIG. 3; audio output). Combining Kudurshian/Wolverton/VAN OS/Min would meet the claimed limitations for the same reasons as set forth in claim 4.

Claims 16-17 and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Kudurshian et al. (US 2017/0358305 A1; hereinafter Kudurshian) in view of Wolverton et al. (US 2014/0136187 A1; hereinafter Wolverton) further in view of VAN OS et al. (US 2015/0382047 A1; hereinafter VAN OS) and Van Os et al. (US 10284812 B1; hereinafter Van Os_II).

As to claims 16 and 24, the rejection of claim 13 (20) is incorporated. Kudurshian/Wolverton/VAN OS do not expressly teach, but Van Os_II is relied upon for teaching, the limitations of claim 16: wherein: the context information includes whether the electronic device is in a locked state; and selecting the first response mode is based on determining that the electronic device is not in the locked state (Van Os_II: see Col. 56, lines 6-49, Col. 47, lines 36-67; In some embodiments, a relevant context of the device is whether the device is in a locked or unlocked state.
In some embodiments, when the device is locked, an alert of a first type (e.g., an audio or haptic output) is issued; when the device is unlocked (e.g., when a user is actively operating/interacting with the device), an alert is not issued (e.g., suppressed)). The references are each directed to a digital assistant user interface; therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the digital assistant user interface with a response affordance disclosed in Kudurshian/Wolverton/VAN OS to include the specific feature of determining the output format based on context data that includes whether the device is in a locked state, as taught by Van Os_II, such that the user can interact with the digital assistant to get an appropriate response as claimed. One of ordinary skill in the art would have been motivated to make such a combination because of the overlapping subject matter (i.e., digital assistant) and the advantage described in Van Os_II of allowing the user to receive feedback even when the device is in a locked state, thus enhancing the user experience with the mobile device (Van Os_II: see Col. 56, lines 6-49).

As to claims 17 and 25, the rejection of claim 13 (20) is incorporated. Kudurshian/Wolverton/VAN OS/Van Os_II further teach: wherein: the context information includes whether a display of the electronic device was displaying before initiating the digital assistant; and selecting the first response mode is based on determining that the display was displaying before initiating the digital assistant (Van Os_II: see Col. 56, lines 6-49, Col. 47, lines 36-67; in some embodiments, a relevant context of the device is whether the device is in a locked or unlocked state.
In some embodiments, when the device is locked, an alert of a first type (e.g., an audio or haptic output) is issued; when the device is unlocked (e.g., when a user is actively operating/interacting with the device), an alert is not issued (e.g., suppressed) [when the device is in a locked state, the display was not displaying before initiating the assistant]). Combining Kudurshian/Wolverton/VAN OS/Van Os_II would meet the claimed limitations for the same reasons as set forth in claim 16.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Kudurshian et al. (US 2017/0358305 A1; hereinafter Kudurshian) in view of Wolverton et al. (US 2014/0136187 A1; hereinafter Wolverton), further in view of VAN OS et al. (US 2015/0382047 A1; hereinafter VAN OS) and Rakshit et al. (US 2020/0098358 A1; hereinafter Rakshit).

As to claim 22, the rejection of claim 20 is incorporated. Kudurshian/Wolverton/VAN OS do not appear to teach, but Rakshit is relied upon for teaching, the limitations of claim 22: wherein: the context information includes a determination that the electronic device is coupled to an external audio output device; and selecting the first response mode is based on the determination that the electronic device is coupled to the external audio output device (Rakshit: see ¶ 0075, 0015; the response sensitivity determination module 310 determines, based on the audio output settings of the assistant device 210, whether the response will be played over internal speakers, through headphones, or through external wireless/network speakers in a public or private environment (e.g., a car), etc.
¶ 0066; output the response by text or audio only). The references are each directed to a digital assistant user interface; therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the digital assistant user interface with a text response affordance disclosed in Kudurshian/Wolverton/VAN OS to include the specific feature of presenting the responses in an audio format as taught by Rakshit, such that the user can view or hear the response in a desired format as claimed. One of ordinary skill in the art would have been motivated to make such a combination because of the overlapping subject matter (i.e., digital assistant) and the advantage described in Rakshit of presenting contextually appropriate responses to user queries, thus enhancing the user experience with the virtual assistant (Rakshit: see ¶ 0014-0015).

Claims 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Kudurshian et al. (US 2017/0358305 A1; hereinafter Kudurshian) in view of Wolverton et al. (US 2014/0136187 A1; hereinafter Wolverton), further in view of Min et al. (US 2018/0286392 A1; hereinafter Min).

As to claim 26, the rejection of claim 1 is incorporated. Kudurshian/Wolverton do not explicitly teach, but Min is relied upon for teaching, the limitations of claim 26. Specifically, Min teaches wherein the one or more programs further include instructions for: after presenting, by the digital assistant, the response package, receiving a second natural language input responsive to the presentation of the response package (see ¶ 0057; the monitoring of block 700 may always be enabled so that the input mode can be switched from voice to gesture and back again at any time [in other words, the user can use voice input to switch from gesture mode to voice mode].
¶ 0048; the voice input mode may be established by a spoken word such as “voice” [natural language input]); obtaining a second response package responsive to the second natural language input (see ¶ 0007, 0046; responses); and after receiving the second natural language speech input, selecting a second response mode of the digital assistant from the plurality of digital assistant response modes, wherein the second response mode is different from the first response mode (see ¶ 0057; the monitoring of block 700 may always be enabled so that the input mode can be switched from voice to gesture and back again at any time [in other words, the user can use voice input to switch from gesture mode to voice mode]. ¶ 0048; the voice input mode may be established by a spoken word such as “voice” [natural language input]. ¶ 0009, 0055-0056; voice input mode vs. gesture input mode); and in response to selecting the second response mode, presenting, by the digital assistant, the second response package according to the second response mode (Min: see Figs. 3, 4, 6 and ¶ 0049, 0055-0056; responses to subsequently received voice commands/queries are presented audibly, e.g., on the speaker(s) 302 shown in FIG. 3). The references are each directed to a digital assistant user interface; therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the digital assistant user interface with a text response affordance disclosed in Kudurshian/Wolverton to include the specific feature of presenting the responses in a selected format as taught by Min, such that the user can view the response in a desired format as claimed. One of ordinary skill in the art would have been motivated to make such a combination because of the overlapping subject matter (i.e., digital assistant) and the advantage described in Min of making it easier for the user to interact with the virtual assistant (Min: see ¶ 0002).
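The claim 26 limitations describe a sequence: present a response, receive a follow-up natural-language input, select a different response mode, and present the next response package in that mode. The Min citations describe switching modes via a spoken keyword. That sequence can be sketched as follows; this is a hypothetical illustration only, with invented class, method, and keyword names that do not come from the application or from Min:

```python
# Hypothetical sketch of the mode switch described for claim 26: after a
# response is presented, a follow-up natural-language input containing a
# mode keyword (cf. Min's spoken word "voice") selects a different
# response mode for the next response package. All names are invented.

MODE_KEYWORDS = {"voice": "voice", "gesture": "silent"}

class AssistantSession:
    def __init__(self, mode="silent"):
        self.mode = mode  # the first response mode

    def handle_followup(self, natural_language_input):
        """Switch response modes if the input contains a mode keyword."""
        for keyword, mode in MODE_KEYWORDS.items():
            if keyword in natural_language_input.lower().split():
                self.mode = mode
                break
        return self.mode

    def present(self, response_package):
        """Present a response package according to the current mode."""
        if self.mode == "voice":
            return f"[spoken] {response_package}"
        return f"[on-screen] {response_package}"
```

For example, a session that presented its first response on screen would, after `handle_followup("switch to voice please")`, present the second response package audibly.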
As to claim 27, the rejection of claim 1 is incorporated. Kudurshian/Wolverton/Min further teach wherein selecting the first response mode of the digital assistant is performed after obtaining the response package (see ¶ 0007, 0046; responses). Combining Kudurshian/Wolverton/Min would meet the claimed limitations for the same reasons as set forth in claim 26.

Conclusion

The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 CFR 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUYETLIEN T TRAN, whose telephone number is (571) 270-1033. The examiner can normally be reached M-F, 8:00 AM - 8:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Irete (Fred) Ehichioya, can be reached at 571-272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TUYETLIEN T TRAN/
Primary Examiner, Art Unit 2179

Prosecution Timeline

Mar 06, 2024
Application Filed
Sep 19, 2024
Response after Non-Final Action
Feb 27, 2026
Non-Final Rejection — §103
Mar 18, 2026
Examiner Interview Summary
Mar 18, 2026
Applicant Interview (Telephonic)
Mar 30, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602153
SIGNAL TRACKING AND OBSERVATION SYSTEM AND METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12586104
OBJECT DISPLAY METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12585376
SYSTEMS AND METHODS OF REDUCING OBSTRUCTION BY THREE-DIMENSIONAL CONTENT
2y 5m to grant Granted Mar 24, 2026
Patent 12585377
SYSTEM AND METHOD FOR HANDLING OVERLAPPING OBJECTS IN VISUAL EDITING SYSTEMS
2y 5m to grant Granted Mar 24, 2026
Patent 12573257
DIGITAL JUKEBOX DEVICE WITH IMPROVED USER INTERFACES, AND ASSOCIATED METHODS
2y 5m to grant Granted Mar 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+33.0%)
3y 10m
Median Time to Grant
Low
PTA Risk
Based on 637 resolved cases by this examiner. Grant probability derived from career allow rate.
