Detailed Action
Notice of Pre-AIA or AIA status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements filed on December 26, 2023, and February 14, 2025, comply with the provisions of 37 C.F.R. §§ 1.97 and 1.98 and MPEP § 609, and therefore have been placed in the application file. The information referred to therein has been considered as to the merits.
Claim Objections
The Office objects to claims 13 and 17 for having the following minor informalities. Appropriate correction is required:
In claim 13, the phrase “controlling, via the automated assistant, a vehicle that the computing device is attached” is missing the preposition “to” needed to specify that the computing device is attached to the vehicle.
In claim 17, the phrase “the additional interaction data” lacks antecedent basis, because that element is originally introduced as “additional client interaction data.”
Claim Rejections – 35 U.S.C. § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
I. Paulus discloses claims 1, 5, 6, 11, 12, 14, and 15.
Claims 1, 5–6, 11–12, and 14–15 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by U.S. Patent No. 9,785,534 B1 (“Paulus”).
For the sake of clarity and legibility, all quotes from the Paulus reference in this rejection have been modified to reduce all UPPERCASE PART NAMES to lowercase.
Claim 1
Paulus discloses:
A method implemented by one or more processors, the method comprising:
“FIG. 1 illustrates a block diagram of a production environment 100 for providing an interactive software system with a user experience customized to facilitate progress and prevent abandonment of the interactive software system,” Paulus col. 15 ll. 4–13, and FIG. 3 illustrates a method 300 performed by the production environment 100. See Paulus col. 27 ll. 61–64.
However, since Paulus’s discussion of FIG. 1 provides greater (and fully encompassing) detail of the process performed by the production environment 100, this rejection will focus on that section of the reference, rather than the flowchart of FIG. 3. As a reminder, “if a prior art device, in its normal and usual operation, would necessarily perform the method claimed, then the method claimed will be considered to be anticipated by the prior art device.” MPEP § 2112.02.
determining, based on client interaction data, a prior interaction a user has had with an automated assistant that is accessible via a computing device, wherein the client interaction data is generated based on the prior interaction between the user and the automated assistant;
Turning now to FIG. 1, an “analytics module selection engine 126 can [] use user data 116 to retrieve prior user data 151 from interactive software system support computing environment 150.” Paulus col. 23 ll. 8–12. The user data 116 includes data “from input devices 141 of user computing environment 140” via “user interface 115,” Paulus col. 17 ll. 45–56, and the prior user data 151 includes “user data obtained during or prior to a user's current interaction with an interactive software system.” Paulus col. 23 ll. 12–14.
The interactive software system falls within the scope of an automated assistant because it includes “any software system that provides an interactive user experience to its users.” Paulus col. 10 ll. 20–22. Moreover, several of Paulus’s examples for interactive software systems fall within an even narrower interpretation of “automated assistant,” such as software that assists users with tax preparation, financial management, personal management, accounting, and personal electronic data management. Paulus col. 10 ll. 20–58.
Claim Element                 Paulus
client interaction data       user data 116
prior interaction             prior user data 151
automated assistant           interactive software system
selecting, based on the prior interaction the user had with the automated assistant, a particular interaction cohort for classifying the user and/or the prior interaction,
“[U]ser data 116 and/or prior user data 151 can be used by user experience customization engine 112 and/or selected interchangeable analytics module 113 to associate a user with a particular predetermined profile, e.g., with a set of criteria or with a group of users who share one or more characteristics in common with the user.” Paulus col. 23 ll. 51–57.
wherein the particular interaction cohort is selected from a plurality of interaction cohorts
The group of users that user experience customization engine 112 identifies from the user data and/or prior user data are a “segment” of users of production environment 100, who are all “segmented” into different “segments” of the overall user population. See, e.g., Paulus col. 24 ll. 25–33; col. 30 ll. 61–67; and col. 31 ll. 19–40. For example, Paulus discloses that one way to “segment” the users into distinct groups or segments is to provide preexisting demographic “profiles 123,” and “associate a user with a particular one of profiles 123.” Paulus col. 21 ll. 17–35; see also col. 23 ll. 51–57 (relating the “group” back to the concept of associating the user with one of the plurality of profiles 123).
that vary according to an estimated level of experience a particular user has had with the automated assistant and/or a feature of the automated assistant;
The group chosen as the user’s peers includes “other users who share or who have shared similar abandonment indicator data and/or who share similar user data characteristics.” Paulus col. 23 ll. 61–63. The commonly shared abandonment indicator data is indicative of a group’s level of experience with the interactive software system because it includes “historical user data,” “the speed with which the user touches hardware associated with the interactive software system,” “and/or data acquired from measuring the user's interactions with hardware associated with an interactive software system.” Paulus col. 18 ll. 5–46.
Each of those examples falls within the broadly recited scope of “estimated level of experience” because, by definition, each interaction with the interactive software system involves the user experiencing the interactive software system, and the abandonment indicator data necessarily counts the number of such experiences.
Claim Element                     Paulus
particular interaction cohort     the user’s peer group, segment, and/or profile 123
plurality of interaction cohorts  the whole set of groups, segments, and/or profiles 123 maintained by environment 100
level of experience               abandonment indicator data
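For illustration only, the kind of segmentation logic described above can be sketched as follows. The sketch is not drawn from Paulus; every identifier and threshold in it is hypothetical.

    # Hypothetical sketch: assign a user to a peer segment (cf. Paulus's
    # profiles 123) from experience metrics derived from interaction history.
    from dataclasses import dataclass

    @dataclass
    class InteractionHistory:
        session_count: int       # prior sessions with the interactive system
        mean_input_speed: float  # e.g., touches per second

    def assign_segment(history: InteractionHistory) -> str:
        if history.session_count < 5:
            return "novice"
        if history.session_count < 25 or history.mean_input_speed < 1.0:
            return "intermediate"
        return "experienced"

    # A user with three prior sessions is grouped with other novices.
    print(assign_segment(InteractionHistory(session_count=3, mean_input_speed=0.4)))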
subsequent to selecting the particular interaction cohort for classifying the user and/or the prior interaction:
generating, based on the particular interaction cohort selected for the user and/or the prior interaction, assistant content for rendering at an interface of the computing device or a separate computing device, wherein different assistant content is generated for other users associated with other interaction cohorts of the plurality of interaction cohorts;
With the user’s peer group now assigned, “selected interchangeable analytics module 113, or another component within user experience system 111, [can] identify the user experiences that were commonly relevant to the peers of the user and can select the user experience components associated with those user experiences that were more relevant to the peers of the user. This up-to-date analysis simplifies the analysis of user data 116 while improving the likelihood that user experience customization engine 112 accurately selects user experience components that are likely to be relevant to the user, based on the user’s peers.” Paulus col. 23 line 63 to col. 24 line 8.
and causing the computing device or the separate computing device to render the assistant content at the interface,
“In one embodiment, the selected user experience components are presented to the user.” Paulus col. 25 ll. 32–33.
in furtherance of informing the user about one or more features employed by, or not employed by, the user during the prior interaction.
“In one embodiment, selected user experience component 117 can include, but is not limited to, data representing individualized user interview questions and/or suggestions and question and/or suggestion sequences; user interfaces; interface displays; sub-displays; images; music; backgrounds; avatars; highlighting mechanisms; icons; assistance resources; user recommendations; supplemental actions and recommendations; and/or any other component.” Paulus col. 18 ll. 51–64.
Claim 5
Paulus discloses the method of claim 1,
wherein the client interaction data indicates multiple different features of the automated assistant that the user has utilized via the computing device or the separate computing device.
As mentioned in the rejection of claim 1, the claimed client interaction data and automated assistant respectively correspond to “user data 116” and “the interactive software system” in Paulus. Paulus further discloses that “user data 116 includes abandonment indicator data 127,” and that abandonment indicator data 127 may include data describing “the force with which the user touches hardware associated with the interactive software system” and “the speed with which the user touches hardware associated with the interactive software system.” Paulus col. 17 ll. 57–60 and col. 18 ll. 5–23.
Claim 6
Paulus discloses the method of claim 1, further comprising:
subsequent to selecting the particular interaction cohort for classifying the user and/or the prior interaction: determining that the user is estimated to reduce engagement with the automated assistant at a particular time, or within a threshold duration of the particular time, wherein causing the computing device or the separate computing device to render the assistant content at the interface is performed at the particular time.
If “a ‘yes’ determination is made at user at risk of abandoning the interactive software system? operation 323,” then “process flow proceeds through to select one or more appropriate user experience components to transform the user experience provided through the interactive software system to a user experience customized to facilitate progress and prevent abandonment of the interactive software system operation 325.” Paulus col. 36 ll. 41–54.
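For illustration only, the “yes” branch at operation 323 can be modeled as a simple gate. The names and threshold below are hypothetical, not taken from the reference.

    # Hypothetical sketch of Paulus's operations 323/325: when the user is
    # judged at risk of abandoning, customized components are selected.
    def at_risk_of_abandoning(abandonment_score: float, threshold: float = 0.7) -> bool:
        return abandonment_score >= threshold

    def next_operation(abandonment_score: float) -> str:
        if at_risk_of_abandoning(abandonment_score):    # operation 323 -> "yes"
            return "select user experience components"  # operation 325
        return "continue with the default user experience"

    print(next_operation(0.85))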
Claim 11
Paulus discloses:
A method implemented by one or more processors, the method comprising:
“FIG. 1 illustrates a block diagram of a production environment 100 for providing an interactive software system with a user experience customized to facilitate progress and prevent abandonment of the interactive software system,” Paulus col. 15 ll. 4–13, and FIG. 3 illustrates a method 300 performed by the production environment 100. See Paulus col. 27 ll. 61–64.
However, since Paulus’s discussion of FIG. 1 provides greater (and fully encompassing) detail of the process performed by the production environment 100, this rejection will focus on that section of the reference, rather than the flowchart of FIG. 3. As a reminder, “if a prior art device, in its normal and usual operation, would necessarily perform the method claimed, then the method claimed will be considered to be anticipated by the prior art device.” MPEP § 2112.02.
determining, based on client interaction data, one or more prior interactions a user has had with an automated assistant that is accessible via a computing device, wherein the client interaction data is generated based on the one or more prior interactions between the user and features of the automated assistant;
Turning now to FIG. 1, an “analytics module selection engine 126 can [] use user data 116 to retrieve prior user data 151 from interactive software system support computing environment 150.” Paulus col. 23 ll. 8–12. The user data 116 includes data “from input devices 141 of user computing environment 140” via “user interface 115,” Paulus col. 17 ll. 45–56, and the prior user data 151 includes “user data obtained during or prior to a user's current interaction with an interactive software system.” Paulus col. 23 ll. 12–14.
The interactive software system falls within the scope of an automated assistant because it includes “any software system that provides an interactive user experience to its users.” Paulus col. 10 ll. 20–22. Moreover, several of Paulus’s examples for interactive software systems fall within an even narrower interpretation of “automated assistant,” such as software that assists users with tax preparation, financial management, personal management, accounting, and personal electronic data management. Paulus col. 10 ll. 20–58.
selecting, based on the one or more prior interactions between the user and the features of the automated assistant, interaction cohorts for classifying the user and/or the one or more prior interactions,
“[U]ser data 116 and/or prior user data 151 can be used by user experience customization engine 112 and/or selected interchangeable analytics module 113 to associate a user with a particular predetermined profile, e.g., with a set of criteria or with a group of users who share one or more characteristics in common with the user.” Paulus col. 23 ll. 51–57.
wherein each interaction cohort of the interaction cohorts is selected from a plurality of interaction cohorts
The group of users that user experience customization engine 112 identifies from the user data and/or prior user data are a “segment” of users of production environment 100, who are all “segmented” into different “segments” of the overall user population. See, e.g., Paulus col. 24 ll. 25–33; col. 30 ll. 61–67; and col. 31 ll. 19–40. For example, Paulus discloses that one way to “segment” the users into distinct groups or segments is to provide preexisting demographic “profiles 123,” and “associate a user with a particular one of profiles 123.” Paulus col. 21 ll. 17–35; see also col. 23 ll. 51–57 (relating the “group” back to the concept of associating the user with one of the plurality of profiles 123).
that vary according to a respective estimated level of experience a particular user has had with the automated assistant and/or a respective feature of the automated assistant;
The group chosen as the user’s peers includes “other users who share or who have shared similar abandonment indicator data and/or who share similar user data characteristics.” Paulus col. 23 ll. 61–63. The commonly shared abandonment indicator data is indicative of a group’s level of experience with the interactive software system because it includes “historical user data,” “the speed with which the user touches hardware associated with the interactive software system,” “and/or data acquired from measuring the user's interactions with hardware associated with an interactive software system.” Paulus col. 18 ll. 5–46.
Each of those examples falls within the broadly recited scope of “estimated level of experience” because, by definition, each interaction with the interactive software system involves the user experiencing the interactive software system, and the abandonment indicator data necessarily counts the number of such experiences.
subsequent to selecting the interaction cohorts for classifying the user and/or the one or more prior interactions:
generating first assistant content based on a first interaction cohort of the interaction cohorts, and second assistant content based on a second interaction cohort of the interaction cohorts, wherein the first assistant content and the second assistant content are generated for rendering at an interface of the computing device and/or a separate computing device;
The interactive software system generates “user experience component data” in order to generate each of the user experience components, which may include content that is meant to be rendered at the user’s computer, e.g., “content and content delivery messages, individualized user interview questions and question sequences, user interfaces, interface displays, sub-displays, images, side bar displays, pop-up displays, alarms, music, backgrounds, avatars, highlighting mechanisms, icons, assistance resources, user recommendations, supplemental actions and recommendations, and/or any other components.” Paulus col. 30 ll. 4–19.
With the user’s peers now assigned, “selected interchangeable analytics module 113, or another component within user experience system 111, [can] identify the user experiences that were commonly relevant to the peers of the user and can select the user experience components associated with those user experiences that were more relevant to the peers of the user. This up-to-date analysis simplifies the analysis of user data 116 while improving the likelihood that user experience customization engine 112 accurately selects user experience components that are likely to be relevant to the user, based on the user’s peers.” Paulus col. 23 line 63 to col. 24 line 8 (emphasis added). Note that the system may “select a combination of more than one selected user experience component 117,” Paulus col. 19 ll. 51–55, and hence, both a first and second user experience component (respectively corresponding to first and second assistant content).
and causing the computing device or the separate computing device to render the first assistant content and/or the second assistant at the interface,
“In one embodiment, the selected user experience components are presented to the user.” Paulus col. 25 ll. 32–33.
in furtherance of informing the user about one or more features employed by, or not employed by, the user during the prior interaction.
“In one embodiment, selected user experience component 117 can include, but is not limited to, data representing individualized user interview questions and/or suggestions and question and/or suggestion sequences; user interfaces; interface displays; sub-displays; images; music; backgrounds; avatars; highlighting mechanisms; icons; assistance resources; user recommendations; supplemental actions and recommendations; and/or any other component.” Paulus col. 18 ll. 51–64.
Claim 12
Paulus discloses the method of claim 11,
wherein the first assistant content characterizes a suggestion regarding a first feature of the automated assistant and the second assistant characterizes a different suggestion regarding a second feature of the automated assistant.
“In a tax preparation application associated with an interactive software system, for example, the user experience might provide extra help to the user by selecting user experience components to suggest where the user should go to find each data item needed to complete the application.” Paulus col. 37 ll. 41–45. Thus, for each data item, there will be a different user experience component to give the user specific advice for that particular data item.
Claim 14
Paulus discloses the method of claim 11, further comprising:
subsequent to selecting the interaction cohorts for classifying the user and/or the one or more prior interactions: determining that the user is estimated to reduce engagement with the first feature or the second feature at a particular time, or within a threshold duration of the particular time, wherein causing the computing device or the separate computing device to render the first assistant content and/or the second assistant at the interface is performed in response to determining that the user is estimated to reduce engagement with the first feature or the second feature.
The interactive software system groups questions that it asks of the user into different respective topics (e.g., for tax preparation software, “earned income credit, child tax credit, charitable contributions, cars and personal property”). Paulus col. 20 ll. 25–47. (In this rejection, each group is mapped to a respective one of the first and second features). Then, the system can evaluate each group’s different likelihood to cause the user to abandon the interactive software system, and provide a user experience that avoids “any difficult/unpleasant questions and/or suggestions” related to a specific topic. Paulus col. 38 ll. 12–47.
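For illustration only, the topic-avoidance behavior described above can be sketched as follows. Paulus discloses the topics, but the risk values below are invented for the example.

    # Hypothetical sketch: order interview topics so that those most likely
    # to cause abandonment (per Paulus col. 38) are deferred or avoided.
    topic_abandonment_risk = {
        "earned income credit": 0.15,
        "child tax credit": 0.10,
        "charitable contributions": 0.45,
    }

    ordered_topics = sorted(topic_abandonment_risk, key=topic_abandonment_risk.get)
    print(ordered_topics)  # lowest-risk topic first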
Claim 15
Paulus discloses the method of claim 11, further comprising:
subsequent to selecting the interaction cohorts for classifying the user and/or the one or more prior interactions: determining that the user is estimated to reduce engagement with the computing device or the separate computing device at a particular time, or within a threshold duration of the particular time, wherein causing the computing device or the separate computing device to render the first assistant content and/or the second assistant content at the interface is performed in response to determining that the user is estimated to reduce engagement with the computing device or the separate computing device.
If “a ‘yes’ determination is made at user at risk of abandoning the interactive software system? operation 323,” then “process flow proceeds through to select one or more appropriate user experience components to transform the user experience provided through the interactive software system to a user experience customized to facilitate progress and prevent abandonment of the interactive software system operation 325.” Paulus col. 36 ll. 41–54.
II. Almecija discloses claims 1–4, 17, and 20.
Claims 1–4, 17, and 20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2018/0365025 A1 (“Almecija”).
Claim 1
Almecija discloses:
A method implemented by one or more processors, the method comprising:
“FIG. 6 shows a process 600 for determining a user experience level and adapting a user interface, according to an embodiment. Steps within process 600 may be executed by CPU/GPU 154 or processing abilities of various components within the components within the user experience system 104.” Almecija ¶ 38.
determining, based on client interaction data, a prior interaction a user has had with an automated assistant that is accessible via a computing device, wherein the client interaction data is generated based on the prior interaction between the user and the automated assistant;
“At step 606, user experience system 104 retrieves user UI interaction history and profile. Each user has a profile that is dynamically created. This profile includes their user interface actions, history, and preferences, as well as other information about them that may affect how a UI is adapted.” Almecija ¶ 42.
selecting, based on the prior interaction the user had with the automated assistant, a particular interaction cohort for classifying the user and/or the prior interaction,
“At step 608, user experience system 104, through user experience learning component 142 in an embodiment, applies a learning component to assign and/or update one or more user groupings.” Almecija ¶ 43.
wherein the particular interaction cohort is selected from a plurality of interaction cohorts that vary according to an estimated level of experience a particular user has had with the automated assistant and/or a feature of the automated assistant;
“User experience learning component 142 learns and groups individuals across all of the various software applications and over time. Thus, it learns and develops an understanding of overall usage of the UI to group certain patterns and usages with similar patterns and usages. This is discussed further in relation to FIG. 5 and FIG. 7.” Almecija ¶ 43. Accordingly, looking ahead to FIG. 7, Almecija further explains that “[t]he system registers buttons clicked, screens interacted (mouse or touch interactions with the screens in an embodiment), and number of monitors of interaction for the user,” Almecija ¶ 86, and then, “[g]rouping 704 layer takes in factors about the user and/or situation and assesses the strength of the factors to group the information as related to similar situations and/or users.” Almecija ¶ 90.
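For illustration only, the factor layer and grouping layer of Almecija’s FIG. 7 can be caricatured as a two-layer scoring step. The weights and values below are invented; Almecija’s network learns its own.

    # Hypothetical sketch of FIG. 7's structure: a factor layer (702) feeding
    # a grouping layer (704) that scores candidate user groupings.
    factors = {"buttons_clicked": 120, "help_menu_accesses": 9, "monitors": 2}

    weights = {  # invented weights from each factor to each grouping
        "tech savvy in general": {"buttons_clicked": 0.02, "help_menu_accesses": -0.5, "monitors": 1.0},
        "single time user single task": {"buttons_clicked": -0.01, "help_menu_accesses": 0.3, "monitors": -0.5},
    }

    scores = {g: sum(w.get(f, 0.0) * v for f, v in factors.items())
              for g, w in weights.items()}
    print(max(scores, key=scores.get))  # strongest grouping for this user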
subsequent to selecting the particular interaction cohort for classifying the user and/or the prior interaction:
As shown in FIG. 6, steps 612 and 614 (discussed next) are performed subsequent to steps 606 and 608 (discussed earlier). See Almecija FIG. 6.
generating, based on the particular interaction cohort selected for the user and/or the prior interaction, assistant content for rendering at an interface of the computing device or a separate computing device, wherein different assistant content is generated for other users associated with other interaction cohorts of the plurality of interaction cohorts;
“At step 612, user experience system 104, through UI adaptive component 148 in an embodiment, adapts a user interface per user experience level and/or assigned grouping. Adapting the user interface can mean re-sizing the screen, changing the layout, reducing or adding buttons, changing menus, altering what content is shown, changing fonts, changing paradigms (e.g. visual to audible), changing icons, re-arranging UI assets, and more.” Almecija ¶ 46. An additional example of adapting the user interface includes “the dynamic providing of hints to help the user navigate or otherwise use the UI.” Almecija ¶¶ 64–66.
and causing the computing device or the separate computing device to render the assistant content at the interface,
“At step 614, user experience system 104, through UI output component 150 in an embodiment, outputs the adapted UI to user IO 102.” Almecija ¶ 47.
in furtherance of informing the user about one or more features employed by, or not employed by, the user during the prior interaction.
“At this point a user has an improved user interface experience based on the user experience system adapting the user interface particularly to the user, the user's hardware, and the user's situation. The whole process 600 can take place almost instantaneously so that the user sees the UI adapt in real-time.” Almecija ¶ 47.
Claim 2
Almecija discloses the method of claim 1, further comprising:
subsequent to causing the computing device or the separate computing device to render the assistant content at the interface: determining, based on subsequent client interaction data, a subsequent interaction, or lack of interaction, between the user with the automated assistant, wherein the subsequent client interaction data is generated based on the subsequent interaction, or lack of interaction, between the user and the automated assistant;
After adapting the UI in accordance with the method described in the rejection of claim 1, “[t]he system can ask the user what their UI preferences are and if certain adapted UIs have been helpful,” Almecija ¶ 87, and therefore receive a response to those questions.
and causing one or more trained machine learning models to be further trained based on the subsequent interaction, or lack of interaction, between the user and the assistant content,
“This feedback can be put under historical usage factors when trying to understand how to best learn what the best adapted UI is to output in the current session.” Almecija ¶ 87. Note that these “historical usage factors” refer to nodes in a “first node layer in [a] neural network 700” shown in FIG. 7. Almecija ¶ 85.
wherein generating the assistant content for rendering at the interface involves utilizing the one or more trained machine learning models.
Step 608 of method 600 is performed using the neural network 700, including the first node layer 702. See Almecija ¶¶ 43 and 85.
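For illustration only, folding the user’s subsequent interaction (or lack of one) back into the model, in the manner of Almecija’s “historical usage factors,” can be sketched as a toy online update. The update rule and learning rate are invented.

    # Hypothetical sketch: nudge a feature weight toward 1 when the adapted
    # UI was used, and toward 0 when it was ignored.
    def update_weight(weight: float, was_helpful: bool, lr: float = 0.1) -> float:
        target = 1.0 if was_helpful else 0.0
        return weight + lr * (target - weight)

    w = 0.5
    w = update_weight(w, was_helpful=True)   # user engaged with the adapted UI
    w = update_weight(w, was_helpful=False)  # user ignored the next adaptation
    print(round(w, 3))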
Claim 3
Almecija discloses the method of claim 2, further comprising:
subsequent to causing the computing device or the separate computing device to render the assistant content at the interface: selecting, based on the subsequent interaction of the user, a separate interaction cohort from the plurality of interaction cohorts, wherein the separate interaction cohort corresponds to a more experienced user cohort relative to the particular interaction cohort.
“In an embodiment, the user experience system can provide an adapted user interface for beginner users such as FIG. 4 and then adapt for experienced users such as in FIG. 3.” Almecija ¶ 75. “In an example, the system could display an average of: four buttons if a user has had less than 20 sessions using the software application, eight buttons if a user has between 20 and 40 sessions using the software application, and twelve buttons if a user has more than 40 sessions using the software application.” Almecija ¶ 74.
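The numeric example in Almecija ¶ 74 reduces to the following sketch; the function name and the handling of the session-count boundaries are assumptions, not taken from the reference.

    # Almecija para. 74: ~4 buttons under 20 sessions, ~8 between 20 and 40,
    # ~12 over 40. The boundary handling below is an assumption.
    def buttons_to_display(session_count: int) -> int:
        if session_count < 20:
            return 4
        if session_count <= 40:
            return 8
        return 12

    assert buttons_to_display(10) == 4
    assert buttons_to_display(30) == 8
    assert buttons_to_display(50) == 12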
Claim 4
Almecija discloses the method of claim 2, further comprising:
subsequent to causing the computing device or the separate computing device to render the assistant content at the interface: selecting, based on the subsequent interaction of the user, a separate interaction cohort from the plurality of interaction cohorts, wherein the separate interaction cohort corresponds to a less experienced user cohort relative to the particular interaction cohort.
Almecija’s group assignments are task/context specific, so, a user who is initially assigned to an experienced grouping may be moved to a more novice grouping in response to the system 104 detecting that the user’s interactions correspond to a task with which the user is inexperienced. See Almecija ¶ 90. For example, “[w]hen a user is considered advanced because they have used a software application hundreds of times but only use it for completing one function, or one task comprised of multiple functions that have been automated as discussed below, the user experience system may present a very simple user interface of only one button and one imaging window.” Almecija ¶ 77.
Claim 17
Almecija discloses:
A method implemented by one or more processors, the method comprising:
“FIG. 6 shows a process 600 for determining a user experience level and adapting a user interface, according to an embodiment. Steps within process 600 may be executed by CPU/GPU 154 or processing abilities of various components within the components within the user experience system 104.” Almecija ¶ 38.
determining, based on client interaction data, a prior interaction a user has had with an automated assistant that is accessible via a computing device, wherein the client interaction data is generated based on the prior interaction between the user and the automated assistant;
“At step 606, user experience system 104 retrieves user UI interaction history and profile. Each user has a profile that is dynamically created. This profile includes their user interface actions, history, and preferences, as well as other information about them that may affect how a UI is adapted.” Almecija ¶ 42.
selecting, based on the prior interaction the user had with the automated assistant, a first interaction cohort for classifying the user and/or the prior interaction,
“At step 608, user experience system 104, through user experience learning component 142 in an embodiment, applies a learning component to assign and/or update one or more user groupings.” Almecija ¶ 43.
wherein the first interaction cohort is selected from a plurality of interaction cohorts that vary according to an estimated level of experience a particular user has had with: the automated assistant and/or a feature of the automated assistant;
“User experience learning component 142 learns and groups individuals across all of the various software applications and over time. Thus, it learns and develops an understanding of overall usage of the UI to group certain patterns and usages with similar patterns and usages. This is discussed further in relation to FIG. 5 and FIG. 7.” Almecija ¶ 43. Accordingly, looking ahead to FIG. 7, Almecija further explains that “[t]he system registers buttons clicked, screens interacted (mouse or touch interactions with the screens in an embodiment), and number of monitors of interaction for the user,” Almecija ¶ 86, and then, “[g]rouping 704 layer takes in factors about the user and/or situation and assesses the strength of the factors to group the information as related to similar situations and/or users.” Almecija ¶ 90.
determining, based on additional client interaction data, a separate prior interaction the user had with the automated assistant, wherein the additional interaction data is generated based on the separate prior interaction between the user and the automated assistant;
Almecija provides two different disclosures that each separately anticipate this claim element. The first disclosure is that the UI interaction history and profile data retrieved in step 606 comprises multiple UI interactions, and even multiple categories of UI interactions from the user’s history. See Almecija ¶ 42 and FIG. 7. In this version of the rejection, the claimed client interaction data and additional client interaction data respectively correspond to first and second portions of the multiple UI interactions. For example, in FIG. 7, “buttons clicked” is client interaction data, while “help menu accesses” is additional client interaction data.
A second, different way Almecija anticipates this claim element is with its disclosure that the user experience system 104 retrieves user UI interaction history and profile to assign the groupings a first time in steps 606–608, but also loops back to repeat step 606 (see FIG. 6) in order to “update [the] one or more user groupings.” Almecija ¶¶ 41–42. In this version of the rejection, the client interaction data corresponds to Almecija’s initial UI interaction history, while the additional client interaction data corresponds to Almecija’s updated UI interaction history.
Notably, in both versions of this rejection, the two respective sets of UI interactions are “generated based on [a] separate prior interaction” as claimed.
selecting, based on the separate prior interaction the user had with the automated assistant, a second interaction cohort from the plurality of interaction cohorts for classifying the user and/or separate prior interaction;
“At step 608, user experience system 104, through user experience learning component 142 in an embodiment, applies a learning component to assign and/or update one or more user groupings.” Almecija ¶ 43.
Note that this teaching applies differently in the two different versions of the rejection mentioned above. For the version where the claimed first client interaction data and additional client interaction data correspond to different interactions and/or categories over the same period, the claimed second interaction cohort corresponds to any of the additional user groupings 704 that the user experience system 104 assigns based on the second portion of UI interactions. For example, consider FIG. 7: based on “buttons clicked” (the client interaction data) and “help menu access” (the additional client interaction data), user experience system 104 is able to assign both the “single time user single task” grouping (the first interaction cohort) and the “tech savvy in general” grouping (the second interaction cohort), from among the plurality of five groupings 704 shown in the figure.
For the version where the claimed first client interaction data and additional client interaction data respectively correspond to the initial set of UI interaction data and the updated set of UI interaction data, the claimed first and second interaction cohorts likewise correspond to the initially assigned groupings and the subsequently updated groupings. See Almecija ¶ 43 (“At step 608, user experience system 104, through user experience learning component 142 in an embodiment, applies a learning component to assign and/or update one or more user groupings.”)
subsequent to selecting the first interaction cohort and the second interaction cohort:
As shown in FIG. 6, steps 612 and 614 (discussed next) are performed subsequent to steps 606 and 608 (discussed earlier). See Almecija FIG. 6.
generating, based on the first interaction cohort and/or the second interaction cohort, assistant content for rendering at an interface of the computing device or a separate computing device;
“At step 612, user experience system 104, through UI adaptive component 148 in an embodiment, adapts a user interface per user experience level and/or assigned grouping. Adapting the user interface can mean re-sizing the screen, changing the layout, reducing or adding buttons, changing menus, altering what content is shown, changing fonts, changing paradigms (e.g. visual to audible), changing icons, re-arranging UI assets, and more.” Almecija ¶ 46. An additional example of adapting the user interface includes “the dynamic providing of hints to help the user navigate or otherwise use the UI.” Almecija ¶¶ 64–66.
and causing the computing device or the separate computing device to render the assistant content at the interface,
“At step 614, user experience system 104, through UI output component 150 in an embodiment, outputs the adapted UI to user IO 102.” Almecija ¶ 47.
in furtherance of informing the user about one or more features employed by, or not employed by, the user during the prior interaction and/or the separate prior interaction.
“At this point a user has an improved user interface experience based on the user experience system adapting the user interface particularly to the user, the user's hardware, and the user's situation. The whole process 600 can take place almost instantaneously so that the user sees the UI adapt in real-time.” Almecija ¶ 47.
Claim 20
Almecija discloses the method of claim 17,
wherein the first interaction cohort corresponds to users who have had more interactions with the feature of the automated assistant than another user who is assigned to the second interaction cohort.
As shown in FIG. 7, based on the UI interaction data, user experience system 104 will determine the probability that a user should be assigned to the “long time user typical task” grouping 704, versus the probability that the user should be assigned to the grouping 704 of “non-typical task/situation” for “a task they have not done before and are not likely to do again.” Almecija ¶ 90.
Claim Rejections – 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were effectively filed absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned at the time a later invention was effectively filed in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.
I. Paulus and Wolverton teach claims 7 and 8.
Claims 7 and 8 are rejected under 35 U.S.C. § 103 as being unpatentable over Paulus as applied to claim 6 above, and further in view of U.S. Patent Application Publication No. 2014/0136187 A1 (“Wolverton”).
Claim 7
Paulus teaches the method of claim 6,
The abandonment indicator, which helps determine if the user is “at risk of abandoning the interactive software system,” Paulus col. 36 ll. 41–54, may include data describing “the force with which the user touches hardware associated with the interactive software system” and “the speed with which the user touches hardware associated with the interactive software system.” Paulus col. 17 ll. 57–60 and col. 18 ll. 5–23.
Paulus does not explicitly disclose that its interactive software system includes a vehicle computing device that controls a vehicle.
Wolverton, like Paulus, teaches a method for assisting users with software on a computer, but Wolverton further teaches:
the computing device is a vehicle computing device that is directly attached to, and controls, a vehicle
“Referring to FIG. 1, a vehicle personal assistant 112 is embodied in a computing system 100 as computer software, hardware, firmware, or a combination thereof,” and more particularly, “may be embodied as an in-vehicle computing system (e.g., an ‘in-dash’ system).” Wolverton ¶ 36; see also ¶¶ 125 and 135. “The vehicle personal assistant 112 may also be configured to disable or otherwise limit entertainment options within the vehicle 104 while the vehicle 104 is in motion.” Wolverton ¶ 141.
wherein determining that the user is estimated to reduce engagement with the automated assistant is based on available interaction data indicating current or past engagement of the user with the vehicle computing device and/or the automated assistant.
The personal assistant 112 “may analyze the duration of the pause between inputs 102 and determine an appropriate response thereto. For pauses of longer duration, the method 500 may simply abandon the dialog and wait for the user to provide new input 102 at a later time. For pauses of shorter duration, the method 500 may issue a reminder to the user or prompt the user to continue the thought.” Wolverton ¶ 118.
Additionally, the personal assistant 112 may have an input classifier 134, which “analyzes the computer-readable representations of the inputs 102 as prepared by the input recognizer/interpreter 130, and classifies the inputs 102 according to rules or templates that may be stored in the vehicle-specific conversation model 132 or the vehicle context model 116,” to anticipate a situation where “a vehicle driver may simply give up on an inquiry if an answer is not received within a reasonable amount of time.” Wolverton ¶ 52.
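For illustration only, the pause handling of Wolverton ¶ 118 can be sketched as follows; the cutoff value is invented, as Wolverton gives none.

    # Hypothetical sketch of Wolverton para. 118: long pauses abandon the
    # dialog; short pauses trigger a reminder or prompt.
    LONG_PAUSE_SECONDS = 30.0  # invented cutoff

    def respond_to_pause(pause_seconds: float) -> str:
        if pause_seconds >= LONG_PAUSE_SECONDS:
            return "abandon dialog and await new input"
        return "issue reminder or prompt the user to continue"

    print(respond_to_pause(45.0))
    print(respond_to_pause(5.0))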
Claim 8
Paulus and Wolverton teach the method of claim 7,
wherein the available interaction data indicates that, at the particular time, the user has ceased controlling the vehicle within a threshold duration of time, and/or the vehicle is parked or stopped.
“In some embodiments, the method 200 determines how to prompt the user for clarification based on the current vehicle context. For example, if the vehicle 104 is parked, the method 200 may prompt the user via spoken natural language, text, graphic, and/or video.” Wolverton ¶ 97.
II. Paulus and Biswas teach claims 9 and 10.
Claims 9 and 10 are rejected under 35 U.S.C. § 103 as being unpatentable over Paulus as applied to claim 6 above, and further in view of U.S. Patent Application Publication No. 2022/0300392 A1 (“Biswas”).
Claim 9
Paulus teaches the method of claim 6,
wherein the automated assistant is an application, and wherein determining that the user is estimated to reduce engagement with the automated assistant is based on available interaction data indicating current or past engagement of the user with the application.
The abandonment indicator, which helps determine if the user is “at risk of abandoning the interactive software system,” Paulus col. 36 ll. 41–54, may include data describing “the force with which the user touches hardware associated with the interactive software system” and “the speed with which the user touches hardware associated with the interactive software system.” Paulus col. 17 ll. 57–60 and col. 18 ll. 5–23.
Paulus does not explicitly disclose that its interactive software system is an application “that facilitates internet searching.”
Biswas, however, teaches a similar method involving predicting how a user will react to an event in a software application, and mitigating the reaction with remedial action, see Biswas Abstract, and further teaches:
the automated assistant is an application that facilitates internet searching,
As shown in FIG. 1, Biswas’s method collects user interaction data 102 to make predictions about a user’s interactions with a software product. User interaction data 102 includes “link selection activity,” from links that are “included in a set of search results displayed by the web browser.” Biswas ¶ 29.
and wherein determining that the user is estimated to reduce engagement with the automated assistant is based on available interaction data indicating current or past engagement of the user with the application.
“In accordance with embodiments of the present disclosure, the user interaction data 102 can be current data (e.g., activity data associated with a current activity, a number of activities in a current session, etc.), which can be obtained and used with the user reaction prediction models 112 to detect a user reaction, or reactions, 114 (e.g., a current user reaction, or reactions).” Biswas ¶ 23.
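For illustration only, a trivial stand-in for Biswas’s user reaction prediction models 112 might look like the following; the feature names and rule are invented for the example.

    # Hypothetical sketch: flag a likely negative reaction when a session
    # shows activity but no link selections (cf. Biswas's user interaction
    # data 102 feeding reaction prediction models 112).
    def predict_negative_reaction(link_clicks: int, session_events: int) -> bool:
        return session_events > 20 and link_clicks == 0

    print(predict_negative_reaction(link_clicks=0, session_events=35))  # True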
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply Paulus’s user abandonment system to the field of applications that facilitate internet searching, as taught by Biswas (which has the same goal of predicting and mitigating poor user reactions to the software). One would have been motivated to follow Biswas’s lead of applying this type of help system to internet searching software because “[a] negative user experience with a website can result in the user limiting a current visit and any future visits to the website.” Biswas ¶ 1.
Claim 10
Paulus and Biswas teach the method of claim 9,
wherein the available interaction data indicates that, at the particular time, the application has, or has not, received an input from the user within a threshold duration of time.
The abandonment indicator includes data describing “the speed with which the user touches hardware associated with the interactive software system.” Paulus col. 17 ll. 57–60 and col. 18 ll. 5–23.
III. Paulus and Aggarwal teach claims 13 and 16.
Claims 13 and 16 are rejected under 35 U.S.C. § 103 as being unpatentable over Paulus as applied to claims 12 and 15 above, and further in view of U.S. Patent Application Publication No. 2022/0100465 A1 (“Aggarwal”).
Claim 13
Paulus teaches the method of claim 12, but not where the first feature corresponds to controlling, via the automated assistant, a vehicle that the computing device is attached, and the second feature corresponds to controlling, via the automated assistant, a separate application from the automated assistant.
Aggarwal, however, teaches a method that is similar to that of the parent claims (e.g., providing help for a virtual assistant with respect to multiple features),
wherein the first feature corresponds to controlling, via the automated assistant, a vehicle that the computing device is attached,
“In some examples, when a user interacts with a particular type of computing device 210 (e.g., a mobile phone, vehicle head unit, etc.), assistant module 222 [or 122] may assign a higher relevancy score to actions which the particular type of computing device is configured to perform.” Aggarwal ¶ 84. Actions that assistant module 222 is capable of performing include “start[ing] the user’s vehicle.” Aggarwal ¶¶ 41 and 114.
and the second feature corresponds to controlling, via the automated assistant, a separate application from the automated assistant.
“Assistant modules 122 may rely on other applications (e.g., third-party applications), services, or other devices (e.g., televisions, automobiles, watches, home automation systems, entertainment systems, etc.) to perform actions or services for an individual.” Aggarwal ¶ 27; see also ¶ 59. The performance of these commands using other, third-party applications is among the actions that the assistant modules track in order to provide suggestions for executing those commands, when they are relevant. See, e.g., Aggarwal ¶¶ 80, 89, and 92.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply Paulus’s overall user help facility to an assistant that performs vehicle commands and commands from separate applications, as taught by Aggarwal.
Such a combination would have been obvious because it involves nothing more than the use of a known technique to improve similar devices, methods, or products in the same way. See Intel Corp. v. PACT XPP Schweiz AG, 61 F.4th 1373, 1380–81, 2023 USPQ2d 297 (Fed. Cir. 2023) (citing KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 417 (2007)).
Consistent with the guidance for this rationale in MPEP § 2143 (subsection (I.)(C.)), the relevant findings of fact for this conclusion are supported by a preponderance of the evidence, as follows:
(1) The prior art contained a “base” device, method, and product upon which the claimed invention can be seen as an “improvement.” The evidence for this finding includes all of the findings from the rejections of claims 11 and 12 above, which map the elements that claim 13 incorporates by reference from its parent claims onto the Paulus reference.
(2) The prior art contained a “comparable” device, method, and product that is not the same as the base device, but that has been improved in the same way as the claimed invention. The evidence for this finding is provided above, via the citations to Aggarwal’s disclosure.
(3) One of ordinary skill in the art could have applied the known “improvement” technique in the same way to the “base” device, method, or product, and the results would have been predictable to one of ordinary skill in the art. The evidence for this finding is that both prior art references already disclose each of the elements of the claimed invention, with Aggarwal directly instructing the skilled artisan to employ the same in a vehicle, and with respect to third party applications.
Therefore, based on the above findings, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply Paulus’s overall user help facility to an assistant that performs vehicle commands and commands from separate applications, as taught by Aggarwal.
Claim 16
Paulus teaches the method of claim 15,
wherein the interface is a display interface that is integral
“[T]he user experience customized to facilitate progress and prevent abandonment is provided to the user via a user display screen displayed on a user computing system accessible by the user.” Paulus clm. 14.
and the first assistant content and/or the second assistant content are rendered at the display interface simultaneous to other assistant content being rendered at the display interface.
“In one embodiment, the selected user experience components are presented to the user.” Paulus col. 25 ll. 32–33. “In one embodiment, selected user experience component 117 can include, but is not limited to, data representing individualized user interview questions and/or suggestions and question and/or suggestion sequences; user interfaces; interface displays; sub-displays; images; music; backgrounds; avatars; highlighting mechanisms; icons; assistance resources; user recommendations; supplemental actions and recommendations; and/or any other component.” Paulus col. 18 ll. 51–64.
Paulus does not appear to explicitly disclose the display interface being integral “to a vehicle.”
Aggarwal, however, teaches a similar method, but wherein:
the interface is a display interface that is integral to a vehicle,
In some embodiments, “computing device 210 includes a vehicle head unit.” Aggarwal ¶ 85.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply Paulus’s overall user help facility to an assistant that performs vehicle commands and commands from separate applications, as taught by Aggarwal.
Such a combination would have been obvious because it involves nothing more than the use of a known technique to improve similar devices, methods, or products in the same way. See Intel Corp. v. PACT XPP Schweiz AG, 61 F.4th 1373, 1380–81, 2023 USPQ2d 297 (Fed. Cir. 2023) (citing KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 417 (2007)).
Consistent with the guidance for this rationale in MPEP § 2143 (subsection (I.)(C.)), the relevant findings of fact for this conclusion are supported by a preponderance of the evidence, as follows:
(1) The prior art contained a “base” device, method, and product upon which the claimed invention can be seen as an “improvement.” The evidence for this finding includes all of the findings from the rejections of claims 11 and 15 above, which map the elements that claim 16 incorporates by reference from its parent claims onto the Paulus reference.
(2) The prior art contained a “comparable” device, method, and product that is not the same as the base device, but that has been improved in the same way as the claimed invention. The evidence for this finding is provided above, via the citations to Aggarwal’s disclosure.
(3) One of ordinary skill in the art could have applied the known “improvement” technique in the same way to the “base” device, method, or product, and the results would have been predictable to one of ordinary skill in the art. The evidence for this finding is that both prior art references already disclose each of the elements of the claimed invention, with Aggarwal directly instructing the skilled artisan to employ the same in a vehicle, and with respect to third party applications.
Therefore, based on the above findings, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply Paulus’s overall user help facility to an assistant that performs vehicle commands and commands from separate applications, as taught by Aggarwal.
IV. Almecija and Matsubara teach claim 18.
Claim 18 is rejected under 35 U.S.C. § 103 as being unpatentable over Almecija as applied to claim 17 above, and further in view of U.S. Patent Application Publication No. 2005/0125233 A1 (“Matsubara”).
Claim 18
Almecija teaches the method of claim 17, but does not disclose that its system is integral to, and/or controls, a vehicle, nor does it disclose providing hints about invoking an automated assistant.
Matsubara, however, teaches a computing device (“main device 1” in FIG. 1),
wherein the computing device and the interface are integral to a vehicle
Main device 1 is “a vehicle mounted control apparatus.” Matsubara ¶ 20.
and the one or more features involve controlling the vehicle using the automated assistant,
Main device 1 has an “interface 10” to perform a function of “relaying the operational status signals of electronic devices of the car and control signals to these electronic devices, for example, a control device of an air conditioner, head lights, and sensors for detecting the on-off states of a wiper and the head lights (all of which are not shown), between the control section 2 of the apparatus and these electronic devices.” Matsubara ¶ 22.
and the assistant content includes a graphical indication of a vehicle button for invoking the automated assistant.
As shown in FIG. 6B, when the user needs help understanding how to use main device 1 to issue commands, “the control section 2 reads from the memory 3 a method of uttering a voice command relating to the selected command and displays it on the display device 6,” e.g., a prompt that instructs the user to operate a steering wheel button and to “[u]tter the names of the prefecture and the name of facility consecutively.” Matsubara ¶ 41.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to extend the principles of Almecija’s user experience system 104 to the field of vehicles and vehicle controls, as directly suggested by Matsubara. One would have been motivated to extend these principles to vehicles because, “when a command by a voice (hereinafter referred to as ‘voice command’) is entered to a car navigation apparatus, depending on user's vocal conditions (for example, the level of voice uttered by a user), the car navigation apparatus cannot recognize the voice in some cases,” and users have difficulty understanding why the voice recognition failed, Matsubara ¶ 7, necessitating a solution “that can inform a user about the state of recognition of a voice command uttered by the user in such a way that the user can easily understand the state of recognition.” Matsubara ¶ 8.
V. Almecija and Smith teach claim 19.
Claim 19 is rejected under 35 U.S.C. § 103 as being unpatentable over Almecija as applied to claim 17 above, and further in view of U.S. Patent Application Publication No. 2008/0300884 A1 (“Smith”).
Claim 19
Almecija teaches the method of claim 17, but since the rejection of claim 17 maps the claimed “automated assistant” to the same software that is also providing the assistance for using the software, Almecija necessarily does not anticipate an arrangement where “one or more features involve controlling a separate application via the automated assistant.”
Additionally, while Almecija does disclose a mechanism for switching its user interface paradigm from touch (or mouse) to voice, see Almecija ¶¶ 101, 102, and 106, and even acknowledges that “[s]ome users prefer or only can use . . . voice input and shun[] keyboard input,” Almecija ¶ 3, Almecija’s hinting feature does not appear to include “natural language content specifying a spoken utterance to provide to the automated assistant for controlling the separate application.”
Smith, however, teaches a method that renders assistant content at the interface, in furtherance of informing the user about one or more features employed by, or not employed by, the user during the prior interaction and/or the separate prior interaction,
wherein the one or more features involve controlling a separate application via the automated assistant,
As shown in FIG. 1, Smith teaches a general purpose computer 104 with an “[a]udio command interface 108 [that] determines whether voice data corresponds to an audio command,” including a command “to access and control a native application 112.” Smith ¶ 19. Native application 112 is one of a plurality of applications on the general purpose computer 104 that are distinct from the audio command interface 108; audio command interface 108 must access their respective APIs to convert the voice commands into machine-usable commands for whichever of the native applications 112 are to be controlled. See Smith ¶ 21.
and the assistant content includes natural language content specifying a spoken utterance to provide to the automated assistant for controlling the separate application.
“Audio command interface 108 can also provide a list of available commands to the person using mobile device 102, such as by presenting prompts to the person, by allowing the person to request a list of available audio commands, or in other suitable manners.” Smith ¶ 19.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to extend Almecija’s user experience system 104 to work with additional applications, rather than embedding the feature directly within a single software application, considering such a suggestion was already known, via Smith’s disclosure. One would have been motivated to follow Smith’s suggestion to extend Almecija’s user experience system to additional applications because systems for using voice commands that “are application-specific” (as is the case with Almecija’s system) are inconvenient, because they “require the person to have multiple mobile devices and/or systems to remotely access and control the different applications at a computer.” Smith ¶ 2.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Justin R. Blaufeld whose telephone number is (571)272-4372. The examiner can normally be reached M-F 9:00am - 4:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James K Trujillo can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Justin R. Blaufeld/Primary Examiner, Art Unit 2151