DETAILED ACTION
This Office Action is responsive to the Applicant’s submission, filed on July 18, 2025, amending claims 1 and 11. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on July 18, 2025 has been considered by the Examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-6, 8, 11, 13-16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2011/0145863 to Alsina et al. (“Alsina”) in view of U.S. Patent Application Publication No. 2010/0157989 to Krzyzanowski et al. (“Krzyzanowski”), U.S. Patent Application Publication No. 2007/0157240 to Walker (“Walker”), U.S. Patent Application Publication No. 2008/0301732 to Archer et al. (“Archer”), and U.S. Patent No. 8,874,144 to Liu et al. (“Liu”).
Regarding claim 1, Alsina teaches providing a graphical user interface (“GUI”) on an accessory device, wherein the GUI can be defined and managed by a portable media device (“PMD”) rather than the accessory device (see e.g. paragraph 0007). Alsina particularly discloses that the accessory device can be an in-vehicle media control unit that can be installed in a dashboard of a vehicle (see e.g. paragraph 0027). As claimed, Alsina teaches:
while a vehicle is in motion at a first time, displaying, via a vehicle content interface, a user interface, the user interface comprising one or more audio user interface elements (see e.g. paragraph 0163 and FIG. 9B: Alsina discloses that the GUI defined by the PMD for display by the accessory device can comprise “Albums,” “Songs,” and “Audiobooks” menu items. The “Albums,” “Songs,” and “Audiobooks” menu items are each considered an audio user interface element. Alsina discloses that the GUI of FIG. 9B omits or grays out display of a “Videos” menu item – see e.g. paragraph 0163 – and discloses that such an omission of video functionality occurs when the vehicle is in motion – see e.g. paragraphs 0067, 0081-0082, 0118, and 0142. Accordingly, it is apparent that the GUI of FIG. 9B, which comprises audio user interface elements such as the “Songs” and “Audiobooks” menu items, is displayed by the accessory device, i.e. on a “vehicle content interface,” while the vehicle is in motion at a first time.); and
based at least in part on determining that the vehicle is not in motion at a second time, displaying, via the vehicle content interface, the user interface comprising an option to play on the user interface recorded video content (see e.g. paragraphs 0081-0082, 0118 and 0142: Alsina teaches that, responsive to receiving an indication that the vehicle is not in motion, e.g. if the vehicle is placed in park, the PMD can allow video playback functionality to become available. Accordingly, it is apparent that when the vehicle is determined to be in park at a second time, i.e. not to be in motion, the GUI of FIG. 9B is displayed by the accessory device on the vehicle content interface, but with the “Videos” menu item also included therein. The “Videos” item is considered an option to play on the user interface recorded video content.).
Accordingly, Alsina teaches a method similar to that of claim 1, but does not explicitly disclose that the one or more audio user interface elements are tabs, each of which corresponds to a plurality of content identifiers, wherein selection of one of the audio user interface tabs results in a display of a plurality of audio content identifiers corresponding to the selected audio user interface tab, as is required by claim 1. Moreover, Alsina does not teach detecting a request to record video content using a first device, wherein the request comprises an option to make the recorded video content available for viewing in the vehicle, and whereby the option to play on the user interface the recorded video content is particularly displayed for this recorded video content that was recorded with the option to make the recorded video content available for viewing in the vehicle, as is further required by claim 1. Alsina also does not teach, prior to selection of the option to play on the user interface the recorded video content: (i) determining that the recorded video content comprises content related to a non-geographic attribute, and (ii) based at least in part on determining that the vehicle is in a geographic location in proximity to a landmark associated with the non-geographic attribute, displaying, via the vehicle content interface, an advertisement associated with the non-geographic attribute, as is further required by claim 1.
Krzyzanowski nevertheless describes a user interface for a media application, wherein the interface comprises one or more audio user interface tabs (e.g. a “music” tab and a “podcasts” tab), each of which corresponds to a plurality of content identifiers (e.g. songs, podcasts) and wherein selection of one of the audio user interface tabs results in a display of a plurality of content identifiers corresponding to the selected audio user interface tab (see e.g. paragraphs 0250-0253 and 0259-0261, and FIGS. 38, 39 and 44). Krzyzanowski further discloses that the interface for the media application also comprises a video user interface tab, wherein selection of the video user interface tab results in a display of a plurality of video content identifiers corresponding to the selected video user interface tab (see e.g. paragraphs 0250 and 0254-0255, and FIG. 41).
It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina and Krzyzanowski before them prior to the effective filing date of the claimed invention, to modify the interface taught by Alsina such that the audio user interface elements and the video user interface element (i.e. the “Videos” menu item) are presented as tabs, as taught by Krzyzanowski, each of which corresponds to a plurality of content identifiers, and wherein selection of one of the audio user interface tabs results in a display of a plurality of audio content identifiers corresponding to the selected audio user interface tab, and wherein selection of the video user interface tab (when displayed) results in a display of a plurality of video content identifiers corresponding to the selected video user interface tab. It would have been advantageous to one of ordinary skill to utilize such a combination because it would enable the user to efficiently access audio or video content for playback, as is evident from Krzyzanowski (see e.g. paragraphs 0250-0255 and FIGS. 38, 39, 41 and 44).
Walker generally teaches enabling a user to select video programming for recording using a user equipment device located in a home network, wherein the user can configure the delivery of recorded content to different user equipment devices in the home network (see e.g. paragraph 0007). Regarding the claimed invention, Walker particularly teaches detecting a request to record video content using a first device, wherein the request comprises an option to make the recorded video content available for viewing in a vehicle (see e.g. paragraph 0119 and FIG. 8a: Walker describes a user interface presented by a first device that enables a user to request to record a video program. Walker discloses that the interface comprises an option to share the recording with other user equipment devices on the home network – see e.g. paragraph 0124 and FIG. 8a. Walker further discloses that the other devices can comprise a vehicle entertainment device – see e.g. paragraph 0050. Accordingly, the request can comprise an option to make the recorded video content available for viewing on other devices, including in a vehicle. Walker also discloses that the user can set particular options for delivering the recorded content to the vehicle – see e.g. paragraphs 0050, 0126 and 0173.). It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina, Krzyzanowski and Walker before them prior to the effective filing date of the claimed invention, to modify the method taught by Alsina and Krzyzanowski so as to detect a request to record video content using a first device, wherein the request comprises an option to make the recorded video content available for viewing in the vehicle, as taught by Walker, and whereby the option to play on the user interface the recorded video content would particularly be for this recorded video content that was recorded with the option to make the recorded video content available for viewing in the vehicle.
It would have been advantageous to one of ordinary skill to utilize such a combination because it would enable the user to efficiently record video for access on a plurality of different devices, as is suggested by Walker (see e.g. paragraphs 0003-0007).
Archer generally describes methods and systems for the personalization of interactive media guidance applications, e.g. by providing targeted advertisements to a user, based on recording-related actions (see e.g. paragraphs 0008-0010). Particularly, regarding the claimed invention, Archer teaches determining, prior to receiving a user selection of an option to play recorded video content, that the recorded video content comprises content related to a non-geographical attribute, and then displaying an advertisement associated with the non-geographical attribute (see e.g. paragraphs 0068 and 0073, and FIG. 8: Archer describes a media guidance application that displays, in addition to selectable media items, a targeted advertisement to the user. Archer discloses that the targeted advertisement is selected for display in the media guidance application based on a user profile, which includes the user’s content preferences determined from recording-related actions of the user – see e.g. paragraphs 0009-0010, 0064 and 0073. The recording-related actions that indicate the user’s content preferences include recording a media program – see e.g. paragraphs 0009 and 0030. Accordingly, it is apparent that the user can request to record a media program, whereby prior to the user later requesting to actually view the recorded media program, the system described by Archer: (i) determines that the recorded media program comprises content related to a non-geographical attribute, i.e. content indicating a user preference; and (ii) selects and displays, in the media guidance application, an advertisement associated at least in part with the non-geographical attribute).
It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina, Krzyzanowski, Walker and Archer before the effective filing date of the claimed invention, to modify the method taught by Alsina, Krzyzanowski and Walker so as to further include, prior to selection of the option to play on the user interface the recorded video content: (i) determining that the recorded video content comprises content related to a non-geographical attribute; and (ii) displaying (i.e. via the vehicle content interface) an advertisement associated with the non-geographical attribute, as is taught by Archer. It would have been advantageous to one of ordinary skill to utilize such a combination because it would enable the user interface to be further personalized to the user, as is taught by Archer (see e.g. paragraphs 0006-0007). Accordingly, Alsina, Krzyzanowski, Walker and Archer teach a method similar to that of claim 1, which includes displaying an advertisement associated with a non-geographical attribute, but do not explicitly disclose that the advertisement is displayed based at least in part on determining that the vehicle is in a geographical location in proximity to a landmark associated with the non-geographical attribute, as is further required by claim 1.
Liu nevertheless generally teaches determining that user-requested content comprises content related to a non-geographic attribute (e.g. a topic and/or user interest) and, based at least in part on determining that the user is in a geographical location proximate to a landmark (e.g. a restaurant or other business) associated with the non-geographic attribute, displaying, via the mobile device interface, an advertisement associated with the non-geographic attribute (see e.g. column 2, line 53 – column 3, line 39; column 7, lines 8-55; column 8, lines 9-17; column 10, line 38 – column 11, line 16; column 12, lines 39-43; column 13, lines 17-60; column 14, lines 26-35; and column 16, lines 11-39).
It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina, Krzyzanowski, Walker, Archer and Liu before the effective filing date of the claimed invention, to modify the method taught by Alsina, Krzyzanowski, Walker and Archer such that the advertisement is further displayed (i.e. via the vehicle content interface) based at least in part on determining that the user (and thus the vehicle) is in a geographical location proximate to a landmark associated with the non-geographic attribute, as is taught by Liu. It would have been advantageous to one of ordinary skill to utilize such a combination because it would provide for advertisements that are more likely to be selected by the user, as is evident from Liu (see e.g. column 3, lines 30-39). Accordingly, Alsina, Krzyzanowski, Walker, Archer and Liu are considered to teach, to one of ordinary skill in the art, a method like that of claim 1.
As per claim 2, Alsina teaches that determining that the vehicle is not in motion can comprise determining that the vehicle is in a parked position (see e.g. paragraphs 0082, 0118 and 0142). Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a method like that of claim 2.
As per claim 3, it would have been obvious, as is described above, to modify the interface taught by Alsina such that the video user interface element is presented as a tab, as taught by Krzyzanowski, which corresponds to a plurality of video content identifiers and wherein selection of the video user interface tab results in a display of the plurality of video content identifiers. Krzyzanowski suggests that each of the video content identifiers can be selected to display video content associated with the selected video content identifier (see e.g. paragraphs 0254-0256, and FIGS. 41 and 42). Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a method like that of claim 3.
As per claim 4, Alsina teaches that the vehicle content interface (i.e. the accessory device) can be coupled to a user equipment device (e.g. to another display device) whereby the vehicle content interface transmits video content to the user equipment device for display (see e.g. paragraphs 0027-0028). Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a method like that of claim 4.
As per claim 5, it would have been obvious, as is described above, to modify the interface taught by Alsina such that the video user interface element is presented as a tab, as taught by Krzyzanowski, which corresponds to a plurality of video content identifiers and wherein selection of the video user interface tab results in a display of the plurality of video content identifiers. Krzyzanowski discloses that, based at least in part on receiving user input to view one of the plurality of video content identifiers, video content associated with the one of the plurality of video content identifiers is displayed (see e.g. paragraphs 0254-0256, and FIGS. 41 and 42). Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a method like that of claim 5.
As per claim 6, Alsina teaches displaying, while the vehicle is not in motion, one of a podcast user interface element, a radio user interface element or a music user interface element on the user interface (see e.g. paragraph 0163 and FIG. 9B: as noted above, Alsina discloses that the GUI defined by the PMD for display by the accessory device can comprise an audio user interface element, e.g. an “Albums,” “Songs,” or “Audiobooks” menu item. The “Albums” or “Songs” menu item can be considered a “music user interface element.” Alsina suggests that audio functionalities, including those accessed by the “Albums” or “Songs” menu items, are accessible when the vehicle is not in motion – see e.g. paragraphs 0080-0082. Consequently, it is apparent that the “Albums” and “Songs” menu items, i.e. music user interface elements, are displayed by the accessory device while the vehicle is not in motion.). As noted above, it would have been obvious to modify the interface taught by Alsina so that such audio user interface elements are presented as tabs, as taught by Krzyzanowski. Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a method like that of claim 6.
As per claim 8, Alsina teaches displaying, while the vehicle is in motion, one of a podcast user interface element, a radio user interface element or a music user interface element on the user interface (see e.g. paragraph 0163 and FIG. 9B: as noted above, Alsina discloses that the GUI defined by the PMD for display by the accessory device can comprise an audio user interface element, e.g. an “Albums,” “Songs,” or “Audiobooks” menu item. The “Albums” or “Songs” menu item can be considered a “music user interface element.” Alsina discloses that the GUI of FIG. 9B omits or grays out display of a “Videos” menu item – see e.g. paragraph 0163 – and discloses that such an omission of video functionality occurs when the vehicle is in motion – see e.g. paragraphs 0067, 0081-0082, 0118, and 0142. Accordingly, it is apparent that the GUI of FIG. 9B, which comprises a music user interface element such as a “Songs” menu item, is displayed by the accessory device while the vehicle is in motion.). As noted above, it would have been obvious to modify the interface taught by Alsina so that such audio user interface elements are presented as tabs, as taught by Krzyzanowski. Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a method like that of claim 8.
Regarding claim 11, Alsina teaches providing a graphical user interface (“GUI”) on an accessory device, wherein the GUI can be defined and managed by a portable media device (“PMD”) rather than the accessory device (see e.g. paragraph 0007). Alsina particularly discloses that the accessory device can be an in-vehicle media control unit that can be installed in a dashboard of a vehicle (see e.g. paragraph 0027). As claimed, Alsina describes a system comprising:
a sensor configured to determine whether a vehicle is in motion (see e.g. paragraphs 0050, 0067, 0106 and 0118: Alsina discloses that the accessory can receive information such as a current speed or gear that indicates whether the vehicle is in motion, and then provide such information to the PMD. The system described by Alsina necessarily comprises at least one sensor that determines whether the vehicle is in motion.);
a control circuitry coupled to the sensor (see e.g. paragraphs 0024, 0027-0030, 0041-0043, 0047, 0050 and 0067: Alsina teaches that the accessory and the PMD are coupled to the sensor so as to receive the information indicating whether the vehicle is in motion. The PMD and the accessory, or the processors thereof, can be considered “control circuitry” like claimed.), the control circuitry configured to:
while the vehicle is in motion at a first time, display, via a vehicle content interface, one or more audio user interface elements (see e.g. paragraph 0163 and FIG. 9B: Alsina discloses that the GUI defined by the PMD for display by the accessory device can comprise “Albums,” “Songs,” and “Audiobooks” menu items. The “Albums,” “Songs,” and “Audiobooks” menu items are each considered an audio user interface element. Alsina discloses that the GUI of FIG. 9B omits or grays out display of a “Videos” menu item – see e.g. paragraph 0163 – and discloses that such an omission of video functionality occurs when the vehicle is in motion – see e.g. paragraphs 0067, 0081-0082, 0118, and 0142. Accordingly, it is apparent that the GUI of FIG. 9B, which comprises audio user interface elements such as the “Songs” and “Audiobooks” menu items, is displayed by the accessory device, i.e. on a “vehicle content interface,” while the vehicle is in motion at a first time.); and
based at least in part on determining that the vehicle is not in motion at a second time, display, via the vehicle content interface, the user interface comprising an option to play on the user interface recorded video content (see e.g. paragraphs 0081-0082, 0118 and 0142: Alsina teaches that, responsive to receiving an indication that the vehicle is not in motion, e.g. if the vehicle is placed in park, the PMD can allow video playback functionality to become available. Accordingly, it is apparent that when the vehicle is determined to be in park, i.e. not in motion at a second time, the GUI of FIG. 9B is displayed by the accessory device on the vehicle content interface, but with the “Videos” menu item also included therein. The “Videos” item is considered an option to play on the user interface recorded video content.).
Accordingly, Alsina teaches a system similar to that of claim 11, but does not explicitly disclose that the one or more audio user interface elements are tabs, each of which corresponds to a plurality of content identifiers on a user interface, and wherein selection of one of the audio user interface tabs results in a display of a plurality of audio content identifiers corresponding to the selected audio user interface tab on the user interface, as is required by claim 11. Moreover, Alsina does not teach detecting a request to record video content using a first device, wherein the request comprises an option to make the recorded video content available for viewing in the vehicle, and whereby the option to play on the user interface the recorded video content is particularly displayed with respect to this recorded video content that was recorded with the option to make the recorded video content available for viewing in the vehicle, as is further required by claim 11. Alsina also does not teach, prior to selection of the option to play on the user interface the recorded video content: (i) determining that the recorded video content comprises content related to a non-geographic attribute; and (ii) based at least in part on determining that the vehicle is in a geographic location in proximity to a landmark associated with the non-geographic attribute, displaying, via the vehicle content interface, an advertisement associated with the non-geographic attribute, as is further required by claim 11.
Krzyzanowski nevertheless describes a user interface for a media application, wherein the interface comprises one or more audio user interface tabs (e.g. a “music” tab and a “podcasts” tab), each of which corresponds to a plurality of content identifiers (e.g. songs, podcasts) and wherein selection of one of the audio user interface tabs results in a display of a plurality of content identifiers corresponding to the selected audio user interface tab (see e.g. paragraphs 0250-0253 and 0259-0261, and FIGS. 38, 39 and 44). Krzyzanowski further discloses that the interface for the media application also comprises a video user interface tab, wherein selection of the video user interface tab results in a display of a plurality of video content identifiers corresponding to the selected video user interface tab (see e.g. paragraphs 0250 and 0254-0255, and FIG. 41).
It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina and Krzyzanowski before them prior to the effective filing date of the claimed invention, to modify the interface taught by Alsina such that the audio user interface elements and the video user interface element (i.e. the “Videos” menu item) are presented as tabs, as taught by Krzyzanowski, each of which corresponds to a plurality of content identifiers, and wherein selection of one of the audio user interface tabs results in a display of a plurality of audio content identifiers corresponding to the selected audio user interface tab, and wherein selection of the video user interface tab (when displayed) results in a display of a plurality of video content identifiers corresponding to the selected video user interface tab. It would have been advantageous to one of ordinary skill to utilize such a combination because it would enable the user to efficiently access audio or video content for playback, as is evident from Krzyzanowski (see e.g. paragraphs 0250-0255 and FIGS. 38, 39, 41 and 44).
Walker generally teaches enabling a user to select video programming for recording using a user equipment device located in a home network, wherein the user can configure the delivery of recorded content to different user equipment devices in the home network (see e.g. paragraph 0007). Regarding the claimed invention, Walker particularly teaches detecting a request to record video content using a first device, wherein the request comprises an option to make the recorded video content available for viewing in a vehicle (see e.g. paragraph 0119 and FIG. 8a: Walker describes a user interface presented by a first device that enables a user to request to record a video program. Walker discloses that the interface comprises an option to share the recording with other user equipment devices on the home network – see e.g. paragraph 0124 and FIG. 8a. Walker further discloses that the other devices can comprise a vehicle entertainment device – see e.g. paragraph 0050. Accordingly, the request can comprise an option to make the recorded video content available for viewing on other devices, including in a vehicle. Walker also discloses that the user can set particular options for delivering the recorded content to the vehicle – see e.g. paragraphs 0050, 0126 and 0173.). It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina, Krzyzanowski and Walker before them prior to the effective filing date of the claimed invention, to modify the system taught by Alsina and Krzyzanowski so as to detect a request to record video content using a first device, wherein the request comprises an option to make the recorded video content available for viewing in the vehicle, as taught by Walker, and whereby the option to play on the user interface the recorded video content would particularly be for this recorded video content that was recorded with the option to make the recorded video content available for viewing in the vehicle.
It would have been advantageous to one of ordinary skill to utilize such a combination because it would enable the user to efficiently record video for access on a plurality of different devices, as is suggested by Walker (see e.g. paragraphs 0003-0007).
Archer generally describes methods and systems for the personalization of interactive media guidance applications, e.g. by providing targeted advertisements to a user, based on recording-related actions (see e.g. paragraphs 0008-0010). Particularly, regarding the claimed invention, Archer teaches determining, prior to receiving a user selection of an option to play recorded video content, that the recorded video content comprises content related to a non-geographical attribute, and then displaying an advertisement associated with the non-geographical attribute (see e.g. paragraphs 0068 and 0073, and FIG. 8: Archer describes a media guidance application that displays, in addition to selectable media items, a targeted advertisement to the user. Archer discloses that the targeted advertisement is selected for display in the media guidance application based on a user profile, which includes the user’s content preferences determined from recording-related actions of the user – see e.g. paragraphs 0009-0010, 0064 and 0073. The recording-related actions that indicate the user’s content preferences include recording a media program – see e.g. paragraphs 0009 and 0030. Accordingly, it is apparent that the user can request to record a media program, whereby prior to the user later requesting to actually view the recorded media program, the system described by Archer: (i) determines that the recorded media program comprises content related to a non-geographical attribute, i.e. content indicating a user preference; and (ii) selects and displays, in the media guidance application, an advertisement associated at least in part with the non-geographical attribute).
It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina, Krzyzanowski, Walker and Archer before the effective filing date of the claimed invention, to modify the system taught by Alsina, Krzyzanowski and Walker so as to further include, prior to selection of the option to play on the user interface the recorded video content: (i) determining that the recorded video content comprises content related to a non-geographical attribute; and (ii) displaying (i.e. via the vehicle content interface) an advertisement associated with the non-geographical attribute, as is taught by Archer. It would have been advantageous to one of ordinary skill to utilize such a combination because it would enable the user interface to be further personalized to the user, as is taught by Archer (see e.g. paragraphs 0006-0007). Accordingly, Alsina, Krzyzanowski, Walker and Archer teach a system similar to that of claim 11, which displays an advertisement associated with a non-geographical attribute, but do not explicitly disclose that the advertisement is displayed based at least in part on determining that the vehicle is in a geographical location in proximity to a landmark associated with the non-geographical attribute, as is further required by claim 11.
Liu nevertheless generally teaches determining that user-requested content comprises content related to a non-geographic attribute (e.g. a topic and/or user interest) and, based at least in part on determining that the user is in a geographical location proximate to a landmark (e.g. a restaurant or other business) associated with the non-geographic attribute, displaying, via the mobile device interface, an advertisement associated with the non-geographic attribute (see e.g. column 2, line 53 – column 3, line 39; column 7, lines 8-55; column 8, lines 9-17; column 10, line 38 – column 11, line 16; column 12, lines 39-43; column 13, lines 17-60; column 14, lines 26-35; and column 16, lines 11-39).
It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina, Krzyzanowski, Walker, Archer and Liu before the effective filing date of the claimed invention, to modify the system taught by Alsina, Krzyzanowski, Walker and Archer such that the advertisement is further displayed (i.e. via the vehicle content interface) based at least in part on determining that the user (and thus the vehicle) is in a geographical location proximate to a landmark associated with the non-geographic attribute, as is taught by Liu. It would have been advantageous to one of ordinary skill to utilize such a combination because it would provide for advertisements that are more likely to be selected by the user, as is evident from Liu (see e.g. column 3, lines 30-39). Accordingly, Alsina, Krzyzanowski, Walker, Archer and Liu are considered to teach, to one of ordinary skill in the art, a system like that of claim 11.
As per claim 13, it would have been obvious, as is described above, to modify the interface taught by Alsina such that the video user interface element is presented as a tab, as taught by Krzyzanowski, which corresponds to a plurality of video content identifiers and wherein selection of the video user interface tab results in a display of the plurality of video content identifiers. Krzyzanowski suggests that each of the video content identifiers can be selected to display video content associated with the selected video content identifier (see e.g. paragraphs 0254-0256, and FIGS. 41 and 42). Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a system like that of claim 13.
As per claim 14, Alsina teaches that the vehicle content interface (i.e. the accessory device) can be coupled to a user equipment device (e.g. to another display device) whereby the vehicle content interface transmits video content to the user equipment device for display (see e.g. paragraphs 0027-0028). Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a system like that of claim 14.
As per claim 15, it would have been obvious, as is described above, to modify the interface taught by Alsina such that the video user interface element is presented as a tab, as taught by Krzyzanowski, which corresponds to a plurality of video content identifiers and wherein selection of the video user interface tab results in a display of the plurality of video content identifiers. Krzyzanowski discloses that, based at least in part on receiving user input to view one of the plurality of video content identifiers, video content associated with the one of the plurality of video content identifiers is displayed (see e.g. paragraphs 0254-0256, and FIGS. 41 and 42). Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a system like that of claim 15.
As per claim 16, Alsina teaches displaying, while the vehicle is not in motion, one of a podcast user interface element, a radio user interface element or a music user interface element on the user interface (see e.g. paragraph 0163 and FIG. 9B: as noted above, Alsina discloses that the GUI defined by the PMD for display by the accessory device can comprise an audio user interface element, e.g. an “Albums,” “Songs,” or “Audiobooks” menu item. The “Albums” or “Songs” menu item can be considered a “music user interface element.” Alsina suggests that audio functionalities, understandably including those accessed by the “Albums” or “Songs” menu items, are accessible when the vehicle is not in motion – see e.g. paragraphs 0080-0082. Consequently, it is apparent that the “Albums” and “Songs” menu items, i.e. music user interface elements, are displayed by the accessory device while the vehicle is not in motion.). As noted above, it would have been obvious to modify the interface taught by Alsina so that such audio user interface elements are presented as tabs, as taught by Krzyzanowski. Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a system like that of claim 16.
As per claim 18, Alsina teaches displaying, while the vehicle is in motion, one of a podcast user interface element, a radio user interface element or a music user interface element on the user interface (see e.g. paragraph 0163 and FIG. 9B: as noted above, Alsina discloses that the GUI defined by the PMD for display by the accessory device can comprise an audio user interface element, e.g. an “Albums,” “Songs,” or “Audiobooks” menu item. The “Albums” or “Songs” menu item can be considered a “music user interface element.” Alsina discloses that the GUI of FIG. 9B omits or grays out the display of a “Videos” menu item – see e.g. paragraph 0163 – and discloses that such an omission of video functionality occurs when the vehicle is in motion – see e.g. paragraphs 0067, 0081-0082, 0118, and 0142. Accordingly, it is apparent that the GUI of FIG. 9B, which comprises a music user interface element such as a “Songs” menu item, is displayed by the accessory device while the vehicle is in motion.). As noted above, it would have been obvious to modify the interface taught by Alsina so that such audio user interface elements are presented as tabs, as taught by Krzyzanowski. Accordingly, the above-described combination of Alsina, Krzyzanowski, Walker, Archer and Liu is further considered to teach a system like that of claim 18.
Claims 7, 9, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Alsina, Krzyzanowski, Walker, Archer and Liu, which is described above, and also over U.S. Patent Application Publication No. 2006/0123353 to Matthews et al. (“Matthews”).
Regarding claims 7 and 17, Alsina, Krzyzanowski, Walker, Archer and Liu teach a method like that of claim 6 and a system like that of claim 16, as is described above, which entail displaying one of a podcast user interface tab, a radio user interface tab or a music user interface tab on a user interface while a vehicle is not in motion. As particularly noted above, it would have been obvious to modify the interface taught by Alsina such that the audio user interface elements (i.e. the “Albums” and “Songs” menu items) and the video user interface element (i.e. the “Videos” menu item) are presented as tabs, as taught by Krzyzanowski, each of which corresponds to a plurality of content identifiers, and wherein selection of one of the audio user interface tabs results in a display of a plurality of audio content identifiers corresponding to the selected audio user interface tab, and wherein selection of the video user interface tab (when displayed) results in a display of a plurality of video content identifiers corresponding to the selected video user interface tab. It follows that selecting the video user interface tab would correspond to the option to play on the user interface the recorded video content. Alsina, Krzyzanowski, Walker, Archer and Liu, however, do not explicitly teach, while the vehicle is not in motion, decreasing a horizontal size of the video user interface tab, the podcast user interface tab, the radio user interface tab and the music user interface tab on the user interface, as is required by claims 7 and 17.
Matthews teaches taskbar buttons, analogous to the tabs taught by Alsina and Krzyzanowski, that are selectable to display an associated application and its content (see e.g. paragraphs 0031 and 0032). Matthews particularly discloses that the taskbar buttons can be sized based on the number of taskbar buttons to present (see e.g. paragraph 0053). It is therefore apparent that removing a taskbar button can result in the sizes of the other taskbar buttons being adjusted (i.e. increased in size) to occupy the space freed up by the removed taskbar button, and that conversely, adding a taskbar button can result in the sizes of the other taskbar buttons being adjusted (i.e. decreased in size) to make space for the added taskbar button.
It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina, Krzyzanowski, Walker, Archer, Liu and Matthews before the effective filing date of the claimed invention, to modify the interface taught by Alsina, Krzyzanowski, Walker, Archer and Liu so that when the video user interface element (i.e. tab) is included in the user interface (i.e. while the vehicle is not in motion), the horizontal sizes of the other user interface tabs (i.e. the podcast user interface tab, the radio user interface tab and the music user interface tab) are adjusted (i.e. decreased in size), as done with the taskbar buttons taught by Matthews. It would have been advantageous to one of ordinary skill to utilize such a combination because it would maximize the space available to display the tabs, as is evident from Matthews (see e.g. paragraph 0053). Accordingly, Alsina, Krzyzanowski, Walker, Archer, Liu and Matthews are considered to teach, to one of ordinary skill in the art, a method like that of claim 7 and a system like that of claim 17.
Regarding claims 9 and 19, Alsina, Krzyzanowski, Walker, Archer and Liu teach a method like that of claim 8 and a system like that of claim 18, as is described above, which entail displaying one of a podcast user interface tab, a radio user interface tab or a music user interface tab on a user interface while a vehicle is in motion. Alsina, Krzyzanowski, Walker, Archer and Liu, however, do not explicitly teach, while the vehicle is in motion, increasing a horizontal size of the podcast user interface tab, the radio user interface tab and the music user interface tab on the user interface, as is required by claims 9 and 19.
Nevertheless, as noted above, Matthews describes taskbar buttons that are analogous to the tabs taught by Alsina, Krzyzanowski, Walker, Archer and Liu, and that can be sized based on the number of taskbar buttons to present (see e.g. paragraph 0053). It is therefore apparent that removing a taskbar button can result in the sizes of the other taskbar buttons being adjusted (i.e. increased in size) to occupy the space freed up by the removed taskbar button, and that conversely, adding a taskbar button can result in the sizes of the other taskbar buttons being adjusted (i.e. decreased in size) to make space for the added taskbar button.
It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina, Krzyzanowski, Walker, Archer, Liu and Matthews before the effective filing date of the claimed invention, to modify the interface taught by Alsina, Krzyzanowski, Walker, Archer and Liu so that when the video user interface element (i.e. tab) is excluded from the user interface (i.e. while the vehicle is in motion), the horizontal sizes of the other user interface tabs (i.e. the podcast user interface tab, the radio user interface tab and the music user interface tab) are adjusted (i.e. increased in size), as done with the taskbar buttons taught by Matthews. It would have been advantageous to one of ordinary skill to utilize such a combination because it would maximize the space available to display the tabs, as is evident from Matthews (see e.g. paragraph 0053). Accordingly, Alsina, Krzyzanowski, Walker, Archer, Liu and Matthews are considered to teach, to one of ordinary skill in the art, a method like that of claim 9 and a system like that of claim 19.
Claims 31 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Alsina, Krzyzanowski, Walker, Archer and Liu, which is described above, and also over U.S. Patent Application Publication No. 2014/0309870 to Ricci et al. (“Ricci”).
Regarding claims 31 and 32, Alsina, Krzyzanowski, Walker, Archer and Liu teach a method like that of claim 1 and a system like that of claim 11, as is described above, which entail displaying, via a vehicle content interface, an advertisement associated with a non-geographic attribute. As particularly noted above, it would have been obvious to modify the method and system taught by Alsina, Krzyzanowski and Walker so as to further determine that the media content (i.e. recorded video content) comprises content related to a non-geographic attribute, and to display (i.e. via the vehicle content interface) an advertisement associated with the non-geographic attribute, as is taught by Archer. Archer further suggests that the advertisement can comprise audio and video content (see e.g. paragraph 0074). Alsina, Krzyzanowski, Walker, Archer and Liu, however, do not explicitly teach, based at least in part on determining that the vehicle is in motion at a third time, causing an audio portion of the advertisement to be played without playing of video content of the advertisement, as is required by claims 31 and 32.
Ricci nevertheless generally teaches, based at least in part on determining that a vehicle is in motion, causing an audio portion of video content to be played without playing the corresponding video content (see e.g. paragraphs 0751 and 0756-0758).
It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina, Krzyzanowski, Walker, Archer, Liu and Ricci before the effective filing date of the claimed invention, to modify the interface taught by Alsina, Krzyzanowski, Walker, Archer and Liu so that, based at least in part on determining that the vehicle is in motion (e.g. at a third time), an audio portion of the video content (i.e. the advertisement) is caused to be played without playing the corresponding video content, as is taught by Ricci. It would have been advantageous to one of ordinary skill to utilize such a combination because it can reduce driver distraction, as is taught by Ricci (see e.g. paragraphs 0751 and 0756-0758). Accordingly, Alsina, Krzyzanowski, Walker, Archer, Liu and Ricci are considered to teach, to one of ordinary skill in the art, a method like that of claim 31 and a system like that of claim 32.
Claim 33 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Alsina, Krzyzanowski, Walker, Archer and Liu, which is described above, and also over U.S. Patent Application Publication No. 2006/0155429 to Boone et al. (“Boone”).
As described above, Alsina, Krzyzanowski, Walker, Archer and Liu teach a method like that of claim 1, which includes displaying, based on determining that a vehicle is not in motion, an option to play on a user interface recorded video content that was recorded with an option to make the recorded video content available for viewing in the vehicle. Alsina, Krzyzanowski, Walker, Archer and Liu, however, do not explicitly teach that the option to play on the user interface the recorded video content is a first option, wherein based at least in part on the determination that the vehicle is not in motion, the user interface displays a second option to play the recorded video content that was recorded with the option to make the recorded video content available for viewing in the vehicle on a second device that is in network communication with the vehicle, as is required by claim 33.
Boone nevertheless teaches presenting a first option to play on a user interface recorded video content available for viewing in a vehicle, and a second option to play the recorded video content on a second device that is in network communication with the vehicle (see e.g. paragraph 0066).
It would have been obvious to one of ordinary skill in the art, having the teachings of Alsina, Krzyzanowski, Walker, Archer, Liu and Boone before the effective filing date of the claimed invention, to modify the interface taught by Alsina, Krzyzanowski, Walker, Archer and Liu so that the option, which is displayed when the vehicle is not in motion, to play on the user interface the recorded video content is a first option, wherein the user interface also displays a second option (i.e. again, when the vehicle is not in motion) to play the recorded video content (i.e. the recorded video content that was recorded with the option to make the recorded video content available for viewing in the vehicle) on a second device that is in network communication with the vehicle, as is taught by Boone. It would have been advantageous to one of ordinary skill to utilize such a combination because it would enable the user to view the recorded video content at a more preferential device, as is suggested by Boone (see e.g. paragraph 0066). Accordingly, Alsina, Krzyzanowski, Walker, Archer, Liu and Boone are considered to teach, to one of ordinary skill in the art, a method like that of claim 33.
Claims 35 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Alsina, Krzyzanowski, Walker, Archer and Liu, which is described above, and also over U.S. Patent Application Publication No. 2015/0026708 to Ahmed et al. (“Ahmed”).
Regarding claim 35, Alsina, Krzyzanowski, Walker, Archer and Liu teach a method like that of claim 1, as is described above, which entails determining that recorded video content comprises content related to a non-geographical attribute, and displaying, via a vehicle content interface, an advertisement associated with the non-geographical attribute. Alsina, Krzyzanowski, Walker, Archer and Liu, however, do not explicitly disclose that the content related to the non-geographical attribute comprises con