Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office action is responsive to application No. 17/757,169 filed on 11/17/2025. Claims 24-46 are pending and have been examined.
Response to Arguments
Applicant’s arguments with respect to claims 24-46 have been considered but are moot in view of the new grounds of rejection.
Although new grounds of rejection have been made, some of Applicant’s arguments are addressed below.
Applicant asserts on page 11 of 13 that “Further, and as exemplified above, the cited references also fail to disclose or suggest a system in which multiple data sources, including a primary data source and one or more secondary data sources, are mutually unaware of each other's data contributions to the media player. The mutually-unaware, distributed sourcing of auxiliary elements is a privacy-preserving architecture not taught or suggested in the cited references, which instead disclose systems in which data is centrally coordinated or exchanged between servers. In contrast, one or more embodiments within the scope of the claims employ structural separation of data sources, with the media player orchestrating their respective contributions solely through the control data without any knowledge shared between the sources. The cited provides no teaching or rationale that would have led a person of ordinary skill to modify any reference to achieve this architecture.”
In response, the Examiner respectfully disagrees. This limitation is taught by the combination, and in particular by Bulkowski, which teaches wherein a primary data source, and one or more secondary data sources, are mutually unaware of data provided by the other to a media player (Paragraph 0011 teaches data can come from many different sources, and data from distributed and mutually unaware sources may be able to run in a single integrated environment. Paragraphs 0037 and 0039 teach retrieval of different types of data. Paragraph 0090 teaches data from multiple independent and mutually unaware sources can coexist in a single system, where data can be set up without any need for knowledge of data associated with anything else).
Also note that the Lennon reference has a media player that reads and interprets the control data in order to create, and fetch, any required assets so as to recreate the auxiliary data locally.
Please also see the rejections below, where the claims are rejected under 35 U.S.C. 103 as unpatentable over Lennon, Simpson, Bulkowski, and Hampson.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 24-28, 30, 31, and 43-46 are rejected under 35 U.S.C. 103 as being unpatentable over Lennon et al. (US 2016/0277781), in view of Simpson et al. (US 2011/0115977), in view of Bulkowski et al. (US 2004/0034875), and further in view of Hampson et al. (US 2016/0234553).
Consider claim 24, Lennon teaches a method for distributing video content across a network (Fig.1), the method comprising:
providing video data to a primary data source (Paragraph 0088 teaches creating videos and sending the video to a control centre 20);
associating control data with the video data (Paragraph 0088 teaches that a user who creates the video is able to add auxiliary data such as effects, but when the data is being transmitted across a network 30, only the raw video data is sent together with some control information in the form of metadata. Paragraph 0090 teaches that, at the creator, a data structure is generated comprising the video data and the control data. The data structure is transmitted to the control centre. Paragraph 0091 teaches control centre 20 may alter the associated control data identifying the specified effects to be applied to the raw video data if desired. Paragraph 0092 teaches the synchronization process enables raw video data to be transmitted quickly and unimpeded through the network 30 along with the associated control data);
broadcasting the video data with associated control data from the primary data source to one or more user devices across the network, wherein each user device comprises a respective media player which is configured to locally interpret the control data during playback of the video data, wherein the control data defines one or more elements of auxiliary data to be created by the media player including the elements of the auxiliary data which are to be retrieved from one or more secondary data sources (Paragraph 0008 teaches broadcasting the video data and control data to one or more second devices across the network. Paragraph 0088 teaches the creator adds auxiliary data such as graphics, customizable text overlays, special effects, audio, etc., but when transmitted over the network only the raw video data is sent together with some control information in the form of metadata. Paragraph 0090 teaches that, at the creator, a data structure is generated comprising the video data and the control data. The data structure is transmitted to the control centre. Paragraph 0091 teaches a second application creates the specified effects locally on the device. The second application can determine which specified effects to create locally on the tablet and, if required, which to fetch from local storage or remotely. The fetch process creates the specified effects in real time and fetches any assets that may be associated with the specified effects as instructed by the control data. Paragraph 0092 teaches raw video data along with associated control data being transmitted over network 30, where, when viewed by the end user using the media player 126 on tablets 25, the specified effects may be recreated locally. Paragraph 0123 teaches the media player within the second application receives the video data and the associated control data, at step 302. The multi-layer media player interprets the control data, at step 303. Paragraph 0158 teaches the second smart device reads the control data, described in VSML, which specifies the manner in which effects, text, animation, etc. should be played back as a video package);
creating, in real time and locally on the respective user device, the auxiliary data defined by the control data while the media player is playing the video data locally on the respective user devices, wherein the created auxiliary data is overlaid on top of the video data and removed from the video data in accordance with timing instructions defined by the control data (Paragraph 0090 teaches a media player that is operable to read the data structure 32 so that when the video is playing the specified effects are applied at the appropriate times. When the video is viewed by the end user, tablet 25 synchronously creates the high quality graphics, text and special effects via the media player 126. These effects are then overlaid by the multi-layer media player onto the raw video. Paragraph 0091 teaches the second application can determine which specified effects to create locally on the tablet and, if required, which to fetch from local storage or remotely. The fetch process creates the specified effects in real time and fetches any assets that may be associated with the specified effects as instructed by the control data. Paragraph 0092 teaches raw video data along with associated control data being transmitted over network 30, where, when viewed by the end user using the media player 126 on tablets 25, the specified effects may be recreated locally. The high quality graphics, text and/or special effects are simultaneously overlaid onto the raw video data when viewed by the end user using the media player 126. Paragraph 0123 teaches the multi-layer media player interprets the control data and recreates, builds and assembles the specified effects and graphical elements locally for overlay onto the video data. Fig.7, Paragraph 0124 teaches how layering of specified graphics and effects is achieved. Layer 2 is to be applied during the time interval t(2) to t(4); Layer 3 is to be applied during the time interval t(3) to t(4), and so on, until the last layer, Layer n, is applied between t(n) and t(n+1). Paragraph 0128 teaches the control data includes machine readable markup language that represents video data elements in a textual format that the media player can interpret and compose at playback. Video data elements that are described by the VSML include but are not limited to textured blocks, text, images, downloadable video assets, streaming video assets and other graphical elements. Paragraph 0129 teaches VSML consists of a JSON representation of a video project, which is separated into segments, layers, and elements. An example given is a single video stream with an image watermark overlay in the top left corner of the screen that starts at time 3 and animates out after 10 seconds);
wherein the control data defines one or more elements of the auxiliary data to be created by the media player including the elements of the auxiliary data which are to be retrieved from one or more secondary data sources (Paragraph 0088 teaches the creator adds auxiliary data such as graphics, customizable text overlays, special effects, audio, etc., but when transmitted over the network only the raw video data is sent together with some control information in the form of metadata. Paragraph 0090 teaches that, at the creator, a data structure is generated comprising the video data and the control data. The data structure is transmitted to the control centre. Paragraph 0091 teaches a second application creates the specified effects locally on the device. The second application can determine which specified effects to create locally on the tablet and, if required, which to fetch from local storage or remotely. The fetch process creates the specified effects in real time and fetches any assets that may be associated with the specified effects as instructed by the control data. Paragraph 0092 teaches raw video data along with associated control data being transmitted over network 30, where, when viewed by the end user using the media player 126 on tablets 25, the specified effects may be recreated locally. Paragraph 0123 teaches the media player within the second application receives the video data and the associated control data, at step 302. The multi-layer media player interprets the control data, at step 303. Paragraph 0158 teaches the second smart device reads the control data, described in VSML, which specifies the manner in which effects, text, animation, etc. should be played back as a video package),
wherein the control data defines specific times during the playing of the video data when the one or more elements of the auxiliary data are overlaid on top of and removed from the video data (Paragraph 0090 teaches a media player that is operable to read the data structure 32 so that when the video is playing the specified effects are applied at the appropriate times. When the video is viewed by the end user, tablet 25 synchronously creates the high quality graphics, text and special effects via the media player 126. These effects are then overlaid by the multi-layer media player onto the raw video. Paragraph 0092 teaches raw video data along with associated control data being transmitted over network 30, where, when viewed by the end user using the media player 126 on tablets 25, the specified effects may be recreated locally. The high quality graphics, text and/or special effects are simultaneously overlaid onto the raw video data when viewed by the end user using the media player 126. Paragraph 0123 teaches the multi-layer media player interprets the control data and recreates, builds and assembles the specified effects and graphical elements locally for overlay onto the video data. Fig.7, Paragraph 0124 teaches how layering of specified graphics and effects is achieved. Layer 2 is to be applied during the time interval t(2) to t(4); Layer 3 is to be applied during the time interval t(3) to t(4), and so on, until the last layer, Layer n, is applied between t(n) and t(n+1). Paragraph 0128 teaches the control data includes machine readable markup language that represents video data elements in a textual format that the media player can interpret and compose at playback. Video data elements that are described by the VSML include but are not limited to textured blocks, text, images, downloadable video assets, streaming video assets and other graphical elements. Paragraph 0129 teaches VSML consists of a JSON representation of a video project, which is separated into segments, layers, and elements. An example given is a single video stream with an image watermark overlay in the top left corner of the screen that starts at time 3 and animates out after 10 seconds).
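For illustration only, the following minimal sketch shows how timed overlay control data of the general kind described in Lennon's Paragraphs 0124 and 0129 might be interpreted by a media player. The field names, structure, and values below are assumptions made solely for illustration; they are not taken from Lennon, from VSML, or from the claims.

# Purely illustrative sketch (not from Lennon): timed overlay elements and a
# minimal check of which overlays are active at a given playback time.
from dataclasses import dataclass

@dataclass
class OverlayElement:
    name: str     # e.g. "watermark"
    source: str   # hypothetical location from which the asset is fetched
    start: float  # playback time in seconds at which the overlay is applied
    end: float    # playback time in seconds at which the overlay is removed

# Hypothetical control data: a watermark appears at t = 3 s and is removed 10 s later.
control_data = [
    OverlayElement(name="watermark", source="https://example.com/logo.png", start=3.0, end=13.0),
]

def active_overlays(elements, playback_time):
    """Return the overlay elements to be composited at the given playback time."""
    return [e for e in elements if e.start <= playback_time < e.end]

print([e.name for e in active_overlays(control_data, 5.0)])   # ['watermark'] (overlaid)
print([e.name for e in active_overlays(control_data, 14.0)])  # [] (removed)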
Lennon does not explicitly teach wherein the control data defines one or more elements of auxiliary data including the elements of the auxiliary data which are to be retrieved from the primary data source;
wherein the primary data source, and the one or more secondary data sources, are mutually unaware of data provided by the other to the media player and, in operation, one or the other of the primary data source, or the one or more secondary data sources, provides personally identifiable information to the media player only upon consent by a user to whom the personally identifiable information pertains.
In an analogous art, Simpson teaches wherein control data defines one or more elements of auxiliary data including elements of the auxiliary data which are to be retrieved from a primary data source (Paragraph 0014 teaches embedded base programming identification metadata further includes one or more of program time codes, enhancement identifiers, or one or more Internet addresses. Paragraph 0026 teaches enhanced television content can consist of any of a number of components, such as traditional linear television, metadata and data objects. All of these components can be included as elements in the MPEG-2 Transport Stream that is carried across the broadcast path. Paragraph 0034 teaches path “B” uses the ATSC Mobile DTV transmission system to deliver the enhancements directly to a receiver tuner in the television itself. Paragraph 0040 teaches retuning to the appropriate frequency associated with the Base ID to find associated enhancement information. Paragraph 0048 teaches Base Programming Metadata may include other metadata elements such as Program Time Codes and enhancement synchronization, Enhancement identifiers, and URLs or other resource locators. Paragraph 0051 teaches Linear Programming Identification metadata could include a URL or other resource location mechanism that would enable the enhanced TV to find or retrieve enhancements to associate with the underlying Base Programming. Paragraph 0055 teaches Base Programming itself may encode certain enhancements within the data rate of the Enhancement Metadata encoding technology).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon to include wherein control data defines one or more elements of auxiliary data including elements of the auxiliary data which are to be retrieved from a primary data source, as taught by Simpson, for the advantage of providing a source that can be trusted and relied upon to provide necessary data elements when needed, especially the source that provided the main content, which would be familiar with the types of data elements needed.
Lennon and Simpson do not explicitly teach wherein the primary data source, and the one or more secondary data sources, are mutually unaware of data provided by the other to the media player and, in operation, one or the other of the primary data source, or the one or more secondary data sources, provides personally identifiable information to the media player only upon consent by a user to whom the personally identifiable information pertains.
In an analogous art, Bulkowski teaches wherein a primary data source, and one or more secondary data sources, are mutually unaware of data provided by the other to a media player (Paragraph 0011 teaches data can come from many different sources, and data from distributed and mutually unaware sources may be able to run in a single integrated environment. Paragraphs 0037 and 0039 teach retrieval of different types of data. Paragraph 0090 teaches data from multiple independent and mutually unaware sources can coexist in a single system, where data can be set up without any need for knowledge of data associated with anything else).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon and Simpson to include wherein a primary data source, and one or more secondary data sources, are mutually unaware of data provided by the other to a media player, as taught by Bulkowski, for the advantage of ensuring that data from distributed and mutually unaware sources is able to run in a single integrated environment (Bulkowski – Paragraph 0011), allowing for independent management of different sources/entities, while enabling the system to continue to disseminate and process data accordingly.
Lennon, Simpson, and Bulkowski do not explicitly teach in operation, one or the other of the primary data source, or the one or more secondary data sources, provides personally identifiable information to the media player only upon consent by a user to whom the personally identifiable information pertains.
In an analogous art, Hampson teaches in operation, one or the other of a primary data source, or one or more secondary data sources, provides personally identifiable information to a media player only upon consent by a user to whom the personally identifiable information pertains (Paragraph 0024, 0049, 0059, 0080).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon, Simpson, and Bulkowski to include in operation, one or the other of a primary data source, or one or more secondary data sources, provides personally identifiable information to a media player only upon consent by a user to whom the personally identifiable information pertains, as taught by Hampson, for the advantage of providing greater security over sensitive information and giving users control over their own information, including the freedom to approve its provision for use.
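For illustration only, the following minimal sketch shows the kind of arrangement addressed by this limitation: a media player orchestrating two sources that each see only their own request, with personally identifiable information released only upon user consent. The source names, functions, and data below are hypothetical and are not drawn from any of the cited references or from the claims.

# Hypothetical sketch only: each source handles only its own request and has
# no knowledge of what the other source supplies to the media player.
def fetch_primary_asset(asset_id):
    # The primary source returns only the asset identified by the control data.
    return {"asset_id": asset_id, "payload": "overlay graphic bytes"}

def fetch_secondary_pii(user_record, consent_granted):
    # The secondary source releases personally identifiable information only
    # when the user to whom it pertains has given consent.
    if not consent_granted:
        return None
    return {"location": user_record.get("location")}

def assemble_auxiliary_data(control_data, user_record, consent_granted):
    # The media player alone combines the contributions of the two sources.
    auxiliary = {"overlay": fetch_primary_asset(control_data["asset_id"])}
    pii = fetch_secondary_pii(user_record, consent_granted)
    if pii is not None:
        auxiliary["personalisation"] = pii
    return auxiliary

user_record = {"location": "Dublin"}
control_data = {"asset_id": "watermark-01"}
print(assemble_auxiliary_data(control_data, user_record, consent_granted=False))  # no PII included
print(assemble_auxiliary_data(control_data, user_record, consent_granted=True))   # PII included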
Consider claim 46, Lennon teaches a system for distributing video content across a network (Fig.1), the system comprising:
a primary data source (Control Centre 20-Fig.1);
one or more user devices (end user smart device 25-Fig.1); and
one or more secondary data sources (Paragraph 0091);
wherein the primary data source is configured to associate control data to video data provided to the primary data source (Paragraph 0088 teaches that a user who creates the video is able to add auxiliary data such as effects, but when the data is being transmitted across a network 30, only the raw video data is sent together with some control information in the form of metadata. Paragraph 0090 teaches that, at the creator, a data structure is generated comprising the video data and the control data. The data structure is transmitted to the control centre. Paragraph 0091 teaches control centre 20 may alter the associated control data identifying the specified effects to be applied to the raw video data if desired. Paragraph 0092 teaches the synchronization process enables raw video data to be transmitted quickly and unimpeded through the network 30 along with the associated control data);
wherein the primary data source is configured to broadcast the video data and associated control data for receipt by the one or more user devices; wherein each user device contains a media player provided thereon which is configured to locally interpret the control data during playback of the video data and to create, in real-time and locally upon the respective user device, auxiliary data defined by the control data while the video data is being played on the user device, wherein the created auxiliary data is overlaid on top of the video data and removed from the video data in accordance with timing instructions defined by the control data (Paragraph 0008 teaches broadcasting the video data and control data to one or more second devices across the network. Paragraph 0088 teaches the creator adds auxiliary data such as graphics, customizable text overlays, special effects, audio, etc., but when transmitted over the network only the raw video data is sent together with some control information in the form of metadata. Paragraph 0090 teaches that, at the creator, a data structure is generated comprising the video data and the control data. The data structure is transmitted to the control centre. A media player is operable to read the data structure 32 so that when the video is playing the specified effects are applied at the appropriate times. When the video is viewed by the end user, tablet 25 synchronously creates the high quality graphics, text and special effects via the media player 126. These effects are then overlaid by the multi-layer media player onto the raw video. Paragraph 0091 teaches a second application creates the specified effects locally on the device. The second application can determine which specified effects to create locally on the tablet and, if required, which to fetch from local storage or remotely. The fetch process creates the specified effects in real time and fetches any assets that may be associated with the specified effects as instructed by the control data. Paragraph 0092 teaches raw video data along with associated control data being transmitted over network 30, where, when viewed by the end user using the media player 126 on tablets 25, the specified effects may be recreated locally. The high quality graphics, text and/or special effects are simultaneously overlaid onto the raw video data when viewed by the end user using the media player 126. Paragraph 0123 teaches the media player within the second application receives the video data and the associated control data, at step 302. The multi-layer media player interprets the control data, at step 303, and recreates, builds and assembles the specified effects and graphical elements locally for overlay onto the video data. Fig.7, Paragraph 0124 teaches how layering of specified graphics and effects is achieved. Layer 2 is to be applied during the time interval t(2) to t(4); Layer 3 is to be applied during the time interval t(3) to t(4), and so on, until the last layer, Layer n, is applied between t(n) and t(n+1). Paragraph 0128 teaches the control data includes machine readable markup language that represents video data elements in a textual format that the media player can interpret and compose at playback. Video data elements that are described by the VSML include but are not limited to textured blocks, text, images, downloadable video assets, streaming video assets and other graphical elements. Paragraph 0129 teaches VSML consists of a JSON representation of a video project, which is separated into segments, layers, and elements. An example given is a single video stream with an image watermark overlay in the top left corner of the screen that starts at time 3 and animates out after 10 seconds. Paragraph 0158 teaches the second smart device reads the control data, described in VSML, which specifies the manner in which effects, text, animation, etc. should be played back as a video package);
wherein the control data defines one or more elements of the auxiliary data to be created by the media player locally on the user devices including elements of the auxiliary data which are to be retrieved from the one or more secondary data sources (Paragraph 0088 teaches the creator adds auxiliary data such as graphics, customizable text overlays, special effects, audio, etc., but when transmitted over the network only the raw video data is sent together with some control information in the form of metadata. Paragraph 0090 teaches that, at the creator, a data structure is generated comprising the video data and the control data. The data structure is transmitted to the control centre. Paragraph 0091 teaches a second application creates the specified effects locally on the device. The second application can determine which specified effects to create locally on the tablet and, if required, which to fetch from local storage or remotely. The fetch process creates the specified effects in real time and fetches any assets that may be associated with the specified effects as instructed by the control data. Paragraph 0092 teaches raw video data along with associated control data being transmitted over network 30, where, when viewed by the end user using the media player 126 on tablets 25, the specified effects may be recreated locally. Paragraph 0123 teaches the media player within the second application receives the video data and the associated control data, at step 302. The multi-layer media player interprets the control data, at step 303. Paragraph 0158 teaches the second smart device reads the control data, described in VSML, which specifies the manner in which effects, text, animation, etc. should be played back as a video package); and
wherein the control data defines specific times when the video data is played when the one or more elements of the auxiliary data are overlaid on top of and removed from the video data (Paragraph 0090 teaches a media player that is operable to read the data structure 32 so that when the video is playing the specified effects are applied at the appropriate times. When the video is viewed by the end user, tablet 25 synchronously creates the high quality graphics, text and special effects via the media player 126. These effects are then overlaid by the multi-layer media player onto the raw video. Paragraph 0092 teaches raw video data along with associated control data being transmitted over network 30, where, when viewed by the end user using the media player 126 on tablets 25, the specified effects may be recreated locally. The high quality graphics, text and/or special effects are simultaneously overlaid onto the raw video data when viewed by the end user using the media player 126. Paragraph 0123 teaches the multi-layer media player interprets the control data and recreates, builds and assembles the specified effects and graphical elements locally for overlay onto the video data. Fig.7, Paragraph 0124 teaches how layering of specified graphics and effects is achieved. Layer 2 is to be applied during the time interval t(2) to t(4); Layer 3 is to be applied during the time interval t(3) to t(4), and so on, until the last layer, Layer n, is applied between t(n) and t(n+1). Paragraph 0128 teaches the control data includes machine readable markup language that represents video data elements in a textual format that the media player can interpret and compose at playback. Video data elements that are described by the VSML include but are not limited to textured blocks, text, images, downloadable video assets, streaming video assets and other graphical elements. Paragraph 0129 teaches VSML consists of a JSON representation of a video project, which is separated into segments, layers, and elements. An example given is a single video stream with an image watermark overlay in the top left corner of the screen that starts at time 3 and animates out after 10 seconds).
Lennon does not explicitly teach wherein the one or more secondary data sources, and the primary data source, are each mutually unaware of data provided by the other to the media player and, in operation, one or the other of the primary data source, or the one or more secondary data sources, provides personally identifiable information to the media player only upon consent by a user to whom the personally identifiable information pertains;
wherein the control data defines one or more elements of the auxiliary data including elements of the auxiliary data which are to be retrieved from the primary data source.
In an analogous art, Simpson teaches wherein control data defines one or more elements of auxiliary data including elements of the auxiliary data which are to be retrieved from a primary data source (Paragraph 0014 teaches embedded base programming identification metadata further includes one or more of program time codes, enhancement identifiers, or one or more Internet addresses. Paragraph 0026 teaches enhanced television content can consist of any of a number of components, such as traditional linear television, metadata and data objects. All of these components can be included as elements in the MPEG-2 Transport Stream that is carried across the broadcast path. Paragraph 0034 teaches path “B” uses the ATSC Mobile DTV transmission system to deliver the enhancements directly to a receiver tuner in the television itself. Paragraph 0040 teaches retuning to the appropriate frequency associated with the Base ID to find associated enhancement information. Paragraph 0048 teaches Base Programming Metadata may include other metadata elements such as Program Time Codes and enhancement synchronization, Enhancement identifiers, and URLs or other resource locators. Paragraph 0051 teaches Linear Programming Identification metadata could include a URL or other resource location mechanism that would enable the enhanced TV to find or retrieve enhancements to associate with the underlying Base Programming. Paragraph 0055 teaches Base Programming itself may encode certain enhancements within the data rate of the Enhancement Metadata encoding technology).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon to include wherein control data defines one or more elements of auxiliary data including elements of the auxiliary data which are to be retrieved from a primary data source, as taught by Simpson, for the advantage of providing a source that can be trusted and relied upon to provide necessary data elements when needed, especially the source that provided the main content, which would be familiar with the types of data elements needed.
Lennon and Simpson do not explicitly teach wherein the one or more secondary data sources, and the primary data source, are each mutually unaware of data provided by the other to the media player and, in operation, one or the other of the primary data source, or the one or more secondary data sources, provides personally identifiable information to the media player only upon consent by a user to whom the personally identifiable information pertains.
In an analogous art, Bulkowski teaches wherein one or more secondary data sources, and a primary data source, are each mutually unaware of data provided by the other to a media player (Paragraph 0011 teaches data can come from many different sources, and data from distributed and mutually unaware sources may be able to run in a single integrated environment. Paragraphs 0037 and 0039 teach retrieval of different types of data. Paragraph 0090 teaches data from multiple independent and mutually unaware sources can coexist in a single system, where data can be set up without any need for knowledge of data associated with anything else).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon and Simpson to include wherein one or more secondary data sources, and a primary data source, are each mutually unaware of data provided by the other to a media player, as taught by Bulkowski, for the advantage of ensuring that data from distributed and mutually unaware sources is able to run in a single integrated environment (Bulkowski – Paragraph 0011), allowing for independent management of different sources/entities, while enabling the system to continue to disseminate and process data accordingly.
Lennon, Simpson, and Bulkowski do not explicitly teach in operation, one or the other of the primary data source, or the one or more secondary data sources, provides personally identifiable information to the media player only upon consent by a user to whom the personally identifiable information pertains.
In an analogous art, Hampson teaches in operation, one or the other of a primary data source, or one or more secondary data sources, provides personally identifiable information to a media player only upon consent by a user to whom the personally identifiable information pertains (Paragraph 0024, 0049, 0059, 0080).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon, Simpson, and Bulkowski to include in operation, one or the other of a primary data source, or one or more secondary data sources, provides personally identifiable information to a media player only upon consent by a user to whom the personally identifiable information pertains, as taught by Hampson, for the advantage of providing greater security over sensitive information and giving users control over their own information, including the freedom to approve its provision for use.
Consider claim 25, Lennon, Simpson, Bulkowski, and Hampson teach wherein the control data comprises metadata (Lennon – Paragraph 0088, 0090-0091).
Consider claim 26, Lennon, Simpson, Bulkowski, and Hampson teach wherein the control data comprises a data interchange format and/or data storage format (Lennon - Paragraph 0090, 0128-0129).
Consider claim 27, Lennon, Simpson, Bulkowski, and Hampson teach wherein the control data contains instructions defining the elements of the auxiliary data, the elements of the auxiliary data comprising (Lennon - Paragraph 0088, 0090, 0123, 0128) one or more of:
a layout of the auxiliary data relative to the video data;
one or more types of auxiliary data to be provided relative to the video data (Lennon - Paragraph 0128-0129, 0158);
at least a first location from which the auxiliary data is to be retrieved from the primary data source and/or secondary data sources;
and/or an action to be performed to the auxiliary data when the video playback is ended.
Consider claim 28, Lennon, Simpson, Bulkowski, and Hampson teach wherein the action to be performed to the auxiliary data when the video playback is ended comprises ceasing the creation of the auxiliary data locally on the media player (Lennon – Fig.7, Paragraph 0124, 0129).
Consider claim 30, Lennon, Simpson, Bulkowski, and Hampson teach wherein the one or more types of auxiliary data are provided at different times during playback of the video data (Lennon - Paragraph 0124, 0129, 0158).
Consider claim 31, Lennon, Simpson, Bulkowski, and Hampson teach wherein the one or more types of auxiliary data comprise one or more of: customisable text overlays; graphics; sounds; secondary video data; special effects; and/or live feeds or displays of information (Lennon - Paragraph 0088, 0090, 0123, 0128).
Consider claim 43, Lennon, Simpson, Bulkowski, and Hampson teach wherein the primary data source and/or secondary data sources comprise a cloud and/or local server architecture and/or an API service and/or any data storage format file and/or json file and/or a computing device and/or any data storage format or other suitable data source (Simpson – Paragraph 0014, 0040, 0051, 0053; Lennon – Paragraph 0128-0129).
Consider claim 44, Lennon, Simpson, Bulkowski, and Hampson teach wherein the media player is configured to create and synchronise the auxiliary data in real time with the video data whilst the video data is played on the user device (Lennon – Paragraph 0090-0091).
Consider claim 45, Lennon, Simpson, Bulkowski, and Hampson teach wherein the user devices comprise a smartphone, tablet, laptop or any other suitable computing device (Lennon – Paragraph 0088).
Claim 29 is rejected under 35 U.S.C. 103 as being unpatentable over Lennon et al. (US 2016/0277781), in view of Simpson et al. (US 2011/0115977), in view of Bulkowski et al. (US 2004/0034875), in view of Hampson et al. (US 2016/0234553), and further in view of Reznik et al. (US 2014/0019635).
Consider claim 29, Lennon, Simpson, Bulkowski, and Hampson teach wherein the control data further defines a location from which the auxiliary data is to be retrieved from the primary data source and/or secondary data sources (Simpson – Paragraph 0014, 0048, 0051), but do not explicitly teach defining a second location from which data is to be retrieved if the data is not available at said first location.
In an analogous art, Reznik teaches defining a second location from which data is to be retrieved if the data is not available at said first location (Paragraph 0120).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon, Simpson, Bulkowski, and Hampson to include defining a second location from which data is to be retrieved if the data is not available at said first location, as taught by Reznik, for the advantage of providing redundancy to the delivery of data/content (Reznik – Paragraph 0120).
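For illustration only, the following minimal sketch shows a fallback retrieval pattern of the general kind addressed by this limitation; the function and location names are hypothetical and are not taken from Reznik or from the claims.

# Hypothetical sketch only: retrieve data from a first location, falling back
# to a second location if the data is not available at the first.
def fetch(location, store):
    """Stand-in for a network fetch; returns None when the data is unavailable."""
    return store.get(location)

def retrieve_with_fallback(first_location, second_location, store):
    data = fetch(first_location, store)
    if data is None:
        # The second location, defined by the control data, provides redundancy.
        data = fetch(second_location, store)
    return data

store = {"https://backup.example.com/asset": "overlay asset"}
print(retrieve_with_fallback("https://primary.example.com/asset",
                             "https://backup.example.com/asset", store))  # falls back to the second location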
Claims 32 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Lennon et al. (US 2016/0277781), in view of Simpson et al. (US 2011/0115977), in view of Bulkowski et al. (US 2004/0034875), in view of Hampson et al. (US 2016/0234553), and further in view of Lundy et al. (US 2008/0092162).
Consider claim 32, Lennon, Simpson, Bulkowski, and Hampson teach auxiliary data (Lennon – Paragraph 0088, 0090, 0092, 0128-0129), but do not explicitly teach that the data comprises user specific data, wherein the user specific data comprises the personally identifiable information.
In an analogous art, Lundy teaches data comprises user specific data, wherein the user specific data comprises personally identifiable information (Figs.3&4, Paragraph 0048, 0051, 0059 teaches advertisements that comprise user location information).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon, Simpson, Bulkowski, and Hampson to include data comprises user specific data, wherein the user specific data comprises personally identifiable information, as taught by Lundy, for the advantage of providing information that is relevant and pertinent to user(s) of the device.
Consider claim 33, Lennon, Simpson, Bulkowski, Hampson, and Lundy teach wherein the personally identifiable information comprises one or more of: user location (Lundy – Paragraph 0048, 0051, 0059); user age; user gender; user interests or hobbies; user language; user search history; user web history and/or any other suitable user specific information.
Claim 34 is rejected under 35 U.S.C. 103 as being unpatentable over Lennon et al. (US 2016/0277781), in view of Simpson et al. (US 2011/0115977), in view of Bulkowski et al. (US 2004/0034875), in view of Hampson et al. (US 2016/0234553), in view of Lundy et al. (US 2008/0092162), and further in view of Gurha (US 2016/0007083).
Consider claim 34, Lennon, Simpson, Bulkowski, Hampson, and Lundy do not explicitly teach wherein the user specific data is stored upon one or more of the secondary data sources and/or the primary data source and/or user device and/or media player.
In an analogous art, Gurha teaches wherein user specific data is stored upon one or more of secondary data sources (Paragraph 0177, 0174) and/or the primary data source and/or user device and/or media player.
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon, Simpson, Bulkowski, Hampson, and Lundy to include wherein user specific data is stored upon one or more of the secondary data sources and/or the primary data source and/or user device and/or media player, as taught by Gurha, for the advantage of providing other/alternative sources that carry desired information, enabling it to be accessed from various devices.
Claim 35 is rejected under 35 U.S.C. 103 as being unpatentable over Lennon et al. (US 2016/0277781), in view of Simpson et al. (US 2011/0115977), in view of Bulkowski et al. (US 2004/0034875), in view of Hampson et al. (US 2016/0234553), in view of Lundy et al. (US 2008/0092162), and further in view of Walker (US 2015/0256903).
Consider claim 35, Lennon, Simpson, Bulkowski, Hampson, and Lundy do not explicitly teach wherein the secondary data sources from which the media player is configured to retrieve the one or more elements of the auxiliary data to be created by the media player are determined based on one or more elements of the user specific data.
In an analogous art, Walker teaches wherein secondary data sources from which a media player is configured to retrieve one or more elements of the auxiliary data to be created by the media player are determined based on one or more elements of user specific data (Paragraph 0047, 0063-0064, 0067).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon, Simpson, Bulkowski, Hampson, and Lundy to include wherein secondary data sources from which a media player is configured to retrieve one or more elements of the auxiliary data to be created by the media player are determined based on one or more elements of user specific data, as taught by Walker, for the advantage of acquiring data from appropriate sources, in order to provide relevant content that is of use/interest to the user.
Claims 36-38 are rejected under 35 U.S.C. 103 as being unpatentable over Lennon et al. (US 2016/0277781), in view of Simpson et al. (US 2011/0115977), in view of Bulkowski et al. (US 2004/0034875), in view of Hampson et al. (US 2016/0234553), and further in view of Gurha (US 2016/0007083).
Consider claim 36, Lennon, Simpson, Bulkowski, and Hampson teach wherein prior to creating the auxiliary data locally on the respective user devices while the media player is playing the video data locally on the respective user devices, the method further comprises the media player retrieving the auxiliary data from the secondary data sources (Lennon – Paragraph 0091-0092), but do not explicitly teach authenticating the device with the secondary data sources to allow for the device to retrieve the auxiliary data from the secondary data sources.
In an analogous art, Gurha teaches authenticating a device with one or more secondary data sources to allow for the device to retrieve auxiliary data from the secondary data sources (Paragraph 0180, 0192).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon, Simpson, Bulkowski, and Hampson to include authenticating a device with one or more secondary data sources to allow for the device to retrieve auxiliary data from the secondary data sources, as taught by Gurha, for the advantage of providing an added level of security and access control when trying to obtain more sensitive and/or personal information, in order to ensure access is given to the proper device/entity.
Consider claim 37, Lennon, Simpson, Bulkowski, Hampson, and Gurha teach wherein authenticating the media player with the secondary data sources comprises requesting the user to provide their consent for the media player to retrieve one or more elements of the auxiliary data from one or more of the secondary data sources (Lennon - Paragraph 0091-0092; Gurha - Paragraph 0180, 0192).
Consider claim 38, Lennon, Simpson, Bulkowski, Hampson, and Gurha teach wherein authenticating the media player with the secondary data sources comprises verifying that the user has previously provided their consent for the media player to retrieve one or more elements of the auxiliary data from one or more of the secondary data sources (Lennon - Paragraph 0091-0092; Gurha – Paragraph 0192, 0206).
Claims 39-42 are rejected under 35 U.S.C. 103 as being unpatentable over Lennon et al. (US 2016/0277781), in view of Simpson et al. (US 2011/0115977), in view of Bulkowski et al. (US 2004/0034875), in view of Hampson et al. (US 2016/0234553), in view of Gurha (US 2016/0007083), and further in view of Wick et al. (US 7,669,213).
Consider claim 39, Lennon, Simpson, Bulkowski, Hampson, and Gurha teach wherein the control data (Lennon – Paragraph 0090, 0128-0129) contains instructions defining what action is to be performed (Lennon – Paragraph 0128, 0158), but do not explicitly teach what action is to be performed if the user's consent is not obtained or verified.
In an analogous art, Wick teaches what action is to be performed if a user's consent is not obtained or verified (Col 8: lines 55-65, Col 11: lines 50-53).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Lennon, Simpson, Bulkowski, Hampson, and Gurha to include what action is to be performed if the user's consent is not obtained or verified, as taught by Wick, for the advantage of enabling the system to take alternative steps when permission is not granted by the user.
Consider claim 40, Lennon, Simpson, Bulkowski, Hampson, Gurha, and Wick teach wherein the control data indicates that the video playback on the user device is not to occur on the media player or that pre-defined auxiliary data is to be created during playback of the video on the media player (Lennon – Paragraph 0091-0092, 0123-0124, 0158).
Consider claim 41, Lennon, Simpson, Bulkowski, Hampson, Gurha, and Wick teach wherein the type of pre-defined auxiliary data to be created is defined in the control data (Lennon – Paragraph 0091-0092, 0123-0124, 0158).
Consider claim 42, Lennon, Simpson, Bulkowski, Hampson, Gurha, and Wick teach wherein the pre-defined auxiliary data is retrieved from the primary data source (Simpson – Paragraph 0026, 0040, 0055; Lennon – Paragraph 0091-0092, 0123-0124, 0158).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON K LIN whose telephone number is (571)270-1446. The examiner can normally be reached on Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached on 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASON K LIN/Primary Examiner, Art Unit 2425