DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is in reply to a Request for Continued Examination filed on February 5, 2026 regarding Application No. 18/739,155. Applicants amended claims 1, 4-5, 8-10, 15-16, and 19-20 and canceled claims 3 and 14. Claims 1-2, 4-13, and 15-20 are pending.
Note: paragraphs [0035]-[0040], which correspond to figure 3, are cited as support for the amendments (Remarks, p. 6). However, it appears that the amendments may instead find support in figures 1-2 and paragraph [0030] of the Specification as filed.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicants’ submission filed on February 5, 2026 has been entered.
Response to Arguments
Applicants’ arguments filed on February 5, 2026 have been fully considered but they are not persuasive.
In response to the argument regarding “receiving, from one or more image sources, one or more streams of pixel data encoded according to a first video coding format associated with a plurality of displays[,] decoding the one or more streams of pixel data as decoded pixel data[,] storing the decoded pixel data as a plurality of frames in a plurality of frame buffers associated with the plurality of displays, respectively[, and] encoding the plurality of frames of decoded pixel data stored in the plurality of frame buffers according to a second video coding format associated with a docking station coupled to the plurality of displays” and “any language in any references” (Remarks, p. 8), the Office respectfully disagrees and submits that the recited features are taught and/or suggested by Janus as modified by Luo and Malemezian.
More specifically, figures 1-2 and 4-6 and paragraphs [0015]-[0016], [0019], [0036], [0041], [0044], [0046], and [0053] of Janus teach: receiving, from one or more image sources 630 and 640, one or more streams of pixel data, e.g., streaming video, associated with a plurality of displays 108 and 110; storing the received pixel data as a plurality of frames in a plurality of frame buffers 112 and 114 associated with the plurality of displays 108 and 110, respectively. Also, figures 1-3 and 11 and paragraphs [0001], [0028]-[0031], [0034]-[0035], [0037]-[0039], [0079], [0102], [0118], [0123], [0128], and [0135] of Luo teach: receiving, from one or more image sources 1130 and 1140, one or more streams of pixel data, e.g., streaming video, encoded according to a first video coding format associated with a plurality of displays of 116-122; decoding the one or more streams of pixel data as decoded pixel data; storing the decoded pixel data as a plurality of frames in a frame buffer; encoding the plurality of frames of decoded pixel data stored in the frame buffer according to a second video coding format. Note that Luo discloses transmission of compressed or encoded pixel data over a transmission channel to a device that includes a transcoder. The transcoder includes a decoder that decompresses the pixel data. After decompression, the transcoder stores the pixel data. The transcoder also includes an encoder that re-compresses and formats the pixel data for transmission to other device(s). Paragraph [0001] of Luo. See also paragraph [0030] of the instant application (“… [P]ixel data is often encoded or compressed for transmission over a communication channel.”). In addition, figures 1-2 and 8 and paragraphs [0026], [0031], and [0070] of Malemezian teach: a docking station 802 coupled to a plurality of displays 804 and 806.
Thus, Janus as modified by Luo and Malemezian teaches and/or suggests: receiving, from one or more image sources, one or more streams of pixel data encoded according to a first video coding format associated with a plurality of displays (receiving, one or more image sources, one or more streams of pixel data, and associated with a plurality of displays of Janus combined with receiving, one or more image sources, one or more streams of pixel data, encoded according to a first video coding format, and associated with a plurality of displays of Luo); decoding the one or more streams of pixel data as decoded pixel data (one or more streams of pixel data of Janus combined with decoding, one or more streams of pixel data, and decoded pixel data of Luo); storing the decoded pixel data as a plurality of frames in a plurality of frame buffers associated with the plurality of displays, respectively (storing, received pixel data, plurality of frames, plurality of frame buffers associated, and plurality of displays of Janus combined with storing, decoded pixel data, plurality of frames, frame buffer, associated, and plurality of displays of Luo); encoding the plurality of frames of decoded pixel data stored in the plurality of frame buffers according to a second video coding format associated with a docking station coupled to the plurality of displays (plurality of frames of received pixel data, stored, plurality of frame buffers, and plurality of displays of Janus combined with encoding, plurality of frames of decoded pixel data, stored, frame buffer, second video coding format of Luo combined with the docking station, coupled, and plurality of displays of Malemezian).
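For illustration only, and not as part of the record or any cited reference, the combined receive/decode/buffer/re-encode pipeline summarized above can be sketched as follows. All names (decode, encode, FrameBuffer, transcode_for_dock) and the byte-string "codecs" are hypothetical stand-ins for real video codecs and hardware:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical stand-ins for real codecs; names are illustrative only.
def decode(stream: bytes) -> List[bytes]:
    """Pretend decoder: split a stream 'encoded' in the first format into raw frames."""
    return stream.split(b"|")

def encode(frame: bytes, fmt: str) -> bytes:
    """Pretend encoder: tag a raw frame with the second (docking-station) format."""
    return fmt.encode() + b":" + frame

@dataclass
class FrameBuffer:
    """One frame buffer per display, as recited in the claim."""
    display_id: int
    frames: List[bytes] = field(default_factory=list)

def transcode_for_dock(streams: List[bytes], num_displays: int,
                       dock_fmt: str) -> List[List[bytes]]:
    buffers = [FrameBuffer(i) for i in range(num_displays)]
    for i, stream in enumerate(streams):
        # Decode the stream received in the first video coding format...
        decoded = decode(stream)
        # ...and store the decoded frames in the buffer for the associated display.
        buffers[i % num_displays].frames.extend(decoded)
    # Re-encode each buffer's frames in the docking station's format for output.
    return [[encode(f, dock_fmt) for f in buf.frames] for buf in buffers]
```

The sketch only mirrors the claim's data flow (decode into per-display buffers, then re-encode for the dock); it does not reflect the internal operation of Janus, Luo, or Malemezian.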
In response to the argument regarding claim 1, Janus, “the data stored in the frame buffers is derived from decoding a stream of pixel data”, and “‘decoding the one or more streams of pixel data ... as decoded pixel data’ and ‘storing the decoded pixel data as a plurality of frames in a plurality of frame buffers’” (Remarks, pp. 8-9), the Office respectfully submits that the argument is not commensurate with the rejections and one cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Also, the Office respectfully submits that Janus teaches the features discussed above and in the rejections and the relevant claimed features are taught and/or suggested by the cited references, as discussed above and in the rejections.
In response to the argument regarding claim 1 that Luo fails to cure the alleged deficiencies and does not teach “aggregating the output of the decoder into a plurality of frame buffers associated with a plurality of displays” (Remarks, pp. 8-9), the Office respectfully submits that the argument is not commensurate with the rejections and one cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Also, the Office respectfully submits that Luo teaches the features discussed above and in the rejections and the relevant claimed features are taught and/or suggested by the cited references, as discussed above and in the rejections.
In response to the argument regarding claim 1, Janus and Luo, “decoding the one or more streams of pixel data as decoded pixel data”, and “storing the decoded pixel data as a plurality of frames in a plurality of frame buffers associated with the plurality of displays, respectively” (Remarks, p. 9), the Office respectfully disagrees and submits that the recited features are taught and/or suggested by Janus as modified by Luo, as discussed above and in the rejections.
In response to the argument regarding Janus and “encoding the data from the frame buffers” (emphasis in original) (Remarks, p. 9), the Office respectfully submits that Janus teaches the features discussed above and in the rejections, the argument is not commensurate with the rejections, and one cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
In response to the argument that Luo fails to cure the alleged deficiencies and does not teach “retrieving and encoding data from a plurality of frame buffers (associated with a plurality of displays)” (Remarks, p. 9), the Office respectfully submits that Luo teaches the features discussed above and in the rejections, the argument is not commensurate with the rejections, and one cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
In response to the arguments regarding claim 1, Janus and Luo, “retrieving data from Janus' buffers to be displayed, not encoded” (emphasis in original), and “encoding the plurality of frames of decoded pixel data stored in the plurality of frame buffers according to a second video coding format” (Remarks, p. 9), the Office respectfully disagrees and submits that the recited features are taught and/or suggested by Janus as modified by Luo, as discussed above and in the rejections.
In response to the argument that the other cited references fail to cure the alleged deficiencies of Janus and Luo and that claim 1 is patentable over the cited references (Remarks, p. 9), the Office respectfully disagrees and submits that Janus and Luo teach the features discussed above and in the rejections and all features of newly amended independent claim 1 are taught and/or suggested by the cited references, as also discussed above and in the rejections. As such, newly amended independent claim 1 is not allowable.
In response to the argument regarding dependent claims 2 and 4-9 and patentable over the cited references (Remarks, p. 9), the Office respectfully disagrees and submits that all features of newly amended independent claim 1 are taught and/or suggested by the cited references, as discussed above and in the rejections. As such, newly amended independent claim 1 is not allowable. In addition, claims 2 and 4-9 are not allowable by virtue of their respective dependence from newly amended independent claim 1 and as discussed in the rejections.
In response to the argument regarding newly amended independent claim 10 and patentable over the cited references (Remarks, pp. 9-10), the Office respectfully disagrees and submits that all features of newly amended independent claim 1, and similarly for newly amended independent claim 10, are taught and/or suggested by the cited references, as discussed above and in the rejections. As such, newly amended independent claims 1 and 10 are not allowable.
In response to the argument regarding dependent claims 11-13 and 15-20 and patentable over the cited references (Remarks, p. 10), the Office respectfully disagrees and submits that all features of newly amended independent claim 1, and similarly for newly amended independent claim 10, are taught and/or suggested by the cited references, as discussed above and in the rejections. As such, newly amended independent claims 1 and 10 are not allowable. In addition, claims 11-13 and 15-20 are not allowable by virtue of their respective dependence from newly amended independent claim 10 and as discussed in the rejections.
For the reasons discussed above and in the rejections, pending claims 1-2, 4-13, and 15-20 are not allowable.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicants are advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 4-5, 8-10, 12, 15-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Janus et al. (US 2014/0015816 A1; hereinafter Janus) in view of Luo et al. (US 2019/0261010 A1; hereinafter Luo), and further in view of Malemezian et al. (US 2018/0191996 A1; hereinafter Malemezian).
Regarding claim 1, Janus teaches:
A method (300 in FIG. 3) of processing content (first and second image content in FIG. 3) for multiple displays (108 and 110 in FIG. 1) by a mobile computing device (700 in FIG. 7), comprising (Janus: FIGs. 1, 3, and 7, “[0024] FIG. 3 illustrates a flow diagram of an example process 300 according to various implementations of the present disclosure….”, “[0025] Process 300 may begin at block 302 where first image content may be rendered…. In various implementations, block 302 may involve display engine 104 accessing frame buffer 112 to obtain image data and to render that data into image content to be provided to display 108 as frame data 116….”, “[0026] At block 304 the second image content may be rendered…. In various implementations, block 304 may involve display engine 104 accessing frame buffer 114 to obtain image data and to render that data into image content to be provided to display 110 as frame data 118….”, “[0035] FIG. 6 illustrates an example system 600 in accordance with the present disclosure. In various implementations, system 600 may be a media system although system 600…. For example, system 600 may be incorporated into a… smart device (e.g., smart phone…)….”, “[0054] As described above, system 600 may be embodied in varying physical styles or form factors. FIG. 7 illustrates implementations of a small form factor device 700 in which system 600 may be embodied…. [D]evice 700 may be implemented as a mobile computing device having wireless capabilities….”, and “[0055] As described above, examples of a mobile computing device may include a… smart device (e.g., smart phone…)….”, see also FIGs. 2 and 6):
receiving, from one or more image sources (630 and 640 in FIG. 6), one or more streams of pixel data (e.g., streaming video) associated with a plurality of displays (108 and 110 in FIG. 1) (Janus: FIGs. 1 and 6, “[0016]… [I]n response to DCLK signal 120, displays 108 and 110 may display frame data 116 and 118, respectively, at a rate of sixty (60) hertz. For example, display 108 may display all pixels of frame data 116 sixty times a second or once every one sixtieth ( 1/60) of a second by displaying one pixel of frame data 116 at a real time rate of one pixel per pulse of DCLK signal 120.”, “[0036]… [S]ystem 600 includes a platform 602 coupled to a display 620. Platform 602 may receive content from a content device such as content services device(s) 630 or content delivery device(s) 640 or other similar content sources….”, “[0046]… Examples of content may include any media information including, for example, video, music, medical and gaming information, and so forth.”, and “[0053] Platform 602 may establish one or more logical or physical channels to communicate information. The information may include media information…. Media information may refer to any data representing content meant for a user. Examples of content may include, for example,… videoconference, streaming video,… and so forth.”, see also FIGs. 4-5, [0019], and “[0041] Graphics subsystem 615 may perform processing of images such as… video for display…. An analog or digital interface may be used to communicatively couple graphics subsystem 615 and display 620. For example, the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques….”, and [0044]);
storing the received pixel data as a plurality of frames in a plurality of frame buffers (112 and 114 in FIG. 1) associated with the plurality of displays, respectively (Janus: see FIG. 1 and “[0015]… [M]emory 106 may contain two frame buffers 112 (frame buffer A) and 114 (frame buffer B) that store image data corresponding to frame data 116 (frame data A) and 118 (frame data B), respectively…. [F]rame data 116 may correspond to image content to be displayed on display 108, while frame data 118 may correspond to different image content to be displayed on display 110.”, see also FIGs. 2 and 4-5, [0046], and [0053].).
However, it is noted that Janus does not teach:
said receiving, from the one or more image sources, the one or more streams of pixel data encoded according to a first video coding format associated with the plurality of displays;
decoding the one or more streams of pixel data as decoded pixel data;
said storing the decoded pixel data as a plurality of frames in the plurality of frame buffers associated with the plurality of displays, respectively;
encoding the plurality of frames of decoded pixel data stored in the plurality of frame buffers according to a second video coding format associated with a docking station coupled to the plurality of displays.
Luo teaches:
receiving, from one or more image sources (1130 and 1140 in FIG. 11), one or more streams of pixel data (e.g., streaming video) encoded according to a first video coding format associated with a plurality of displays (of 116-122 in FIG. 1) (Luo: FIGs. 1 and 11, “[0030]… [A] transcoder 110… that has a decoder unit 112 that decodes compressed image data of frames of a frame sequence of a video (also referred to herein as a video sequence) at the remote server 106, and then uses an encoder unit 114 to encode the video sequence formatted to be compatible with multiple end devices for display of the video sequence. In one form, the transcoder 110 is located at the site of the server 106 (or is on server 106) and transmits the multiple encoded frame sequences, whether as a single bitstream or multiple bitstreams, to end devices. By other forms, the transcoder may be at the location of the end devices, where for example, the transcoder may be part of a business or residential gateway, set-top box (cable box), and so forth that then transmits multiple encoded video sequences of different formats to different devices. The transcoder may even be located on one of the end devices itself such as a smartphone to form a personal area network (PAN). Such a transcoder may receive a single compressed version of a video sequence, and then provides bitstreams of the video sequence re-compressed in multiple different formats. By some example arrangements, a large screen television 116 may receive the video sequence formatted for HEVC 4K or 8K 60 fps video, a smartphone 118 may receive the video sequence formatted for HEVC 720p 30 fps video, a tablet 120 may receive the video sequence formatted for 1080p HD 30 fps video, and a desk top or laptop computer 122 may receive the video sequence formatted for advanced video coding (AVC) 1080p 30 fps video. These are one of many possible example arrangements.”, “[0118]… [S]ystem 1100 includes a platform 1102 communicatively coupled to a display 1120. Platform 1102 may receive content from a content device such as content services device(s) 1130 or content delivery device(s) 1140 or other similar content sources….”, “[0128]… Examples of content may include any media information including, for example, video, music, medical and gaming information, and so forth.”, and “[0135] Platform 1102 may establish one or more logical or physical channels to communicate information. The information may include media information…. Media information may refer to any data representing content meant for a user. Examples of content may include, for example,… videoconference, streaming video,… and so forth.”, see also [0001] and “[0123] Graphics subsystem 1115 may perform processing of images such as… video for display…. An analog or digital interface may be used to communicatively couple graphics subsystem 1115 and display 1120. For example, the interface may be any of a High-Definition Multimedia Interface, Display Port, wireless HDMI, and/or wireless HD compliant techniques….”);
decoding the one or more streams of pixel data as decoded pixel data (Luo: FIG. 1 and “[0030]… [A] transcoder 110… that has a decoder unit 112 that decodes compressed image data of frames of a frame sequence of a video (also referred to herein as a video sequence)…, and then uses an encoder unit 114 to encode the video sequence formatted to be compatible with multiple end devices for display of the video sequence….”, see also [0001]);
storing the decoded pixel data as a plurality of frames in a frame buffer associated with the plurality of displays (Luo: FIGs. 1 and 3, “[0030]… [A] transcoder 110… that has a decoder unit 112 that decodes compressed image data of frames of a frame sequence of a video (also referred to herein as a video sequence)…, and then uses an encoder unit 114 to encode the video sequence formatted to be compatible with multiple end devices for display of the video sequence….”, “[0037]… The decoder 302 may receive a bitstream of compressed video data, and may perform de-entropy coding to generate readable values for one or more frames in a frame sequence where each frame is formed of the image data (the luma and/or chroma values) in addition to any supporting data which may include prediction data such as prediction mode and motion vectors, but could also include quantization data and/or other supporting data.”, “[0038] The decoder 302 may then perform inverse quantization and transform, and residual code assembly. Then frames of image data are reconstructed either by using intra-prediction or by adding inter-prediction based predictions to the residuals. Filtering is applied to the reconstructed rough frame to generate a final de-compressed frame….”, and “[0039]… [T]he de-compressed frames may be stored in a de-compressed (or non-compressed) frame buffer 306 where the image data of the frames is accessible to an encoder 316.”, see also [0001] and “[0079]… [T]he frames may be obtained or read from memory multiple times, one time for each different encoding session that is to be established for different video formats and/or video codec formats or standards to be used….”);
encoding the plurality of frames of decoded pixel data stored in the frame buffer according to a second video coding format (Luo: FIGs. 1-3 and “[0030]… [A] transcoder 110… that has a decoder unit 112 that decodes compressed image data of frames of a frame sequence of a video (also referred to herein as a video sequence)…, and then uses an encoder unit 114 to encode the video sequence formatted to be compatible with multiple end devices for display of the video sequence…. The transcoder may… be located on one of the end devices itself such as a smartphone to form a personal area network (PAN). Such a transcoder may receive a single compressed version of a video sequence, and then provides bitstreams of the video sequence re-compressed in multiple different formats. By some example arrangements, a large screen television 116 may receive the video sequence formatted for HEVC 4K or 8K 60 fps video, a smartphone 118 may receive the video sequence formatted for HEVC 720p 30 fps video, a tablet 120 may receive the video sequence formatted for 1080p HD 30 fps video, and a desk top or laptop computer 122 may receive the video sequence formatted for advanced video coding (AVC) 1080p 30 fps video. These are one of many possible example arrangements.”, “[0031] Referring to FIG. 2,… transcoder 200 may be used to perform the methods of video coding….”, “[0034]… [T]he transcoder 200 may have one or more encoders to establish multiple individual encoding sessions, one for each video format alternative that needs its own encoding session to generate video image data with a desired format….”, “[0035]… Encoding session(s) 3 (216) may provide data in different codec formats or standards such as HEVC or VP9 to name a couple of examples and that will establish images 224 that are compatible with different decoders and may offer different quality levels depending on the standard used….”, “[0037] Referring to FIG. 3, in more detail, an example transcoder (or video coding system or device) 300…. The transcoder 300 may have a decoder 302…. The decoder 302 may receive a bitstream of compressed video data, and may perform de-entropy coding to generate readable values for one or more frames in a frame sequence where each frame is formed of the image data (the luma and/or chroma values) in addition to any supporting data which may include prediction data such as prediction mode and motion vectors, but could also include quantization data and/or other supporting data.”, “[0038] The decoder 302 may then perform inverse quantization and transform, and residual code assembly. Then frames of image data are reconstructed either by using intra-prediction or by adding inter-prediction based predictions to the residuals. Filtering is applied to the reconstructed rough frame to generate a final de-compressed frame….”, and “[0039]… [T]he de-compressed frames may be stored in a de-compressed (or non-compressed) frame buffer 306 where the image data of the frames is accessible to an encoder 316.”, see also [0001], [0028]-[0029], [0079], and [0102]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Janus to include the features taught by Luo in order to display images on devices with different video coding formats.
However, it is noted that Janus as modified by Luo does not teach:
the second video coding format is associated with a docking station coupled to the plurality of displays; and
outputting, to the docking station, the plurality of encoded frames for display on the plurality of displays, respectively.
Malemezian teaches:
a docking station (802 in FIG. 8) coupled to a plurality of displays (804 and 806 in FIG. 8) (Malemezian: FIG. 8 and “[0070]… [T]he branch device is a docking station (802)… driving two monitors (e.g., monitor1 (804), monitor2 (806))….”, see also FIGs. 1-2, “[0026]… The branch device (106) takes external input interface content and transports the content to an external output interface. For example, the branch device (106) may be a docking station….”, and “[0031]… As shown in FIG. 2, the system may include multiple display devices (e.g., display device 1 (208), display device N (210)) connected via a branch device (206) to a source device (202)….”); and
outputting, to the docking station, pixel data for display on the plurality of displays, respectively (Malemezian: FIG. 8, “[0070]… [T]he branch device is a docking station (802)… driving two monitors (e.g., monitor1 (804), monitor2 (806))…. The docking station (802) may further be connected to a laptop (808) as the source device….”, “[0074]… [T]he source device produces original video streams….”, and “[0075] Before sending the video streams to the display device ports….”, see also FIGs. 1-2, “[0026] A branch device (106) is interposed between and coupled to both the display device (104) and the source device (102).… The branch device (106) takes external input interface content and transports the content to an external output interface. For example, the branch device (106) may be a docking station….”, and “[0029] Continuing with FIG. 1, an original video stream (112) is a video stream that is output of the source device (102). In particular, video signals of the original video stream (112) is as transmitted from the source device (102) to the display device (104)….”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to further include the features taught by Malemezian, such that Janus as modified teaches: receiving, from one or more image sources, one or more streams of pixel data encoded according to a first video coding format associated with a plurality of displays (receiving, one or more image sources, one or more streams of pixel data, and associated with a plurality of displays of Janus combined with receiving, one or more image sources, one or more streams of pixel data, encoded according to a first video coding format, and associated with a plurality of displays of Luo); decoding the one or more streams of pixel data as decoded pixel data (one or more streams of pixel data of Janus combined with decoding, one or more streams of pixel data, and decoded pixel data of Luo); storing the decoded pixel data as a plurality of frames in a plurality of frame buffers associated with the plurality of displays, respectively (storing, received pixel data, plurality of frames, plurality of frame buffers associated, and plurality of displays of Janus combined with storing, decoded pixel data, plurality of frames, frame buffer, associated, and plurality of displays of Luo); encoding the plurality of frames of decoded pixel data stored in the plurality of frame buffers according to a second video coding format associated with a docking station coupled to the plurality of displays (plurality of frames of received pixel data, stored, plurality of frame buffers, and plurality of displays of Janus combined with encoding, plurality of frames of decoded pixel data, stored, frame buffer, second video coding format of Luo combined with the docking station, coupled, and plurality of displays of Malemezian); and outputting, to the docking station, the plurality of encoded frames for display on the plurality of displays, respectively (plurality of frames and plurality of displays of Janus combined with the plurality of encoded frames and plurality of displays of Luo and outputting, docking station, pixel data, and plurality of displays of Malemezian), for “branch device bandwidth management for video streams.” (Malemezian: [0020]).
Regarding claim 4, Janus as modified by Luo and Malemezian teaches:
The method of claim 1, wherein each of the plurality of frame buffers is separate from an operating system of the mobile computing device (Janus: see FIG. 1 and “[0015]… [M]emory 106 may contain two frame buffers 112 (frame buffer A) and 114 (frame buffer B) that store image data corresponding to frame data 116 (frame data A) and 118 (frame data B), respectively….”, see also FIG. 2 and “[0058] Various embodiments may be implemented using hardware elements, software elements, or a combination of both…. Examples of software may include… operating system software…. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.” It would have been obvious to include the claimed features since it would have been within the general skill of one of ordinary skill in the art to select features on the basis of suitability for the intended use to store decoded pixel data from image sources.).
Regarding claim 5, Janus as modified by Luo and Malemezian teaches:
The method of claim 1, further comprising:
adjusting a resolution of the decoded pixel data based at least in part on a resolution of each display of the plurality of displays (Janus: a resolution of the received pixel data and a resolution of displays 108 and 110; Luo: “[0030]… [A] transcoder that has a decoder unit 112 that decodes compressed image data of frames of a frame sequence of a video (also referred to herein as a video sequence)…, and then uses an encoder unit 114 to encode the video sequence formatted to be compatible with multiple end devices for display of the video sequence…. Such a transcoder may receive a single compressed version of a video sequence, and then provides bitstreams of the video sequence re-compressed in multiple different formats. By some example arrangements, a large screen television 116 may receive the video sequence formatted for HEVC 4K or 8K 60 fps video, a smartphone 118 may receive the video sequence formatted for HEVC 720p 30 fps video, a tablet 120 may receive the video sequence formatted for 1080p HD 30 fps video, and a desk top or laptop computer 122 may receive the video sequence formatted for advanced video coding (AVC) 1080p 30 fps video. These are one of many possible example arrangements.”, see also FIG. 2, [0023], [0029], “[0035]… [E]ncoding session(s) 1(212) may be established for each change in resolution or scaling that is to be provided, resulting in images (or actually frame sequences) 220 with different image sizes that can be provided to different end devices for viewing….”, [0041], [0079]-[0080], and [0102]. Also, it would have been obvious to include the claimed features since it would have been within the general skill of one of ordinary skill in the art to select features on the basis of suitability for the intended use to display images on different displays according to a resolution of a corresponding display.).
Regarding claim 8, Janus as modified by Luo and Malemezian teaches:
The method of claim 1, further comprising:
receiving one or more user inputs (Malemezian: keyboard or mouse inputs) via one or more input devices (Malemezian: keyboard or mouse) coupled to the docking station (Malemezian: FIG. 8 and “[0070]… The docking station (802) may further include a USB layer module for receiving an auxiliary (“AUX”) and/or fast AUX (“FAUX”) channel from the DP cable via the source device port and providing the AUX channel to USB hub. The USB hub may be used to connect to peripheral devices, such as a keyboard and mouse.”); and
updating the plurality of frames of pixel data based at least in part on the one or more user inputs (i.e., updating corresponding to inputting the one or more user inputs to select different video to be displayed on the plurality of displays 804 and 806 in FIG. 8 of Malemezian; it would have been obvious to include the claimed features since it would have been within the general skill of one of ordinary skill in the art to select features on the basis of suitability for the intended use to display selected video(s).).
The motivation to combine the references and include the claimed features is so that a user can select different video to be displayed.
Regarding claim 9, Janus as modified by Luo and Malemezian teaches:
The method of claim 8, wherein the updating of the plurality of frames of pixel data comprises:
transmitting the one or more user inputs to the one or more image sources (i.e., transmitting corresponding to selecting different video to be displayed on the plurality of displays 804 and 806 in FIG. 8 of Malemezian; it would have been obvious to include the claimed features since it would have been within the general skill of one of ordinary skill in the art to select features on the basis of suitability for the intended use to select video(s) for display.); and
receiving updated pixel data from the one or more image sources responsive to the one or more user inputs (i.e., receiving corresponding to selecting different video to be displayed on the plurality of displays 804 and 806 in FIG. 8 of Malemezian; it would have been obvious to include the claimed features since it would have been within the general skill of one of ordinary skill in the art to select features on the basis of suitability for the intended use to display selected video(s).).
Regarding claim 10, Janus is modified in the same manner and for the same reasons set forth in the discussion of claim 1 above. Thus, claim 10 is rejected under similar rationale as claim 1 above.
However, it is noted that claim 10 differs from claim 1 above in that the following are recited:
A controller for a mobile computing device, comprising:
a processing system; and
a memory storing instructions that, when executed by the processing system, causes the controller to:
Janus as modified by Luo and Malemezian teaches:
A controller for a mobile computing device, comprising (Janus: FIGs. 3 and 6-7, “[0024] FIG. 3 illustrates a flow diagram of an example process 300 according to various implementations of the present disclosure….”, “[0033]… [A]ny one or more of the blocks of FIG. 3 may be undertaken in response to instructions provided by one or more computer program products. Such program products… providing instructions that, when executed by… a processor, may provide the functionality described herein. The computer program products may be provided in any form of computer readable medium. Thus, for example, a processor including one or more processor core(s) may undertake one or more of the blocks shown in FIG. 3 in response to instructions conveyed to the processor by a computer readable medium.”, and “[0035] FIG. 6 illustrates an example system 600 in accordance with the present disclosure. In various implementations, system 600 may be a media system although system 600 is not limited to this context. For example, system 600 may be incorporated into a… smart device (e.g., smart phone…)….”, see also FIGs. 1-2, “[0058] Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors…. Examples of software may include… instruction sets, computing code, computer code, code segments, computer code segments….”, and “[0059] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein….”; claim 1 above):
a processing system (Janus: “[0033]… [A]ny one or more of the blocks of FIG. 3 may be undertaken in response to instructions provided by one or more computer program products. Such program products… providing instructions that, when executed by… a processor, may provide the functionality described herein. The computer program products may be provided in any form of computer readable medium. Thus, for example, a processor including one or more processor core(s) may undertake one or more of the blocks shown in FIG. 3 in response to instructions conveyed to the processor by a computer readable medium.”, see also FIGs. 1-2 and 6); and
a memory storing instructions that, when executed by the processing system, causes the controller to: (Janus: “[0013] The material disclosed herein may be implemented in hardware, firmware, software, or any combination thereof. The material disclosed herein may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any medium and/or mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others.”, see also FIGs. 1-2 and 6 and “[0033]… [A]ny one or more of the blocks of FIG. 3 may be undertaken in response to instructions provided by one or more computer program products. Such program products… providing instructions that, when executed by… a processor, may provide the functionality described herein. The computer program products may be provided in any form of computer readable medium. Thus, for example, a processor including one or more processor core(s) may undertake one or more of the blocks shown in FIG. 3 in response to instructions conveyed to the processor by a computer readable medium.”).
Regarding claim 12, Janus as modified by Luo and Malemezian teaches:
The controller of claim 10, wherein the mobile computing device comprises a smartphone (Janus: FIG. 7, “[0054]…. [D]evice 700 may be implemented as a mobile computing device having wireless capabilities….”, and “[0055] As described above, examples of a mobile computing device may include a… smart device (e.g., smart phone…)….”).
Regarding claim 15, this claim is rejected under similar rationale as claim 4 above.
Regarding claim 16, this claim is rejected under similar rationale as claim 5 above.
Regarding claim 19, this claim is rejected under similar rationale as claim 8 above.
Regarding claim 20, this claim is rejected under similar rationale as claim 9 above.
Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Janus in view of Luo, in further view of Malemezian, and in further view of Rabii et al. in US 2015/0334388 A1 (hereinafter Rabii).
Regarding claim 2, Janus as modified by Luo and Malemezian teaches:
The method of claim 1.
However, it is noted that Janus as modified by Luo and Malemezian does not teach:
wherein the first video coding format is associated with an x264 video codec.
Rabii teaches:
wherein a first video coding format is associated with an x264 video codec (Rabii: “[0024]… H.264/MPEG-4 AVC may also be used to encode the video stream [from a transmitting device to a display device]….”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the features taught by Rabii, such that Janus as modified teaches: wherein the first video coding format is associated with an x264 video codec (first video coding format of Janus as modified combined with the first video coding format of Rabii), so as to “significantly reduce the bandwidth needed to transmit video from a transmitting device to a display device….” (Rabii: [0024]).
Regarding claim 11, Janus as modified by Luo and Malemezian is further modified by Rabii in the same manner and for the same reason set forth in the discussion of claim 2 above. Thus, claim 11 is rejected under similar rationale as claim 2 above.
Claims 6-7 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Janus in view of Luo, in further view of Malemezian, and in further view of Yoon et al. in KR 10-0892763 B1 (hereinafter Yoon; an original copy and a full machine translation thereof were provided with the August 1, 2025 Office action).
Regarding claim 6, Janus as modified by Luo and Malemezian teaches:
The method of claim 1.
However, it is noted that Janus as modified by Luo and Malemezian does not teach:
further comprising:
receiving authentication data associated with a user of the mobile computing device; and
authenticating the user based on the authentication data, the pixel data being received from the one or more image sources based at least in part on authenticating the user.
Yoon teaches:
receiving authentication data (identifiable ID and password) associated with a user of a mobile computing device (110 in FIG. 3) (Yoon: FIGs. 3 and 6, p. 5, ¶ 7 (“… [T]he thin client (100)… is a mobile terminal or a docking station equipped with a mobile terminal….”) and 8th to the last ¶ (“The portable terminal [110] may be…a smart phone….”), and p. 6, 4th to the last ¶ (“… [T]he thin client logs into the server using an identifiable ID and password to connect to the server….”)); and
authenticating the user based on the authentication data, data being received from one or more sources (200 in FIG. 3) based at least in part on authenticating the user (Yoon: FIG. 6, p. 5, ¶ 6 (“… [T]hin client (100)… connects to the server (200) after going through a prescribed authentication process.”), and p. 6, 4th to the last ¶ (“… [T]he thin client logs into the server using an identifiable ID and password to connect to the server….”) and 2nd to the last ¶ (“… [T]he thin client can connect to the server and receive data stored in the server….”)).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the features taught by Yoon, such that Janus as modified teaches: further comprising: receiving authentication data associated with a user of the mobile computing device (mobile computing device of Janus as modified combined with the authentication data, user, and mobile computing device of Yoon); and authenticating the user based on the authentication data, the pixel data being received from the one or more image sources based at least in part on authenticating the user (pixel data and one or more image sources of Janus as modified combined with authenticating the user, authentication data, data, and one or more sources of Yoon), to limit access to data to an authenticated user.
Regarding claim 7, Janus as modified by Luo, Malemezian, and Yoon teaches:
The method of claim 6, wherein the authentication data includes biometric data, a password, or a security access code (Yoon: a password; FIG. 6 and p. 6, 4th to the last ¶ (“… [T]he thin client logs into the server using… [a] password to connect to the server….”)).
Regarding claim 17, Janus as modified by Luo and Malemezian is further modified by Yoon in the same manner and for the same reason set forth in the discussion of claim 6 above. Thus, claim 17 is rejected under similar rationale as claim 6 above.
Regarding claim 18, this claim is rejected under similar rationale as claim 7 above.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Janus in view of Luo, in further view of Malemezian, and in further view of Wan in WO 2022/194140 A1 (hereinafter Wan; an original copy and a full machine translation thereof were provided with the August 1, 2025 Office action).
Regarding claim 13, Janus as modified by Luo and Malemezian teaches:
The controller of claim 10.
However, it is noted that Janus as modified by Luo and Malemezian does not teach:
wherein the one or more image sources include a remote desktop server.
Wan teaches:
wherein one or more image sources include a remote desktop server (2 in FIG. 1) (Wan: FIG. 1 and “[0027] When server 2 is a remote desktop server, the scenario shown in FIG1 is a remote desktop connection scenario, and the remote desktop server is used to provide a video stream of the remote desktop to the mobile terminal 1 so that the mobile terminal can display the screen of the remote desktop….”, see also “[0026] As shown in FIG1 , the server 2 remotely provides a video stream to the mobile terminal 1, and the mobile terminal1 presents the video content according to the received video stream. Specifically, the mobile terminal 1 is an electronic device such as a smart phone….”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the features taught by Wan, such that Janus as modified teaches: wherein the one or more image sources include a remote desktop server (one or more image sources of Janus as modified combined with the one or more image sources and remote desktop server of Wan), to provide images to a mobile computing device.
Claims 1 and 10 are also rejected under 35 U.S.C. 103 as being unpatentable over Thomsen in US 2007/0177677 A1 (hereinafter Thomsen) in view of Luo, in further view of Malemezian.
Regarding claim 1, Thomsen teaches:
A method of processing content (video) for multiple displays (for, e.g., a television, personal computer, and personal digital assistant) by a device (e.g., cable television set-top box), comprising (Thomsen: FIGs. 1-2, “[0003] The present disclosure is generally related to the processing of bit streams, and more specifically to the transcoding of media bit streams.”, “[0017] Although the described transcoder systems and methods could be used in a number of potential environments, FIG. 1 depicts an embodiment of a cable television distribution network 100 in which embodiments of the transcoders described herein may be used. In general, network 100 relays multimedia signals received from a number of sources, such as satellites 102, to a plurality of remote locations 104. Such multimedia signals could be, for example video and/or audio signals….”, and “[0022] Decoder 116 could be, for example, in a cable television set-top box. According to other embodiments, decoder 116 could be associated with a television, stereo system, or computing device (e.g. personal computer, laptop, personal digital assistant (PDA), etc.). Decoder 116 may receive a plurality of programs on a respective channel, each channel carried by a respective multimedia stream (which can include audio and video signals, among others).”, see also [0023], [0027], and [0040]):
receiving, from one or more image sources (video stream source(s)), one or more streams of pixel data (video stream) encoded according to a first video coding format associated with a plurality of displays (of, e.g., a television, personal computer, and personal digital assistant) (Thomsen: FIGs. 1-2, “[0017]… In general, network 100 relays multimedia signals received from a number of sources, such as satellites 102, to a plurality of remote locations 104. Such multimedia signals could be, for example video and/or audio signals…. The remote locations 104 could be residences or businesses that pay for, or otherwise receive, cable television programming. Although reference may be made generally to multimedia signals throughout the detailed description, signals having only one form of media, such as audio or video signals alone, are intended to be well within the scope of the disclosure.”, “[0018] Such multimedia signals and/or data signals may be transmitted over a down-link 106 from satellites 102 to a respective receiver 108 at a cable head-end 110. The signals received at the cable head-end 110 can be multiplexed data streams. Such data streams may comprise compressed multimedia streams transmitted in a variety of formats, such as, but not limited to, MPEG-1, MPEG-2, MPEG-4, VC-1, mp3, and/or RealAudio streams. Such compressed multimedia streams may be transmitted to the cable head-end 110 at a variety of bit rates.”, “[0019] A transcoder 112, located at the cable head-end 110, functions to decode and re-encode the individual media streams for their eventual transmission to remote locations 104. That is, it is sometimes desired to re-encode a previously encoded stream….”, and “[0022] Decoder 116 could be, for example, in a cable television set-top box. According to other embodiments, decoder 116 could be associated with a television, stereo system, or computing device (e.g. personal computer, laptop, personal digital assistant (PDA), etc.). 
Decoder 116 may receive a plurality of programs on a respective channel, each channel carried by a respective multimedia stream (which can include audio and video signals, among others).”, see also [0023], [0027], [0035], [0039], and [0040]);
decoding the one or more streams of pixel data as decoded pixel data (Thomsen: “[0019] A transcoder 112, located at the cable head-end 110, functions to decode… the individual media streams….”, see also “[0027] FIG. 2 depicts an embodiment of a transcoder 112a that could be used in the cable head-end (or decoder 116, etc.) of FIG. 1. A transcoder, in its simplest form, comprises a decoder for decoding the first compressed multimedia stream into an intermediate uncompressed format….”, [0032], and [0035]);
storing the decoded pixel data as a plurality of frames in a plurality of buffers (204 in FIG. 2) associated with the plurality of displays, respectively (Thomsen: see FIG. 2, “[0022] Decoder 116 could be, for example, in a cable television set-top box. According to other embodiments, decoder 116 could be associated with a television, stereo system, or computing device (e.g. personal computer, laptop, personal digital assistant (PDA), etc.). Decoder 116 may receive a plurality of programs on a respective channel, each channel carried by a respective multimedia stream (which can include audio and video signals, among others).”, “[0035]… [D]ecoded-data buffer 204, which holds the decoded, uncompressed frequency domain data 216….”, and “[0039]… [T]ranscoder 112a includes a 1:1 relationship between incoming streams, buffer memory blocks, and transcoding processors. Thus, if one-thousand streams are to be processed, among other redundancies, an equal number of decoded data buffers 104, re-encoded data buffers 110, parameter buffers 126, decoders 102, requantization elements 106, and variable-length encoders 108 are used.”);
encoding the plurality of frames of decoded pixel data stored in the plurality of buffers according to a second video coding format (Thomsen: “[0019] A transcoder 112, located at the cable head-end 110, functions to decode and re-encode the individual media streams for their eventual transmission to remote locations 104. That is, it is sometimes desired to re-encode a previously encoded stream….” and “[0027] FIG. 2 depicts an embodiment of a transcoder 112a that could be used in the cable head-end (or decoder 116, etc.) of FIG. 1. A transcoder, in its simplest form, comprises a decoder for decoding the first compressed multimedia stream into an intermediate uncompressed format, followed by an encoder for encoding and compressing the audio and/or video from the intermediate uncompressed format to a second compressed format. In some instances this approach may be all that is needed for efficiently transcoding from one format to another (e.g. MPEG-2 to MPEG-4)….”, see also [0018], [0023], [0032], [0037], and “[0040]… [O]ther transcoding operations are intended to be within the scope of the present disclosure. Such operations could include converting the underlying multimedia streams from one format to another (i.e. MPEG-2 to MPEG-4; MPEG-4 to VC-1, etc.). Various analysis and operations may be performed on the underlying multimedia content in the process of decoding and re-encoding. The specifics of this analysis and operations are well within the skill in the art and are outside of the scope of this disclosure.”); and
outputting the plurality of encoded frames for display on the plurality of displays, respectively (Thomsen: FIGs. 1-2, “[0019] A transcoder 112, located at the cable head-end 110, functions to decode and re-encode the individual media streams for their eventual transmission to remote locations 104. That is, it is sometimes desired to re-encode a previously encoded stream….”, “[0021] Once the multimedia streams have been transcoded using transcoder 112, the streams can be transmitted over communication connection 114 to one or more decoders 116 at the remote location 104…. Decoder 116 can, for example, decode and extract the multimedia signals from the transcoded streams for playback on a playback device 118. Playback device could be, for example, a television or audio playback system.”, “[0022] Decoder 116 could be, for example, in a cable television set-top box. According to other embodiments, decoder 116 could be associated with a television, stereo system, or computing device (e.g. personal computer, laptop, personal digital assistant (PDA), etc.). Decoder 116 may receive a plurality of programs on a respective channel, each channel carried by a respective multimedia stream (which can include audio and video signals, among others).”, and “[0023] Although the transcoder 112 may be described in certain embodiments as being part of the cable head-end 110, the transcoder could also be used in a number of other locations, such as in decoder 116. For example, according to such an embodiment, decoder 116 may receive a plurality of multimedia streams (e.g. representing one or more channels of audio and/or video content) over connection 114. These streams may be in an inappropriate form for decoder 116 to properly decode and provide to device 118 for playback. Thus, decoder 116 may include a transcoder similar to transcoder 112 to transform the streams into a target format that is usable for decoder 116 or playback device 118….”).
However, it is noted that Thomsen does not teach:
the plurality of buffers are a plurality of frame buffers,
but which would have been obvious to include, such that Thomsen as modified teaches: storing the decoded pixel data as a plurality of frames in a plurality of frame buffers associated with the plurality of displays, respectively, since it would have been within the general skill of one of ordinary skill in the art to select features on the basis of their suitability for their intended use to store decoded frames of pixel data.
However, it is noted that Thomsen as modified does not teach:
said device is a mobile computing device;
said encoding the plurality of frames of decoded pixel data stored in the plurality of frame buffers according to the second video coding format associated with a docking station coupled to the plurality of displays; and
said outputting, to the docking station, the plurality of encoded frames for display on the plurality of displays, respectively.
Luo teaches:
A method of processing content (video) for multiple displays (of 116-122 in FIG. 1) by a mobile computing device (e.g., smartphone) (Luo: see FIGs. 1 and 3 and “[0030] The network 100 also may include a transcoder 110… that has a decoder unit 112 that decodes compressed image data of frames of a frame sequence of a video (also referred to herein as a video sequence)…, and then uses an encoder unit 114 to encode the video sequence formatted to be compatible with multiple end devices for display of the video sequence…. [T]he transcoder may be at the location of the end devices, where for example, the transcoder may be part of a business or residential gateway, set-top box (cable box), and so forth that then transmits multiple encoded video sequences of different formats to different devices. The transcoder may even be located on one of the end devices itself such as a smartphone to form a personal area network (PAN). Such a transcoder may receive a single compressed version of a video sequence, and then provides bitstreams of the video sequence re-compressed in multiple different formats. By some example arrangements, a large screen television 116 may receive the video sequence formatted for HEVC 4K or 8K 60 fps video, a smartphone 118 may receive the video sequence formatted for HEVC 720p 30 fps video, a tablet 120 may receive the video sequence formatted for 1080p HD 30 fps video, and a desk top or laptop computer 122 may receive the video sequence formatted for advanced video coding (AVC) 1080p 30 fps video. These are one of many possible example arrangements.” );
encoding a plurality of frames of decoded pixel data stored in a frame buffer according to a second video coding format (Luo: FIGs. 1-3, “[0030]… [A] transcoder 110… that has a decoder unit 112 that decodes compressed image data of frames of a frame sequence of a video (also referred to herein as a video sequence)…, and then uses an encoder unit 114 to encode the video sequence formatted to be compatible with multiple end devices for display of the video sequence…. [T]he transcoder may be at the location of the end devices, where for example, the transcoder may be part of a business or residential gateway, set-top box (cable box), and so forth that then transmits multiple encoded video sequences of different formats to different devices. The transcoder may even be located on one of the end devices itself such as a smartphone to form a personal area network (PAN). Such a transcoder may receive a single compressed version of a video sequence, and then provides bitstreams of the video sequence re-compressed in multiple different formats. By some example arrangements, a large screen television 116 may receive the video sequence formatted for HEVC 4K or 8K 60 fps video, a smartphone 118 may receive the video sequence formatted for HEVC 720p 30 fps video, a tablet 120 may receive the video sequence formatted for 1080p HD 30 fps video, and a desk top or laptop computer 122 may receive the video sequence formatted for advanced video coding (AVC) 1080p 30 fps video. These are one of many possible example arrangements.”, and “[0039]… [T]he de-compressed frames may be stored in a de-compressed (or non-compressed) frame buffer 306 where the image data of the frames is accessible to an encoder 316.”, see also [0001], [0028]-[0029], [0031], [0034]-[0035], [0037]-[0038], [0079], and [0102]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the features taught by Luo, so a user can send video images from a mobile device to multiple connected display devices.
However, it is noted that Thomsen as modified by Luo does not teach:
the second video coding format is associated with a docking station coupled to the plurality of displays; and
said outputting, to the docking station.
Malemezian teaches:
a docking station (802 in FIG. 8) coupled to a plurality of displays (804 and 806 in FIG. 8) (Malemezian: FIG. 8 and “[0070]… [T]he branch device is a docking station (802)… driving two monitors (e.g., monitor1 (804), monitor2 (806))….”, see also FIGs. 1-2, “[0026]… The branch device (106) takes external input interface content and transports the content to an external output interface. For example, the branch device (106) may be a docking station….”, and “[0031]… As shown in FIG. 2, the system may include multiple display devices (e.g., display device 1 (208), display device N (210)) connected via a branch device (206) to a source device (202)….”); and
outputting, to the docking station, pixel data for display on the plurality of displays, respectively (Malemezian: FIG. 8, “[0070]… [T]he branch device is a docking station (802)… driving two monitors (e.g., monitor1 (804), monitor2 (806))…. The docking station (802) may further be connected to a laptop (808) as the source device….”, “[0074]… [T]he source device produces original video streams….”, and “[0075] Before sending the video streams to the display device ports….”, see also FIGs. 1-2, “[0026] A branch device (106) is interposed between and coupled to both the display device (104) and the source device (102).… The branch device (106) takes external input interface content and transports the content to an external output interface. For example, the branch device (106) may be a docking station….”, and “[0029] Continuing with FIG. 1, an original video stream (112) is a video stream that is output of the source device (102). In particular, video signals of the original video stream (112) is as transmitted from the source device (102) to the display device (104)….”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the features taught by Malemezian, such that Thomsen as modified teaches: A method of processing content for multiple displays by a mobile computing device, comprising (method of Thomsen combined with the method of Luo): encoding the plurality of frames of decoded pixel data stored in the plurality of frame buffers according to a second video coding format associated with a docking station coupled to the plurality of displays (encoding of Thomsen as modified combined with encoding of Luo and the docking station of Malemezian); and outputting, to the docking station, the plurality of encoded frames for display on the plurality of displays, respectively (outputting of Thomsen as modified combined with the docking station of Malemezian), for “branch device bandwidth management for video streams.” (Malemezian: [0020]).
Regarding claim 10, Thomsen is modified in the same manner and for the same reasons set forth in the discussion of claim 1 immediately above. Thus, claim 10 is rejected under a rationale similar to that applied to claim 1 immediately above.
However, it is noted that claim 10 differs from claim 1 immediately above in that the following are recited:
A controller for a mobile computing device, comprising:
a processing system; and
a memory storing instructions that, when executed by the processing system, causes the controller to:
Thomsen as modified by Luo and Malemezian teaches:
A controller for a mobile computing device, comprising (Thomsen: A controller for a device, comprising; “[0024] Now that a number of potential non-limiting environments have been described within which the disclosed transcoder systems and methods can be used, attention is now directed to various exemplary embodiments of such transcoder systems and methods. It should be understood that any of the methods or processing described herein could be implemented within hardware, software, or any combination thereof. For example, when processing or process steps are implemented in software, it should be noted that such steps to perform the processing can be stored on any computer-readable medium for use by, or in connection with, any computer-related system or method…. [A] computer-readable medium is an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program for use by, or in connection with, a computer related system or method. The methods can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.”; claim 1 immediately above (mobile computing device)):
a processing system (Thomsen: [0024]); and
a memory storing instructions that, when executed by the processing system, causes the controller to: (Thomsen: [0024]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to K. Kiyabu, whose telephone number is (571) 270-7836. The examiner can normally be reached Monday through Thursday, 9:00 A.M. - 5:00 P.M. EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Temesghen Ghebretinsae, can be reached at (571) 272-3017. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicants are encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/K. K./
Examiner, Art Unit 2626
/TEMESGHEN GHEBRETINSAE/Supervisory Patent Examiner, Art Unit 2626 3/2/26