DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim that this application is a continuation of International Application No. PCT/CN2022/100496, filed on June 22, 2022, and of the claim of benefit of foreign priority from Chinese Patent Application No. CN202110786721.X, filed on July 12, 2021.
Information Disclosure Statement
The information disclosure statements (“IDS”) filed on 10/07/2024, 01/29/2025, and 07/18/2025 have been reviewed, and the listed references have been noted.
Drawings
The drawings (20 pages) have been considered and placed on record in the file.
Status of Claims
Claims 1-20 are pending.
Response to Amendment
The amendment filed 02/27/2022 has been entered. Claims 1-20 remain pending in the
application. Claims 1, 4, 11-12, 15-16, and 20 are amended.
Response to Arguments
Applicant’s arguments with respect to claims 1, 12, and 16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 5, 7-8, 11-12, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dimitrov et al. (US 2019/0342555 A1), hereinafter referred to as Dimitrov, in view of Liu et al., “Enhancing Video Encoding for Cloud Gaming Using Rendering Information” (2015), hereinafter referred to as Liu.
Claim 1
Dimitrov discloses a data processing method (Dimitrov, Fig. 1), comprising:
before an encoder encodes an image (Dimitrov, Fig. 1, video encoder 112) rendered by a graphics processing unit (Dimitrov, [0047], “VM's can be created where the CPU and GPU's are allocated to the VMs to provide server-based rendering.”), obtaining, by a graphics processing unit ([0047], “VM's can be created where the CPU and GPU's are allocated to the VMs to provide server-based rendering.”), a rendered image (Dimitrov, [0035], “Real-time rendering can be used to render images or frames that are then encoded to form a video that can be delivered to client computing devices for display to one or more users”) and rendering information related to the rendered image (Dimitrov, [0055], “the memory 214 includes a low-resolution color buffer 215 that stores the color samples (and values thereof) of the rendered image and a high-resolution depth buffer 216 that stores the depth samples (and values thereof) of the rendered image. The color samples are sampled at a first resolution and the depth samples are sampled at a second resolution that is higher than the first resolution.”, [0080], “the image rendered at the step 410 is upscaled to a third resolution, which is higher than the first resolution, at step 420. The step 420 may be performed by an upscaling engine executed by a processor, such as the upscaling engine 320 in FIG. 3. The third resolution may be as high as the second resolution, i.e., the resolution of the depth samples. In the illustrated embodiment, the rendered image is upscaled using the generated color samples and the connection information between the generated depth samples of the rendered image. 
The connection information may be in the form of a connectivity map”), wherein the rendering information includes information used to assist the graphics processing unit to obtain the rendered image through rendering and/or information generated by the graphics processing unit in a process of obtaining the rendered image through rendering (Dimitrov, [0055], “the memory 214 includes a low-resolution color buffer 215 that stores the color samples (and values thereof) of the rendered image and a high-resolution depth buffer 216 that stores the depth samples (and values thereof) of the rendered image. The color samples are sampled at a first resolution and the depth samples are sampled at a second resolution that is higher than the first resolution.”);
compressing, by the graphics processing unit, the rendering information to obtain compressed rendering information (Dimitrov, [0007], “a compressor, coupled to the high resolution depth buffer, configured to compress a depth information from the high resolution depth buffer into a filter map to indicate connection relationships between the color pixels of the low resolution color buffer,”, [0041], “A pixel connectivity map can be obtained from the depth buffer and then compressed to a filter map, which indicates to the client computing device how the color buffer should be filtered”); and
transmitting the rendered image and the compressed rendering information to an encoder (Dimitrov, [0050], “The video encoder 112 encodes the rendered images into a video for transmission. The video encoder 112 can also provide additional functions such as reformatting and image processing. The encoded rendered images are then provided to the video transmitter 113 and sent to the client computing devices 120-160”, [0141], “A low resolution color buffer 910 can be determined and then passed to a video encoder 912. A high resolution depth buffer 915 can be determined and then the depth buffer can be partitioned to create a filter map 917. The result can then be passed to a lossless encoder.”, the filter map is analogous to the compressed rendering information).
Dimitrov does not explicitly disclose wherein the compressed rendering information is used by the encoder to perform encoding optimization in a process of encoding the rendered image.
However, Liu teaches wherein the compressed rendering information (Dimitrov teaches compressing the rendering information prior to encoding) is used by the encoder to perform encoding optimization in a process of encoding the rendered image (Liu, Section II, “our method takes a different approach, i.e., to use rendering information to optimize the motion estimation and mode selection process of H.264/Advanced Video Coding (AVC) encoding”, Section III, “The proposed rendering-based prioritized encoding technique will first utilize rendering information such as pixel depth to compute the importance of different regions of game frame, convert this rendering information to an MB-level saliency map, and finally, find the optimal encoding parameter [in this paper, we choose quantization parameter (QP)] of each MB. The task of finding the optimal QP values for each MB is formed as an optimization problem. The optimal QP values are selected such that given the available bandwidth limit (bit rate budget), the perceptual video quality is maximized and the resulting video bit rate does not exceed the bandwidth limit.”, “Moreover, the selected bit rate target and the generated MB-level saliency map will be used as inputs to a bit rate allocation algorithm, which is responsible for deciding the optimal QP of each MB, such that the overall perceptual video quality is maximized and the encoding bit rate target is met. Finally, the output of the bit rate allocation module, a set of the QP values for each MB, will be passed to the quantization module of the encoder and the game frame will be encoded using these QP values.”).
Dimitrov and Liu are both considered analogous art to the claimed invention because they are in the same field of image rendering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Dimitrov to incorporate the teachings of Liu, wherein the compressed rendering information is used by the encoder to perform encoding optimization in a process of encoding the rendered image. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to reduce the computational complexity of video encoding (Liu, Abstract).
Claim 2
The combination of Dimitrov in view of Liu discloses the method according to claim 1 (Dimitrov, Fig. 1), wherein the compressing the rendering information (Dimitrov, [0007], “a compressor, coupled to the high resolution depth buffer, configured to compress a depth information from the high resolution depth buffer into a filter map to indicate connection relationships between the color pixels of the low resolution color buffer,”, [0041], “A pixel connectivity map can be obtained from the depth buffer and then compressed to a filter map, which indicates to the client computing device how the color buffer should be filtered”) comprises: reducing a resolution of the rendering information; and/or reducing a bit depth of the rendering information (Dimitrov, [0041], “A pixel connectivity map can be obtained from the depth buffer and then compressed to a filter map, which indicates to the client computing device how the color buffer should be filtered. Typically, a pixel connectivity map contains eight bits per depth value, indicating the connectivity in the eight possible directions (shown in FIG. 5C, starting with pixel 570-X and examining the path to the eight pixels 570-C(x)). The filter map takes as input the pixel connectivity map for a two by two (2×2) non-overlapping tile of pixels and compresses the data into fewer bits for each 2×2 tile. The filter map can be generated by determining the number of connected pixels of the target pixel. If there are 0 or 4 pixels connected, then an interpolator is used for the pixel, using a 1×4 or 4×4 filter kernel. If there is 1 pixel connected, then the value of the connected pixel is used as the value of the target pixel. If there are 2 pixels connected, the mean of the 2 connected pixels are used as the value of the target pixel. If there are 3 pixels connected, the mean of the 2 pixels on the long diagonal is used as the value for the target pixel.”, [0123], “a connectivity bitmap can be created.
This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel.”).
Claim 5
The combination of Dimitrov in view of Liu discloses the method according to claim 2 (Dimitrov, Fig. 1), wherein a value of the bit depth of the rendering information is a first bit depth value; and the reducing a bit depth of the rendering information comprises: obtaining a second bit depth value, wherein the second bit depth value is less than the first bit depth value; and converting the bit depth of the rendering information from the first bit depth value to the second bit depth value (Dimitrov, [0041], “A pixel connectivity map can be obtained from the depth buffer and then compressed to a filter map, which indicates to the client computing device how the color buffer should be filtered. Typically, a pixel connectivity map contains eight bits per depth value, indicating the connectivity in the eight possible directions (shown in FIG. 5C, starting with pixel 570-X and examining the path to the eight pixels 570-C(x)). The filter map takes as input the pixel connectivity map for a two by two (2×2) non-overlapping tile of pixels and compresses the data into fewer bits for each 2×2 tile. The filter map can be generated by determining the number of connected pixels of the target pixel. If there are 0 or 4 pixels connected, then an interpolator is used for the pixel, using a 1×4 or 4×4 filter kernel. If there is 1 pixel connected, then the value of the connected pixel is used as the value of the target pixel. If there are 2 pixels connected, the mean of the 2 connected pixels are used as the value of the target pixel. If there are 3 pixels connected, the mean of the 2 pixels on the long diagonal is used as the value for the target pixel.”, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels.
The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel.”).
Claim 7
The combination of Dimitrov in view of Liu discloses the method according to claim 1 (Dimitrov, Fig. 1), wherein the transmitting the compressed rendering information (Dimitrov, [0050], “The video encoder 112 encodes the rendered images into a video for transmission. The video encoder 112 can also provide additional functions such as reformatting and image processing. The encoded rendered images are then provided to the video transmitter 113 and sent to the client computing devices 120-160”, [0141], “A low resolution color buffer 910 can be determined and then passed to a video encoder 912. A high resolution depth buffer 915 can be determined and then the depth buffer can be partitioned to create a filter map 917. The result can then be passed to a lossless encoder.”) comprises: dividing the compressed rendering information into a plurality of information blocks; and transmitting the plurality of information blocks separately (Dimitrov, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel. This can result in a representation of sixteen bits per 2×2 tile. This is a reduction from the thirty-two bits required to encode an uncompressed 2×2 tile (i.e. utilizing one bit for each filter map connection direction for each pixel in the 2×2 tile). Proceeding to a step 722, the 2×2 tiles determined in the step 720 (or a step 730, as appropriate) are collected into blocks of eight by eight (8×8) of the 2×2 tiles. If the 8×8 block does not contain an edge pixel, then the entire block can be denoted using one bit. Otherwise, the block data is used, without further compression, to denote the pixels.
Proceeding to a step 724, a compression algorithm, for example, ZIP, or other compression algorithms, can be applied to the result of the step 722. The method 700 proceeds to a step 632.”).
Claim 8
The combination of Dimitrov in view of Liu discloses the method according to claim 1 (Dimitrov, Fig. 1), wherein the method further comprises: dividing the rendering information into a plurality of information blocks (Dimitrov, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels.”); the compressing the rendering information comprises: compressing the plurality of information blocks separately (Dimitrov, [0041], “A pixel connectivity map can be obtained from the depth buffer and then compressed to a filter map, which indicates to the client computing device how the color buffer should be filtered. Typically, a pixel connectivity map contains eight bits per depth value, indicating the connectivity in the eight possible directions (shown in FIG. 5C, starting with pixel 570-X and examining the path to the eight pixels 570-C(x)). The filter map takes as input the pixel connectivity map for a two by two (2×2) non-overlapping tile of pixels and compresses the data into fewer bits for each 2×2 tile. The filter map can be generated by determining the number of connected pixels of the target pixel. If there are 0 or 4 pixels connected, then an interpolator is used for the pixel, using a 1×4 or 4×4 filter kernel.”); and the transmitting the compressed rendering information comprises: transmitting the plurality of compressed information blocks separately (Dimitrov, [0124], “Proceeding to a step 730, the anchor, i.e. original, pixel from each 2×2 color tile can be represented by a single bit indicating whether that 2×2 tile should have a sharpening algorithm applied. The 2×2 tile can then be encoded utilizing a filter or pixel connectivity map for the remaining 3 pixels.
The filter or pixel connectivity map results in twelve different values that can be encoded: one value is for applying a compression algorithm and request sharpening, one value is for applying a compression algorithm with no sharpening, four values for using one of four adjacent pixels without sharpening, and six values where two of four adjacent pixels are used without sharpening. This can reduce the required number of bits for encoding the 2×2 tile to twelve bits. The method 700 then proceeds to the step 722. In another embodiment, four of the twelve values can be removed as not affecting the visual output to a degree noticeable by a user. Therefore, only 8 values need to be encoded. This results in a total of 10 bits per 2×2 tile.”).
Claim 11
The combination of Dimitrov in view of Liu discloses the method according to claim 1 (Dimitrov, Fig. 1), wherein the transmitting the compressed rendering information to the encoder (Dimitrov, [0141], “A low resolution color buffer 910 can be determined and then passed to a video encoder 912. A high resolution depth buffer 915 can be determined and then the depth buffer can be partitioned to create a filter map 917. The result can then be passed to a lossless encoder.”, the filter map is analogous to the compressed rendering information) comprises: transmitting the compressed rendering information to an analysis module (Liu, Fig. 1, Bit-rate allocation module, Dimitrov teaches compressing the rendering information prior to transmitting to an encoder), wherein the analysis module performs analysis based on the compressed rendering information (Liu, Section III, “The proposed rendering-based prioritized encoding technique will first utilize rendering information such as pixel depth to compute the importance of different regions of game frame, convert this rendering information to an MB-level saliency map, and finally, find the optimal encoding parameter [in this paper, we choose quantization parameter (QP)] of each MB”, Dimitrov teaches compressing the rendering information prior to transmitting to an encoder), determines encoding optimization information (Liu, Section III, “The task of finding the optimal QP values for each MB is formed as an optimization problem. The optimal QP values are selected such that given the available bandwidth limit (bit rate budget), the perceptual video quality is maximized and the resulting video bit rate does not exceed the bandwidth limit.”), and transmits the encoding optimization information to the encoder (Liu, Fig.
1, the output of the bit-rate allocation module is transmitted to the quantization module), and wherein the encoding optimization information is used by the encoder to perform encoding optimization in a process of encoding the rendered image (Liu, Section III, “Moreover, the selected bit rate target and the generated MB-level saliency map will be used as inputs to a bit rate allocation algorithm, which is responsible for deciding the optimal QP of each MB, such that the overall perceptual video quality is maximized and the encoding bit rate target is met. Finally, the output of the bit rate allocation module, a set of the QP values for each MB, will be passed to the quantization module of the encoder and the game frame will be encoded using these QP values.”). The proposed combination as well as the motivation for combining the Dimitrov and Liu references presented in the rejection of Claim 1 apply to Claim 11 and are incorporated herein by reference. Thus, the method recited in Claim 11 is met by Dimitrov and Liu.
Claim 12
Dimitrov discloses a data processing method (Dimitrov, Fig. 1), comprising:
obtaining, by a graphics processing unit (Dimitrov, [0047], “VM's can be created where the CPU and GPU's are allocated to the VMs to provide server-based rendering.”), a rendered image (Dimitrov, [0035], “Real-time rendering can be used to render images or frames that are then encoded to form a video that can be delivered to client computing devices for display to one or more users”) and rendering information related to the rendered image (Dimitrov, [0055], “the memory 214 includes a low-resolution color buffer 215 that stores the color samples (and values thereof) of the rendered image and a high-resolution depth buffer 216 that stores the depth samples (and values thereof) of the rendered image. The color samples are sampled at a first resolution and the depth samples are sampled at a second resolution that is higher than the first resolution.”, [0080], “the image rendered at the step 410 is upscaled to a third resolution, which is higher than the first resolution, at step 420. The step 420 may be performed by an upscaling engine executed by a processor, such as the upscaling engine 320 in FIG. 3. The third resolution may be as high as the second resolution, i.e., the resolution of the depth samples. In the illustrated embodiment, the rendered image is upscaled using the generated color samples and the connection information between the generated depth samples of the rendered image. 
The connection information may be in the form of a connectivity map”), wherein the rendering information includes information used to assist the graphics processing unit to obtain the rendered image through rendering and/or information generated by the graphics processing unit in a process of obtaining the rendered image through rendering (Dimitrov, [0055], “the memory 214 includes a low-resolution color buffer 215 that stores the color samples (and values thereof) of the rendered image and a high-resolution depth buffer 216 that stores the depth samples (and values thereof) of the rendered image. The color samples are sampled at a first resolution and the depth samples are sampled at a second resolution that is higher than the first resolution.”);
dividing, by the graphics processing unit ([0047], “VM's can be created where the CPU and GPU's are allocated to the VMs to provide server-based rendering.”), the rendering information into a plurality of information blocks (Dimitrov, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel. This can result in a representation of sixteen bits per 2×2 tile. This is a reduction from the thirty-two bits required to encode an uncompressed 2×2 tile (i.e. utilizing one bit for each filter map connection direction for each pixel in the 2×2 tile). Proceeding to a step 722, the 2×2 tiles determined in the step 720 (or a step 730, as appropriate) are collected into blocks of eight by eight (8×8) of the 2×2 tiles. If the 8×8 block does not contain an edge pixel, then the entire block can be denoted using one bit. Otherwise, the block data is used, without further compression, to denote the pixels. Proceeding to a step 724, a compression algorithm, for example, ZIP, or other compression algorithms, can be applied to the result of the step 722. The method 700 proceeds to a step 632.”);
compressing, by the graphics processing unit, the rendering information to obtain compressed rendering information (Dimitrov, [0007], “a compressor, coupled to the high resolution depth buffer, configured to compress a depth information from the high resolution depth buffer into a filter map to indicate connection relationships between the color pixels of the low resolution color buffer,”, [0041], “A pixel connectivity map can be obtained from the depth buffer and then compressed to a filter map, which indicates to the client computing device how the color buffer should be filtered”); and
transmitting the rendered image to an encoder (Dimitrov, [0050], “The video encoder 112 encodes the rendered images into a video for transmission. The video encoder 112 can also provide additional functions such as reformatting and image processing. The encoded rendered images are then provided to the video transmitter 113 and sent to the client computing devices 120-160”);
transmitting the plurality of information blocks to the encoder separately (Dimitrov, [0141], “A low resolution color buffer 910 can be determined and then passed to a video encoder 912. A high resolution depth buffer 915 can be determined and then the depth buffer can be partitioned to create a filter map 917. The result can then be passed to a lossless encoder.”, the filter map is analogous to the compressed rendering information, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel.”).
Dimitrov does not explicitly disclose wherein the information blocks are used by the encoder to perform encoding optimization in a process of encoding the rendered image.
However, Liu teaches wherein the information blocks (Dimitrov teaches compressing the rendering information and partitioning them prior to encoding) are used by the encoder to perform encoding optimization in a process of encoding the rendered image (Liu, Section II, “our method takes a different approach, i.e., to use rendering information to optimize the motion estimation and mode selection process of H.264/Advanced Video Coding (AVC) encoding”, Section III, “The proposed rendering-based prioritized encoding technique will first utilize rendering information such as pixel depth to compute the importance of different regions of game frame, convert this rendering information to an MB-level saliency map, and finally, find the optimal encoding parameter [in this paper, we choose quantization parameter (QP)] of each MB. The task of finding the optimal QP values for each MB is formed as an optimization problem. The optimal QP values are selected such that given the available bandwidth limit (bit rate budget), the perceptual video quality is maximized and the resulting video bit rate does not exceed the bandwidth limit.”, “Moreover, the selected bit rate target and the generated MB-level saliency map will be used as inputs to a bit rate allocation algorithm, which is responsible for deciding the optimal QP of each MB, such that the overall perceptual video quality is maximized and the encoding bit rate target is met. Finally, the output of the bit rate allocation module, a set of the QP values for each MB, will be passed to the quantization module of the encoder and the game frame will be encoded using these QP values.”).
Dimitrov and Liu are both considered analogous art to the claimed invention because they are in the same field of image rendering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Dimitrov to incorporate the teachings of Liu, wherein the information blocks are used by the encoder to perform encoding optimization in a process of encoding the rendered image. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to reduce the computational complexity of video encoding (Liu, Abstract).
Claims 16-17 and 19-20 are rejected for reasons similar to those described for claims 1-2, 5, and 11. The additional elements recited in Claims 16-17 and 19-20 are disclosed by the combination of Dimitrov in view of Liu, including: a data processing system (Dimitrov, Fig. 1), comprising a graphics processing unit (Dimitrov, [0047], “VM's can be created where the CPU and GPU's are allocated to the VMs to provide server-based rendering.”) and an encoder (Dimitrov, Fig. 1, video encoder 112). The proposed combination as well as the motivation for combining the Dimitrov and Liu references presented in the rejection of Claim 1 apply to Claims 16-17 and 19-20 and are incorporated herein by reference. Thus, the system recited in Claims 16-17 and 19-20 is met by Dimitrov in view of Liu.
Claims 3 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Dimitrov in view of Liu, and further in view of Wang et al. (US 2021/0272327 A1), hereinafter referred to as Wang.
Claim 3
The combination of Dimitrov in view of Liu discloses the method according to claim 2 (Dimitrov, Fig. 1).
The combination of Dimitrov in view of Liu does not explicitly disclose wherein the reducing a resolution of the rendering information comprises: obtaining sampling ratio information, wherein the sampling ratio information comprises horizontal sampling ratio information and vertical sampling ratio information; and performing downsampling on the rendering information in a horizontal dimension and a vertical dimension based on the horizontal sampling ratio information and the vertical sampling ratio information.
However, Wang teaches wherein the reducing a resolution of the rendering information (Wang, [0052], “An example of this would be for the case where the array of data elements represents an array of texture data that is to be used, e.g., by a graphics processor when generating a render output. For instance, when processing such texture data, it is known to store a series of progressively lower resolution (i.e. downscaled) representations of the same image (referred to as texture “mipmaps”) in order to improve the rendering speed and/or reduce the processing burden on the renderer (circuit) of the graphics processor.”, [0053], “Embodiments of the technology described herein, in effect, allow such downsampled representations of the original array of data elements (e.g. texture mipmaps where the array is an array of texture data) to be obtained for ‘free’, i.e. directly from the tree representation. This is achieved in the technology described herein by estimating a data value to be used for a child node (or set of child nodes) at a particular level of the tree based on the node values for the preceding parent nodes in the tree (but not using the node value for the child node(s) in question), and then using the bit count data for the child node(s) at least at the level of the tree in question (and in embodiments also the bit count data for any lower-level child nodes in the tree going down to the level of the leaf nodes) to approximate the contribution from the child node(s) in question”) comprises: obtaining sampling ratio information, wherein the sampling ratio information comprises horizontal sampling ratio information and vertical sampling ratio information (Wang, Fig. 2, which shows the downsampling ratio, including both the vertical and horizontal sampling ratios, [0165], “As shown in FIG.
2, the quadtree 45 representing the 4×4 array of data elements 40 has a root node 41, which has four child nodes 42, each corresponding to a respective 2×2 block 48 of the 4×4 block 40 of data elements. Each such child node 42 of the quadtree 45 then has 4 child nodes which form the leaf nodes 43 of the quadtree 45. The leaf nodes 43 of the quadtree 45 each correspond to a respective individual data element 49 of the 4×4 block 40 of data elements.”) and performing downsampling on the rendering information in a horizontal dimension and a vertical dimension based on the horizontal sampling ratio information and the vertical sampling ratio information (Wang, Fig. 2, [0053], “Embodiments of the technology described herein, in effect, allow such downsampled representations of the original array of data elements (e.g. texture mipmaps where the array is an array of texture data) to be obtained for ‘free’, i.e. directly from the tree representation. This is achieved in the technology described herein by estimating a data value to be used for a child node (or set of child nodes) at a particular level of the tree based on the node values for the preceding parent nodes in the tree (but not using the node value for the child node(s) in question), and then using the bit count data for the child node(s) at least at the level of the tree in question (and in embodiments also the bit count data for any lower-level child nodes in the tree going down to the level of the leaf nodes) to approximate the contribution from the child node(s) in question”).
Dimitrov, Liu, and Wang are all considered to be analogous to the claimed invention because they are in the same field of image rendering. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Dimitrov and Liu to incorporate the teachings of Wang wherein the reducing a resolution of the rendering information comprises: obtaining sampling ratio information, wherein the sampling ratio information comprises horizontal sampling ratio information and vertical sampling ratio information and performing downsampling on the rendering information in a horizontal dimension and a vertical dimension based on the horizontal sampling ratio information and the vertical sampling ratio information. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to use less processing power (Wang, [0052]).
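Purely as an illustrative aside, the two-dimensional downsampling recited in this limitation can be sketched in a few lines; the function name and the nearest-neighbor strategy are hypothetical assumptions for illustration, not teachings drawn from Dimitrov, Liu, or Wang:

```python
def downsample(rendering_info, h_ratio, v_ratio):
    """Reduce resolution by keeping every v_ratio-th row and every
    h_ratio-th column, i.e. downsampling the horizontal dimension by
    h_ratio and the vertical dimension by v_ratio."""
    return [row[::h_ratio] for row in rendering_info[::v_ratio]]

# A 4x4 array of rendering information, downsampled by a sampling
# ratio of 2 in each dimension, yields a 2x2 array (cf. the 4x4
# quadtree example in Wang's Fig. 2).
block = [[r * 4 + c for c in range(4)] for r in range(4)]
low_res = downsample(block, h_ratio=2, v_ratio=2)  # [[0, 2], [8, 10]]
```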
Claim 18 is rejected for similar reasons as those described in claim 3. The combination of Dimitrov in view of Liu in view of Wang further discloses the additional elements of Claim 18, including: a data processing system (Dimitrov, Fig. 1), comprising a graphics processing unit (Dimitrov, [0047], “VM's can be created where the CPU and GPU's are allocated to the VMs to provide server-based rendering.”) and an encoder (Dimitrov, Fig. 1, video encoder 112). The proposed combination as well as the motivation for combining the Dimitrov, Liu, and Wang references presented in the rejection of Claim 3 apply to Claim 18 and are incorporated herein by reference. Thus, the system recited in Claim 18 is met by Dimitrov, Liu, and Wang.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Dimitrov in view of Liu in view of Wang in further view of Chan et al., “A Multiple Description Coding and Delivery Scheme for Motion-Compensated Fine Granularity Scalable Video” (2008), hereinafter referred to as Chan.
Claim 4
The combination of Dimitrov in view of Liu in view of Wang discloses the method according to claim 3 (Dimitrov, Fig. 1), wherein the obtaining sampling ratio information (Wang, [0105], “where each tree is a quadtree and represents a 16×16, 16×4, or 8×8 block of data elements, the arrangement is such that individual 4×4 blocks can be decoded without having to decode any other 4×4 blocks, and such that 16×16, 16×4 or 8×8 blocks can be decoded independently of other 16×16, 16×4, or 8×8 blocks, respectively. In this arrangement, the minimum granularity will be decoding a single 4×4 block. This is acceptable and efficient, as this will typically correspond to the minimum amount of data that can be fetched from memory in one operation in typical memory subsystems.”).
The combination of Dimitrov in view of Liu in view of Wang does not explicitly disclose obtaining storage granularity information of the encoder for a motion vector, and determining the sampling ratio information based on the storage granularity information, wherein the storage granularity information is a resolution of a block corresponding to the motion vector in a motion estimation process; or obtaining compensation granularity information required for motion compensation of a decoder, and determining the sampling ratio information based on the compensation granularity information, wherein the storage granularity information is a resolution of a block for which the decoder needs to perform motion compensation in a motion compensation process.
However, Chan teaches obtaining storage granularity information of the encoder for a motion vector (Chan, Abstract, “Motion-compensated fine-granularity scalability (MC-FGS) with leaky prediction has been shown to provide an efficient tradeoff between compression gain and error resilience, facilitating the transmission of video over dynamic channel conditions”), and determining the sampling ratio information based on the storage granularity information (Chan, page 1354, “The motion-compensated (MC) FGS coding proposed in introduces an MCP loop in the EL (FGS layer) by using the motion vectors (MVs) and prediction modes from the BL. The tradeoff in coding efficiency and error resilience is achieved by controlling the amount of the EL (the number of bitplanes) used for the prediction.”), wherein the storage granularity information is a resolution of a block corresponding to the motion vector in a motion estimation process (Chan, page 1360, “The FGS property is achieved by bit plane coding. We incorporate both partial and leaky predictions into the codec with a coding scheme shown in Fig. 4. In the simulations, we apply a uniform quantization parameter (QP) value to all blocks of the BL for both I-frames and P-frames. To facilitate the studies, we set BL (the largest quantization step) so as to increase the dynamic range of the EL bitrate. The MV resolution in H.264 is set to be ¼”, MV is motion vector); or obtaining compensation granularity information required for motion compensation of a decoder, and determining the sampling ratio information based on the compensation granularity information, wherein the storage granularity information is a resolution of a block for which the decoder needs to perform motion compensation in a motion compensation process (Examiner interprets the claim to only require either one of the obtaining options, as explained above, Chan teaches the first limitation).
Dimitrov, Liu, Wang, and Chan are all considered to be analogous to the claimed invention because they are in the same field of image encoding. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Dimitrov, Liu, and Wang to incorporate the teachings of Chan of obtaining storage granularity information of the encoder for a motion vector, and determining the sampling ratio information based on the storage granularity information, wherein the storage granularity information is a resolution of a block corresponding to the motion vector in a motion estimation process; or obtaining compensation granularity information required for motion compensation of a decoder, and determining the sampling ratio information based on the compensation granularity information, wherein the storage granularity information is a resolution of a block for which the decoder needs to perform motion compensation in a motion compensation process. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to provide an efficient tradeoff between compression gain and error resilience (Chan, Abstract).
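As a hypothetical sketch of how sampling ratio information could be derived from storage granularity information, the following maps one downsampled sample to each motion-vector block; this rule, and all names, are assumptions for illustration only, not teachings of Chan:

```python
def sampling_ratios(storage_granularity):
    """Derive horizontal/vertical sampling ratios from the encoder's
    storage granularity for motion vectors, i.e. the (width, height)
    of the block each motion vector covers during motion estimation.
    The downsampled rendering information then carries one sample per
    motion-vector block."""
    block_w, block_h = storage_granularity
    return {"horizontal": block_w, "vertical": block_h}

# With an H.264-style 16x16 macroblock granularity, every 16th pixel
# is retained in each dimension.
ratios = sampling_ratios((16, 16))  # {'horizontal': 16, 'vertical': 16}
```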
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Dimitrov in view of Liu in view of Shadik et al., (US 2020/0068184 A1), hereinafter referred to as Shadik.
Claim 6
The combination of Dimitrov in view of Liu discloses the method according to claim 5 (Dimitrov, Fig. 1).
The combination of Dimitrov in view of Liu does not explicitly disclose wherein the obtaining a second bit depth value comprises: obtaining a third bit depth value, wherein the third bit depth value is used to represent a bit depth of rendering information required for encoding optimization of the encoder; and determining the second bit depth value based on the first bit depth value and the third bit depth value.
However, Shadik teaches wherein the obtaining a second bit depth value comprises: obtaining a third bit depth value, wherein the third bit depth value is used to represent a bit depth of rendering information required for encoding optimization of the encoder; and determining the second bit depth value based on the first bit depth value and the third bit depth value (Shadik [0059], “FIG. 6 depicts different three-bit depth levels 602 from the perspective of vantage point 210. Depth levels 602 are depicted using an octal notation (designated herein using a prefix of “00”) in which, for example, “Depth 000” corresponds to a three-bit binary value of 0b000, “Depth 001” corresponds to a three-bit binary value of 0b001, and so forth up until “Depth 007,” which corresponds to a three-bit binary value of 0b111. While a three-bit representation is depicted in FIG. 4 for representing processed, compressed depth values, it will be understood that any suitable number of bits may be used for this purpose in other implementations. For example, in a real-world implementation where, for instance, original depth values are represented using 32 bits, a smaller number of bits such as 16 bits or 8 bits may be used to define compressed depth values.”, [0063], “system 100 may divide an original depth representation into different sections, determine different respective depth ranges needed to represent each section (e.g., depth ranges that are smaller than may be needed to represent the entire depth representation), and maximize the use of the reduced number of bits for each section by using the bits to separately cover the different ranges of each section. Examples of different sections and how a reduced number of depth bits may be separately used to preserve depth precision and/or accuracy in compressed depth representations will be described and illustrated in more detail below”).
Dimitrov, Liu, and Shadik are all considered to be analogous to the claimed invention because they are in the same field of image rendering. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Dimitrov and Liu to incorporate the teachings of Shadik wherein the obtaining a second bit depth value comprises: obtaining a third bit depth value, wherein the third bit depth value is used to represent a bit depth of rendering information required for encoding optimization of the encoder; and determining the second bit depth value based on the first bit depth value and the third bit depth value. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to use less data, which is thus more manageable and easier to transmit (Shadik, [0058]).
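A minimal sketch of one plausible reading of this limitation follows; the min() rule for combining the first and third bit depth values, and all names, are assumptions made for illustration only, not teachings of Shadik:

```python
def determine_second_bit_depth(first_bit_depth, third_bit_depth):
    """Carry no more precision than the source provides (first bit
    depth) and no more than the encoder's optimization needs (third
    bit depth)."""
    return min(first_bit_depth, third_bit_depth)

def reduce_bit_depth(value, first_bit_depth, second_bit_depth):
    """Discard the low-order bits the encoder will not use."""
    return value >> (first_bit_depth - second_bit_depth)

# 32-bit rendering information reduced to the 8 bits the encoder needs.
second = determine_second_bit_depth(first_bit_depth=32, third_bit_depth=8)
compressed = reduce_bit_depth(0xFF000000, 32, second)  # 0xFF
```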
Claims 9, 13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Dimitrov in view of Liu in view of Li et al., (US 2020/0314434 A1), hereinafter referred to as Li.
Claim 9
The combination of Dimitrov in view of Liu discloses the method according to claim 7 (Dimitrov, Fig. 1), wherein the dividing the compressed rendering information into a plurality of information blocks (Dimitrov, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel. This can result in a representation of sixteen bits per 2×2 tile. This is a reduction from the thirty-two bits required to encode an uncompressed 2×2 tile (i.e. utilizing one bit for each filter map connection direction for each pixel in the 2×2 tile). Proceeding to a step 722, the 2×2 tiles determined in the step 720 (or a step 730, as appropriate) are collected into blocks of eight by eight (8×8) of the 2×2 tiles. If the 8×8 block does not contain an edge pixel, then the entire block can be denoted using one bit. Otherwise, the block data is used, without further compression, to denote the pixels. Proceeding to a step 724, a compression algorithm, for example, ZIP, or other compression algorithms, can be applied to the result of the step 722. The method 700 proceeds to a step 632.”) comprises: dividing the compressed rendering information into blocks according to a preset block division manner, to obtain the plurality of information blocks (Dimitrov, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel. This can result in a representation of sixteen bits per 2×2 tile. 
This is a reduction from the thirty-two bits required to encode an uncompressed 2×2 tile (i.e. utilizing one bit for each filter map connection direction for each pixel in the 2×2 tile). Proceeding to a step 722, the 2×2 tiles determined in the step 720 (or a step 730, as appropriate) are collected into blocks of eight by eight (8×8) of the 2×2 tiles. If the 8×8 block does not contain an edge pixel, then the entire block can be denoted using one bit. Otherwise, the block data is used, without further compression, to denote the pixels. Proceeding to a step 724, a compression algorithm, for example, ZIP, or other compression algorithms, can be applied to the result of the step 722. The method 700 proceeds to a step 632.”).
The combination of Dimitrov in view of Liu does not explicitly disclose wherein the block division manner is a manner in which the encoder divides the rendered image into a plurality of macroblocks.
However, Li teaches wherein the block division manner is a manner in which the encoder divides the rendered image into a plurality of macroblocks (Li, “The compression provided by the encoder 126 is typically lossy, so the output compressed video information lacks some of the information present in the original, rendered and uncompressed video information. The video information is typically divided into frames, and the frames are sometimes divided into macroblock, or blocks. Due to the lossy characteristic of compression, the encoder 126 determines which information of the original, rendered and uncompressed video information to remove while minimizing visual quality degradation of the scene depicted on a display device as viewed by a user. For example, the encoder 126 determines which regions of the block or the frame video information to compress with higher compression ratios and which regions to compress with lower compression ratios. In addition, the compression algorithms track the amount of data used to represent the video, which is determined by the bitrate, while also tracking the storage levels of buffers storing the compressed video information to avoid underflow and overflow conditions. Accordingly, the encoder 126 faces many challenges to support compression of the received, rendered video information while achieving a target compression ratio, minimizing latency of video transmission, preventing overflow and underflow conditions of buffers storing output data, and maximizing user subjective image quality on a display device.”).
Dimitrov, Liu, and Li are all considered to be analogous to the claimed invention because they are in the same field of image rendering. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Dimitrov and Liu to incorporate the teachings of Li wherein the block division manner is a manner in which the encoder divides the rendered image into a plurality of macroblocks. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to minimize latency of video transmission, prevent overflow and underflow conditions of buffers storing output data, and maximize user subjective image quality on a display device (Li [0035]).
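The macroblock division discussed above can be sketched as follows; this is a hypothetical raster-order divider, and the 16×16 default merely mirrors common encoder practice rather than any cited reference:

```python
def divide_into_macroblocks(width, height, mb_size=16):
    """Enumerate (x, y, w, h) macroblocks in raster order, clipping
    partial blocks at the right and bottom edges of the image."""
    blocks = []
    for y in range(0, height, mb_size):
        for x in range(0, width, mb_size):
            blocks.append((x, y,
                           min(mb_size, width - x),
                           min(mb_size, height - y)))
    return blocks

# A 48x32 rendered image divides into a 3x2 grid of 16x16 macroblocks.
blocks = divide_into_macroblocks(48, 32)  # 6 blocks
```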
Claim 13
The combination of Dimitrov in view of Liu discloses the method according to claim 12 (Dimitrov, Fig. 1), wherein the dividing the compressed rendering information into a plurality of information blocks (Dimitrov, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel. This can result in a representation of sixteen bits per 2×2 tile. This is a reduction from the thirty-two bits required to encode an uncompressed 2×2 tile (i.e. utilizing one bit for each filter map connection direction for each pixel in the 2×2 tile). Proceeding to a step 722, the 2×2 tiles determined in the step 720 (or a step 730, as appropriate) are collected into blocks of eight by eight (8×8) of the 2×2 tiles. If the 8×8 block does not contain an edge pixel, then the entire block can be denoted using one bit. Otherwise, the block data is used, without further compression, to denote the pixels. Proceeding to a step 724, a compression algorithm, for example, ZIP, or other compression algorithms, can be applied to the result of the step 722. The method 700 proceeds to a step 632.”) comprises: dividing the rendering information into blocks according to a preset block division manner, to obtain the plurality of information blocks (Dimitrov, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel. This can result in a representation of sixteen bits per 2×2 tile. 
This is a reduction from the thirty-two bits required to encode an uncompressed 2×2 tile (i.e. utilizing one bit for each filter map connection direction for each pixel in the 2×2 tile). Proceeding to a step 722, the 2×2 tiles determined in the step 720 (or a step 730, as appropriate) are collected into blocks of eight by eight (8×8) of the 2×2 tiles. If the 8×8 block does not contain an edge pixel, then the entire block can be denoted using one bit. Otherwise, the block data is used, without further compression, to denote the pixels. Proceeding to a step 724, a compression algorithm, for example, ZIP, or other compression algorithms, can be applied to the result of the step 722. The method 700 proceeds to a step 632.”).
The combination of Dimitrov in view of Liu does not explicitly disclose wherein the block division manner is a manner in which the encoder divides the rendered image into a plurality of macroblocks.
However, Li teaches wherein the block division manner is a manner in which the encoder divides the rendered image into a plurality of macroblocks (Li, “The compression provided by the encoder 126 is typically lossy, so the output compressed video information lacks some of the information present in the original, rendered and uncompressed video information. The video information is typically divided into frames, and the frames are sometimes divided into macroblock, or blocks. Due to the lossy characteristic of compression, the encoder 126 determines which information of the original, rendered and uncompressed video information to remove while minimizing visual quality degradation of the scene depicted on a display device as viewed by a user. For example, the encoder 126 determines which regions of the block or the frame video information to compress with higher compression ratios and which regions to compress with lower compression ratios. In addition, the compression algorithms track the amount of data used to represent the video, which is determined by the bitrate, while also tracking the storage levels of buffers storing the compressed video information to avoid underflow and overflow conditions. Accordingly, the encoder 126 faces many challenges to support compression of the received, rendered video information while achieving a target compression ratio, minimizing latency of video transmission, preventing overflow and underflow conditions of buffers storing output data, and maximizing user subjective image quality on a display device.”).
Dimitrov, Liu, and Li are all considered to be analogous to the claimed invention because they are in the same field of image rendering. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Dimitrov and Liu to incorporate the teachings of Li wherein the block division manner is a manner in which the encoder divides the rendered image into a plurality of macroblocks. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to minimize latency of video transmission, prevent overflow and underflow conditions of buffers storing output data, and maximize user subjective image quality on a display device (Li [0035]).
Claim 15
The combination of Dimitrov in view of Liu in view of Li discloses the method according to claim 13 (Dimitrov, Fig. 1), wherein the transmitting the plurality of information blocks to the encoder separately (Dimitrov, [0141], “A low resolution color buffer 910 can be determined and then passed to a video encoder 912. A high resolution depth buffer 915 can be determined and then the depth buffer can be partitioned to create a filter map 917. The result can then be passed to a lossless encoder.”, the filter map is analogous to the compressed rendering information, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel.”) comprises: transmitting the plurality of information blocks to an analysis module (Liu, Fig. 1, Bit-rate allocation module, Dimitrov teaches compressing the rendering information prior to transmitting to an encoder) separately, so that the analysis module analyzes the plurality of information blocks, (Liu, Section III, “The proposed rendering-based prioritized encoding technique will first utilize rendering information such as pixel depth to compute the importance of different regions of game frame, convert this rendering information to an MB-level saliency map, and finally, find the optimal encoding parameter [in this paper, we choose quantization parameter (QP)] of each MB”, Dimitrov teaches compressing the rendering information prior to transmitting to an encoder, each rendering information is converted to macroblock level saliency map), determines encoding optimization information corresponding to each of the plurality of information blocks separately, (Liu, Section III, “The task of finding the optimal QP values for each MB is formed as an optimization problem. 
The optimal QP values are selected such that given the available bandwidth limit (bit rate budget), the perceptual video quality is maximized and the resulting video bit rate does not exceed the bandwidth limit.”, each rendering information is converted to macroblock level saliency map), and transmits the encoding optimization information to the encoder (Liu, Fig. 1, the output of Bit-rate allocation module layer is transmitted to Quantization), and wherein the encoding optimization information is used by the encoder to perform encoding optimization in a process of encoding the macroblocks (Liu, Section III, “Moreover, the selected bit rate target and the generated MB-level saliency map will be used as inputs to a bit rate allocation algorithm, which is responsible for deciding the optimal QP of each MB, such that the overall perceptual video quality is maximized and the encoding bit rate target is met. Finally, the output of the bit rate allocation module, a set of the QP values for each MB, will be passed to the quantization module of the encoder and the game frame will be encoded using these QP values.”, Section I, “which uses the homogeneity of MVs to reduce the number of candidate macroblock (MB) modes that need to be tested in the rate–distortion optimization (RDO) process, ultimately reducing the computational complexity of video encoding.”). The proposed combination as well as the motivation for combining the Dimitrov, Liu, and Li references presented in the rejection of Claim 13 apply to Claim 15 and are incorporated herein by reference. Thus, the method recited in Claim 15 is met by Dimitrov, Liu, and Li.
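The saliency-to-QP allocation that Liu describes can be sketched with a hypothetical linear mapping; the QP range and the linear rule are illustrative assumptions, not Liu's actual rate-constrained optimization:

```python
def allocate_qp(saliency, qp_min=20, qp_max=40):
    """Map each macroblock's saliency in [0, 1] to a quantization
    parameter: more salient blocks get finer quantization (lower QP)."""
    return [round(qp_max - s * (qp_max - qp_min)) for s in saliency]

# Background, highly salient, and mid-salience macroblocks.
qps = allocate_qp([0.0, 1.0, 0.5])  # [40, 20, 30]
```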
Claims 10 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Dimitrov in view of Liu in view of Li in view of Tung et al., (US 2010/0226441 A1), hereinafter referred to as Tung.
Claim 10
The combination of Dimitrov in view of Liu in view of Li discloses the method according to claim 9 (Dimitrov, Fig. 1), wherein the transmitting the plurality of information blocks separately (Dimitrov, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel. This can result in a representation of sixteen bits per 2×2 tile. This is a reduction from the thirty-two bits required to encode an uncompressed 2×2 tile (i.e. utilizing one bit for each filter map connection direction for each pixel in the 2×2 tile). Proceeding to a step 722, the 2×2 tiles determined in the step 720 (or a step 730, as appropriate) are collected into blocks of eight by eight (8×8) of the 2×2 tiles. If the 8×8 block does not contain an edge pixel, then the entire block can be denoted using one bit. Otherwise, the block data is used, without further compression, to denote the pixels. Proceeding to a step 724, a compression algorithm, for example, ZIP, or other compression algorithms, can be applied to the result of the step 722. The method 700 proceeds to a step 632.”).
The combination of Dimitrov in view of Liu in view of Li does not explicitly disclose determining a transmission sequence of the plurality of information blocks according to a preset encoding sequence, wherein the encoding sequence is an encoding sequence set by the encoder for the plurality of macroblocks; and sequentially transmitting the plurality of information blocks according to the transmission sequence corresponding to the plurality of information blocks.
However, Tung teaches determining a transmission sequence of the plurality of information blocks according to a preset encoding sequence (Tung, [0125], “a temporal frame mode may be provided in which each client frame may occupy one time slot of the server frame sequence and one frame may be provided to the encoding engine at one time. In this embodiment, each client may have its own update/refresh rate. Each screen may further be embedded with information describing which client the frame is destined for. For example, a client with minimal updates may be relatively idle and may only need a low refresh rate. Clients with high update rates, for example a client playing a video, may be captured by being provided more time slots. For example, referring to FIG. 22, each of frames 2200 may represent a single capture frame of a plurality of capture frames. The individual frames may be apportioned to various clients in order to support refresh rates supporting the type and nature of the client activity. Referring to FIG. 23, the individual frames of frame sequence 2300 may be apportioned between frames for client 1 2330, client 2 2310, and client 3 2320. For example, frames 1-1, 1-2, and 1-3 of client 1 2330 maybe assigned to frames 1, 2, and 3 of frame sequence 2300. Frames 2-1 and 2-2 of client 2 2310 may be assigned to frames 7 and 8 of frame sequence 2300. Finally, frames 3-1, 3-2, and 3-3 of client 3 2320 may be assigned to frames 4, 5, and 6 of frame sequence 2300.”), wherein the encoding sequence is an encoding sequence set by the encoder for the plurality of macroblocks (Tung, Fig. 23, Li teaches that the data are divided into macroblocks); and sequentially transmitting the plurality of information blocks according to the transmission sequence corresponding to the plurality of information blocks (Tung, Fig. 
23, [0125], “a temporal frame mode may be provided in which each client frame may occupy one time slot of the server frame sequence and one frame may be provided to the encoding engine at one time. In this embodiment, each client may have its own update/refresh rate. Each screen may further be embedded with information describing which client the frame is destined for. For example, a client with minimal updates may be relatively idle and may only need a low refresh rate. Clients with high update rates, for example a client playing a video, may be captured by being provided more time slots. For example, referring to FIG. 22, each of frames 2200 may represent a single capture frame of a plurality of capture frames. The individual frames may be apportioned to various clients in order to support refresh rates supporting the type and nature of the client activity. Referring to FIG. 23, the individual frames of frame sequence 2300 may be apportioned between frames for client 1 2330, client 2 2310, and client 3 2320. For example, frames 1-1, 1-2, and 1-3 of client 1 2330 maybe assigned to frames 1, 2, and 3 of frame sequence 2300. Frames 2-1 and 2-2 of client 2 2310 may be assigned to frames 7 and 8 of frame sequence 2300. Finally, frames 3-1, 3-2, and 3-3 of client 3 2320 may be assigned to frames 4, 5, and 6 of frame sequence 2300.”).
Dimitrov, Liu, Li, and Tung are all considered to be analogous to the claimed invention because they are in the same field of image rendering. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Dimitrov, Liu, and Li to incorporate the teachings of Tung of determining a transmission sequence of the plurality of information blocks according to a preset encoding sequence, wherein the encoding sequence is an encoding sequence set by the encoder for the plurality of macroblocks; and sequentially transmitting the plurality of information blocks according to the transmission sequence corresponding to the plurality of information blocks. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the rendering and management of client desktops and the subsequent transmission to the remote client (Tung, Abstract).
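A hypothetical sketch of ordering information blocks by a preset encoding sequence follows; the names are illustrative only and are not drawn from Tung or Li:

```python
def transmission_order(info_blocks, encoding_sequence):
    """Reorder information blocks so they are transmitted in the
    order the encoder is set to encode the corresponding macroblocks."""
    return [info_blocks[i] for i in encoding_sequence]

# Blocks produced in raster order, transmitted in the encoder's order.
ordered = transmission_order(["mb0", "mb1", "mb2"], [2, 0, 1])
```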
Claim 14
The combination of Dimitrov in view of Liu in view of Li discloses the method according to claim 13 (Dimitrov, Fig. 1), wherein the transmitting the plurality of information blocks to the encoder separately (Dimitrov, [0123], “a connectivity bitmap can be created. This bitmap can indicate the connections between color pixels within the color buffer. The bitmap can be partitioned into 2×2 tiles of color pixels. The partitioned bitmap can then be encoded, for example, by using twelve bits with four additional bits indicating whether the pixel is an edge pixel. This can result in a representation of sixteen bits per 2×2 tile. This is a reduction from the thirty-two bits required to encode an uncompressed 2×2 tile (i.e. utilizing one bit for each filter map connection direction for each pixel in the 2×2 tile). Proceeding to a step 722, the 2×2 tiles determined in the step 720 (or a step 730, as appropriate) are collected into blocks of eight by eight (8×8) of the 2×2 tiles. If the 8×8 block does not contain an edge pixel, then the entire block can be denoted using one bit. Otherwise, the block data is used, without further compression, to denote the pixels. Proceeding to a step 724, a compression algorithm, for example, ZIP, or other compression algorithms, can be applied to the result of the step 722. The method 700 proceeds to a step 632.”).
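For illustration only, and not part of the record, the bit accounting described in the quoted Dimitrov passage might be sketched as follows. The function name and the per-block tile count are assumptions; the passage itself states only that a 2×2 tile takes sixteen bits (versus thirty-two uncompressed) and that an 8×8 block of tiles with no edge pixel collapses to a single bit:

```python
def block_bits(tiles_with_edges: int) -> int:
    """Bits for one 8x8 block of 2x2 tiles (64 tiles per block, as in
    Dimitrov [0123]), before the final ZIP pass of step 724.

    A block containing no edge pixel is denoted by a single bit;
    otherwise the block data is kept at 16 bits per 2x2 tile.
    """
    if tiles_with_edges == 0:
        return 1
    return 64 * 16  # 1024 bits, versus 64 * 32 = 2048 uncompressed

# Edge-free blocks collapse to one bit; blocks with any edge keep full data.
print(block_bits(0))  # 1
print(block_bits(5))  # 1024
```

The sketch captures only the bit counts recited in the quotation; the actual connectivity encoding of step 720 is not reproduced here.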
The combination of Dimitrov in view of Liu in view of Li does not explicitly disclose determining a transmission sequence of the plurality of information blocks according to a preset encoding sequence, wherein the encoding sequence is an encoding sequence set by the encoder for the plurality of macroblocks; and sequentially transmitting the plurality of information blocks according to the transmission sequence corresponding to the plurality of information blocks.
However, Tung teaches determining a transmission sequence of the plurality of information blocks according to a preset encoding sequence (Tung, [0125], “a temporal frame mode may be provided in which each client frame may occupy one time slot of the server frame sequence and one frame may be provided to the encoding engine at one time. In this embodiment, each client may have its own update/refresh rate. Each screen may further be embedded with information describing which client the frame is destined for. For example, a client with minimal updates may be relatively idle and may only need a low refresh rate. Clients with high update rates, for example a client playing a video, may be captured by being provided more time slots. For example, referring to FIG. 22, each of frames 2200 may represent a single capture frame of a plurality of capture frames. The individual frames may be apportioned to various clients in order to support refresh rates supporting the type and nature of the client activity. Referring to FIG. 23, the individual frames of frame sequence 2300 may be apportioned between frames for client 1 2330, client 2 2310, and client 3 2320. For example, frames 1-1, 1-2, and 1-3 of client 1 2330 maybe assigned to frames 1, 2, and 3 of frame sequence 2300. Frames 2-1 and 2-2 of client 2 2310 may be assigned to frames 7 and 8 of frame sequence 2300. Finally, frames 3-1, 3-2, and 3-3 of client 3 2320 may be assigned to frames 4, 5, and 6 of frame sequence 2300.”), wherein the encoding sequence is an encoding sequence set by the encoder for the plurality of macroblocks (Tung, Fig. 23, Li teaches that the data are divided into macroblocks); and sequentially transmitting the plurality of information blocks according to the transmission sequence corresponding to the plurality of information blocks (Tung, Fig. 
23, [0125], “a temporal frame mode may be provided in which each client frame may occupy one time slot of the server frame sequence and one frame may be provided to the encoding engine at one time. In this embodiment, each client may have its own update/refresh rate. Each screen may further be embedded with information describing which client the frame is destined for. For example, a client with minimal updates may be relatively idle and may only need a low refresh rate. Clients with high update rates, for example a client playing a video, may be captured by being provided more time slots. For example, referring to FIG. 22, each of frames 2200 may represent a single capture frame of a plurality of capture frames. The individual frames may be apportioned to various clients in order to support refresh rates supporting the type and nature of the client activity. Referring to FIG. 23, the individual frames of frame sequence 2300 may be apportioned between frames for client 1 2330, client 2 2310, and client 3 2320. For example, frames 1-1, 1-2, and 1-3 of client 1 2330 maybe assigned to frames 1, 2, and 3 of frame sequence 2300. Frames 2-1 and 2-2 of client 2 2310 may be assigned to frames 7 and 8 of frame sequence 2300. Finally, frames 3-1, 3-2, and 3-3 of client 3 2320 may be assigned to frames 4, 5, and 6 of frame sequence 2300.”).
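For illustration only, and not part of the record, the slot apportionment recited in the quoted Tung passage (Fig. 23, [0125]) might be sketched as follows. The dictionary keys are the server frame-sequence slots and the values the client frames, exactly as assigned in the quotation; the function name is an assumption:

```python
# Server slot -> (client, client frame), per the assignments quoted
# from Tung Fig. 23: client 1 gets slots 1-3, client 3 slots 4-6,
# and client 2 slots 7-8.
frame_sequence = {
    1: ("client 1", "1-1"), 2: ("client 1", "1-2"), 3: ("client 1", "1-3"),
    4: ("client 3", "3-1"), 5: ("client 3", "3-2"), 6: ("client 3", "3-3"),
    7: ("client 2", "2-1"), 8: ("client 2", "2-2"),
}

def transmission_order(sequence):
    """Emit client frames in the preset slot (encoding) order."""
    return [sequence[slot] for slot in sorted(sequence)]

# Frames are transmitted sequentially in slot order, so a busier
# client (more slots) is effectively served at a higher refresh rate.
print(transmission_order(frame_sequence)[0])  # ('client 1', '1-1')
```

The sketch shows only the mapping between client frames and sequence slots; how Tung's encoding engine selects refresh rates per client is not modeled.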
Dimitrov, Liu, Li, and Tung are all considered to be analogous to the claimed invention because they are in the same field of image rendering. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Dimitrov, Liu, and Li to incorporate the teachings of Tung of determining a transmission sequence of the plurality of information blocks according to a preset encoding sequence, wherein the encoding sequence is an encoding sequence set by the encoder for the plurality of macroblocks; and sequentially transmitting the plurality of information blocks according to the transmission sequence corresponding to the plurality of information blocks. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the rendering and management of client desktops and the subsequent transmission to the remote client (Tung, Abstract).
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENISE G ALFONSO whose telephone number is (571)272-1360. The examiner can normally be reached Monday - Friday 7:30 - 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENISE G ALFONSO/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662