DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-9, 18, 20 and 25-33 are pending. Previously presented claims 29 and 30 have been renumbered as claims 28 and 29 by the applicant, and new claims 30-33 have been added by the applicant.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 25 and 26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (abstract idea) without significantly more.
Regarding claims 25 and 26, under Step 2A, the claims recite a judicial exception (abstract idea) that is not integrated into a practical application and do not provide significantly more.
Under Step 2A (prong 1), and taking claim 25 as representative, claim 25 recites:
“evaluate resource state information based on a policy, wherein the resource state information includes at least one of: an attribute related to a source image to be encoded…” (a mental process of a person making an evaluation); and
“determining… to change from a second encoder… to a first encoder… based on the evaluation” (a mental process of a person making a judgment/determination of which encoder to use).
These limitations, as drafted at such a high level of generality, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the human mind with the aid of a computer (see MPEP 2106.04(a)(2), subsection III.c). For example, as mapped by the examiner above, the various limitations, in the context of this claim, encompass a person performing the limitations with the aid of a generic computer. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Under Step 2A (prong 2) and Step 2B, claim 25 contains the additional elements of:
a non-transitory tangible computer-readable medium comprising instructions when executed cause a processor of an electronic device to…
wherein the attribute is video content metadata;…
Electronic device [that is used to perform the evaluation and determination and is host to the first and second encoder],
wherein the first encoder is to encode a desktop display image sequence; and
communicating, by the electronic device, an encoded desktop display image sequence.
However, when evaluated either individually or in combination, these elements do not integrate the above-mentioned abstract idea into a practical application, nor do they amount to significantly more than the exception itself. In particular, the additional elements of “a non-transitory tangible computer-readable medium comprising instructions when executed cause a processor of an electronic device to; …wherein the attribute is video content metadata… Electronic device [that is used to perform the evaluation and determination and is host to the first and second encoder]… wherein the first encoder is to encode a desktop display image sequence…” cited above are recited at a high level of generality (i.e., as a generic computer in a generic computing environment that contains video content to be displayed) such that, either alone or in combination, they amount to nothing more than generally linking the use of a judicial exception to a particular technological environment (MPEP 2106.05(h)). The additional element of “communicating, by the electronic device, an encoded desktop display image sequence” is recited at a high level of generality (i.e., as extra-solution activity of data output). As such, either alone or in combination, the additional elements amount to nothing more than generally linking the use of the judicial exception/abstract idea to a particular technological environment and insignificant extra-solution activity, and thus do not integrate the judicial exception/abstract idea into a practical application, nor do they provide significantly more than the abstract idea itself (see MPEP 2106.05(g)). Therefore, the judicial exception/abstract idea identified above is not integrated into a practical application, nor does the claim include any additional elements that are sufficient to amount to significantly more than the judicial exception/abstract idea. As such, the claim is not patent eligible.
Claim 26 is not patent eligible because it recites further complexities descriptive of the abstract idea itself and at least inherits the abstract idea of claim 25. As such, claim 26 is understood to recite an abstract idea under Step 2A (prong 1) for at least reasons similar to those discussed above. Claim 26 recites additional examples of computer properties that can be considered resource state information. Under Step 2A (prong 2) and Step 2B, the additional elements of claim 26, when considered both individually and as a whole, do not integrate the abstract idea into a practical application, nor do they amount to significantly more than the abstract idea itself, because the additional elements are recited at a high level of generality (i.e., as additional generic functions that can be performed by a generic computer and additional computing properties that can be recorded) such that, either alone or in combination, they amount to nothing more than generally linking the use of a judicial exception to a particular technological environment (MPEP 2106.05(h)). As such, the claim is not patent eligible.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-9, 18, 20 and 25-33 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.
The following claim language lacks antecedent basis:
Claim 1: line 11, “the GPU”.
The following claim language is unclear and indefinite:
As per claims 1 and 25, it is not clear whether the “source image” contains the “desktop display image sequence”.
As per claim 25, it is not clear whether the “first” and “second” “encoders” are physical entities or different encoding algorithms.
It is not clear where the “encoded desktop display image sequence” comes from (e.g., whether it is an output of encoding of the “desktop display image sequence” by the “first encoder” or the “second encoder”, or some previously stored “encoded desktop display image sequence” that is not processed by either encoder).
The dependent claims do not cure the 112(b) issues of their respective parent claims. Therefore, they are rejected for the same reasons as those presented for their respective parent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8, 9, 18, 20 and 25-33 are rejected under 35 U.S.C. 103 as being unpatentable over Doucette et al. (U.S. Patent No. 9,582,272) in view of Ganesh et al. (U.S. Patent No. 12,101,475).
The Doucette and Ganesh references have been previously presented.
As per claim 1 Doucette teaches the invention substantially as claimed including a method, comprising: evaluating, by an electronic device, resource state information based on a policy that determines encoder selection, wherein the resource state information includes an attribute related to a source image to be encoded (Figs. 2, 6, 9; col 11 lines 50-56, col 12 lines 1-26, col 10 line 60 – col 11 line 15; col 6 lines 55-58, col 7 lines 18-22 thresholds for various requirements or system changes or quality demands are used to determine whether to switch from codec/encoder of a VM to a hardware codec/encoder of a UI session processor; the thresholds are evaluated based on various resource state information such as: data bandwidth, bit rate, insufficient processing resources, increasing image complexity, increased loading of CPU, advertised availability of the UI session processor, change in remote computing attributes); encoding, by the electronic device with a second encoder executing on the host processor, first frames of a desktop display image sequence for the client device (col 11 lines 49-61, col 3 lines 39-51 software image processing codec of the VM processes display image data for a remote computer);
determining, by the electronic device, to change from the second encoder executing on the host processor to a first encoder included in the electronic device based on the evaluation, wherein the first encoder is a hardware encoder (col 3 lines 59-66, col 6 lines 48-54, col 7 lines 18-22 UI session processor is a physically distinct PCI-EXPRESS component that contains codecs that are used to process images, therefore it is a type of hardware encoder); encoding, by the electronic device with the first encoder, second frames of the desktop display image sequence for the client device (col 10 lines 38-44, col 12 lines 1-6, col 6 lines 55-58, col 7 lines 18-22 encoding of future desktop display images of a remote computer may be switched to the UI session processor); and communicating, by the electronic device to the client device over a communication network, the first frames of the desktop image sequence as encoded by the second encoder and the second frames of the desktop display image sequence as encoded by the first encoder, an encoded desktop display image sequence (col 11 lines 49-61 display image data that are processed by the software image processing codec of the VM are communicated over the network to a remote computer; col 12 lines 39-47 display image data that are processed by the UI session processor are also communicated over the network to the remote computer).
Doucette does not explicitly teach that the attribute related to a source image to be encoded can be video content metadata, or that the hardware encoder is part of the GPU.
However, Ganesh teaches that the attribute related to a source image to be encoded can be video content metadata (col 5 lines 1-24, col 12 lines 1-19; col 13 lines 13-28 determination to either use a hardware encoder or a software encoder is made based on analysis of quality/latency complexity, temporal level or hierarchy placement of the received picture), and that the hardware encoder is part of the GPU (col 7 lines 58-60).
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the invention, to combine the teachings of Doucette and Ganesh because both are directed towards encoding of streamed image content using one or more encoders. One of ordinary skill in the art would be motivated to incorporate the teachings of Ganesh into those of Doucette because Ganesh further improves the efficiency of encoding streamed image content using one or more encoders (col 1 lines 1-38).
As per claim 2 Doucette teaches wherein evaluating the resource state information comprises evaluating a pixel change rate based on a pixel rate threshold (col 12 lines 10-18, 67: an image complexity increase or decrease is a factor that would cause a bandwidth threshold to be exceeded).
As per claim 3 Doucette teaches wherein evaluating the resource state information comprises evaluating a network bit rate based on a bitrate threshold (col 12 lines 10-12 bandwidth thresholds can be exceeded based on bitrate).
As per claim 4 Doucette teaches wherein evaluating the resource state information comprises evaluating a host processor utilization based on a host processor utilization threshold (col 12 lines 14-21 bandwidth thresholds can be exceeded based on insufficient processing resources or increased loading of the CPU by other software).
As per claim 5 Doucette teaches wherein evaluating the resource state information comprises evaluating a GPU reservation state (col 12 lines 10-16, col 3 lines 19-24 processing resources, which can be CPU and GPU, can be determined to be insufficient).
As per claim 8 Doucette teaches wherein a client device is to blend second content associated with the second encoder and first content associated with the first encoder during a transition stage (col 17 line 59 – col 18 line 6: image encoders 920 or 930 are engaged on a per-frame or per-section basis and the output is transmitted to the remote computer; this means a first frame or section could be processed by a first encoder and a following second frame or section could be processed by a second encoder, and both frames or sections are sent to the same remote computer and combined at the remote computer).
As per claim 9 Doucette teaches wherein blending comprises temporal dithering or a spatial combination of the first content and the second content (col 17 line 59 – col 18 line 6 frames or sections that are to be combined at the remote computer are separated temporally; and a video window within an image frame is forwarded to the remote computer without re-encoding, which means that the video window is spatially combined with the rest of the images in the frame).
As per claim 18 Doucette teaches wherein evaluating, by the electronic device, the resource state information based on the policy includes evaluating an attribute of the desktop display image sequence (col 17 line 62 - col 18 line 2).
As per claim 20 Doucette teaches wherein evaluating, by the electronic device, the resource state information based on the policy includes evaluating at least one of a processor metric of a processor included in the electronic device or state information of a GPU included in the electronic device (col 12 lines 10-16, col 3 lines 19-24 processing resources, which can be CPU and GPU, can be determined to be insufficient).
As per claim 25, it is a much broader, reworded product version of method claim 1. Therefore, it is rejected for the same reasons, mutatis mutandis, as those presented for claim 1.
As per claim 26 Doucette teaches wherein the resource state information includes an attribute of the desktop display image sequence, the GPU state information for a GPU of the electronic device, and the client state information for the client device (col 12 lines 1-26, col 3 lines 19-24; col 17 line 62 - col 18 line 2).
As per claim 27 Doucette teaches wherein determining, by the electronic device, to change from the second encoder included in the electronic device to the first encoder included in the electronic device includes determining, by the electronic device, that the resource state information satisfies a threshold of the policy (col 12 lines 6-15, col 11 lines 3-5).
As per claim 28 Doucette does not explicitly teach wherein evaluating, by the electronic device, the resource state information based on the policy includes evaluating, by the electronic device, the resource state information based on a session policy table storing a threshold parameter that controls mode transition between live image encoding and video image encoding on a per-display basis.
However, Ganesh teaches wherein evaluating, by the electronic device, the resource state information based on the policy includes evaluating, by the electronic device, the resource state information based on a session policy table storing a threshold parameter that controls mode transition between live image encoding and video image encoding on a per-display basis (col 9 lines 36-67, col 10 lines 1-15, 30-54, col 11 lines 9-40 various policies, in the form of various target requirements such as target video quality level and target operation latency, are stored in multiple tables; these requirements are a form of thresholds that are used to decide whether hardware, software or hybrid encoders are to be used to encode different types of content, such as live streaming content… video on demand content, at the clip, group of pictures, picture, block or pixel level).
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the invention, to combine the teachings of Doucette and Ganesh because both are directed towards encoding of streamed image content using one or more encoders. One of ordinary skill in the art would be motivated to incorporate the teachings of Ganesh into those of Doucette because Ganesh further improves the efficiency of encoding streamed image content using one or more encoders (col 1 lines 1-38).
As per claim 29 Ganesh teaches wherein evaluating, by the electronic device, the resource state information based on the policy includes evaluating, by the electronic device, the resource state information based on a session policy table storing a threshold parameter that controls mode transition between the first encoder and the second encoder (col 9 lines 36-67, col 10 lines 1-15, 30-54, col 11 lines 9-40).
As per claim 30 Ganesh teaches wherein the video content metadata includes at least one of a frame rate, chroma format, or dynamic range format (col 5 lines 1-24 quality/latency of video contents to be encoded determines how fast frames of the video contents can be generated, which determines a frame rate).
As per claim 31 Doucette as modified by Ganesh teaches wherein the communicating, by the electronic device, of the first frames and the second frames occurs during a transition mode for transitioning from the second encoder to the first encoder, and wherein the method further (Doucette col 11 lines 49-61, col 12 lines 1-45 after some first frames, processed by software image processing codec of the VM, are communicated to a remote computer, a switch can be made to transition processing of second set of frames to a UI session processor, which will communicate the second set of frames to the remote computer) comprises: priming the first encoder, based on state information of the second encoder, in the transition mode, wherein the priming occurs before the encoding, by the electronic device with the first encoder, of the second frames of the desktop display image sequence for the client device (Ganesh col 15 lines 33-67, col 16 lines 41-67, col 17 lines 1-44 software encoder is used to perform some previous stages of encoding in order to prepare images for further operations of later stages of encoding by hardware encoders).
As per claim 32 Doucette as modified by Ganesh teaches wherein the first frames and the second frames of the desktop display image sequence are part of a quantity of frames for encoding during a transition period (Doucette col 11 lines 49-61, col 12 lines 1-45 after some first frames, processed by software image processing codec of the VM, are communicated to a remote computer, a switch can be made to transition processing of a second set of frames to a UI session processor, which will communicate the second set of frames to the remote computer), and wherein the encoding of the first frames with the second encoder and the encoding of the second frames with the first encoder comprises the first encoder and the second encoder both being engaged concurrently to perform encoding of the quantity of frames when transitioning from the first encoder to the second encoder (Doucette col 17 lines 32-37, Fig. 9: different portions of image frames [are] to be processed by either the software encoder of a VM or the hardware encoder of the UI session processor; Ganesh col 15 lines 33-67, col 16 lines 41-67, col 17 lines 1-44 a software encoder is used to perform some previous stages of encoding in order to prepare images for further operations of later stages of encoding by hardware encoders; this means that at any point of encoding of a set of image blocks, both the software encoder and the hardware encoder could be active at the same time, each performing different stages of encoding).
As per claim 33 Doucette teaches further comprising: signaling, by the electronic device, a mode control service installed at the client device to apply blending between the decoded first frames and the decoded second frames (col 3 lines 50-58, col 5 lines 13-19, 46-58; col 17 lines 32-37, Fig. 9 the image decoder of the remote computer is responsible for decoding compressed display image data received from the host computer, which can contain portions processed by the software encoder of a VM or the hardware encoder of a UI session processor; this means that the image decoder is obviously instructed on how to combine the received display image data).
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Doucette et al. (U.S. Patent No. 9,582,272) in view of Ganesh et al. (U.S. Patent No. 12,101,475) and in further view of Flodman et al. (WO 2012/154157).
The Doucette, Ganesh and Flodman references have been previously presented.
As per claim 6 Doucette does not explicitly teach wherein evaluating the resource state information comprises evaluating a client processor utilization based on a client processor utilization threshold.
However, Flodman teaches wherein evaluating the resource state information comprises evaluating a client processor utilization based on a client processor utilization threshold ([0043], [0045]-[0050] resource utilization, of receiving stations, during decoding of received frames is evaluated in order to request a new encoding scheme to be used by transmitting station).
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the invention, to combine the teachings of Doucette and Flodman because both are directed towards remote distribution of encoded video content. One of ordinary skill in the art would be motivated to incorporate the teachings of Flodman into those of Doucette because Flodman further improves the efficiency of remote distribution of encoded video content by taking the resource utilization of receivers of video content into consideration ([0002], [0005]).
As per claim 7 Flodman teaches wherein evaluating the resource state information comprises evaluating a client memory bandwidth utilization based on a client memory bandwidth utilization threshold ([0006], [0045], [0046] memory utilization is compared to a corresponding threshold).
Response to Arguments
Applicant’s arguments regarding the previously presented 35 U.S.C. 103 issues, with respect to all pending claims, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
However, applicant’s arguments regarding the previously presented 35 U.S.C. 101 issues with respect to currently amended claim 1 do not apply to currently amended claims 25 and 26, since the arguments are primarily based on amendments to claim 1 that are not present in currently amended claims 25 and 26. As such, the 35 U.S.C. 101 issues remain for claims 25 and 26 (please see the 35 U.S.C. 101 section above).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BING ZHAO whose telephone number is (571) 270-1745. The examiner can normally be reached 9am - 5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trujillo can be reached on (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BING ZHAO/Primary Examiner, Art Unit 2151