DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
1. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
2. Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Karunakaran et al. (Pub. No. US 2010/0214918).
As per claim 1, Karunakaran discloses a method for accelerating computing applications with bus compatible modules (fig.1, IO modules 202), comprising:
by operation of a first module (fig.2, DP 204a) that is bus (fig.2, backplane fabric or interconnection bus 210) compatible with a server system (fig.1, server 110a), receiving network packets that include data for processing (paragraph 40, receives packets (either reverse or forward flow) to send them to the other DP 204), the data being a portion of a larger data set processed by an application (fig.2, application processor 206);
by operation of evaluation circuits of the first module (paragraph 33, lines 4-5, TCP state tracking/monitoring, forwarding of processed packets to IO module(s) (routing)), evaluating header information of the network packets to map network packets to any of a plurality of destinations (fig.2, target DP 204) on the first module, each destination corresponding to at least one of a plurality of offload processors (fig. 2, data processors 204) of the first module (paragraph 38, lines 3-5, the algorithm identifies or "predicts" which target DP 204 would be selected by the IO module 202 when the first packet in the reverse traffic is received at the IO module (e.g., server side IO module));
by operation of the offload processors (fig. 2, data processors 204) of the first module, executing a programmed operation of the application in parallel on multiple offload processors to generate first processed application data (paragraph 31, lines 5-6, application programs executable by the APs 206. Other applications may be utilized.); and
by operation of input/output (I/O) circuits (fig.2, IO modules 202), transporting the first processed application data out of the first module (paragraph 46, lines 2-5, An IO module 202a receives the packet (step 402), performs the DP load balancing algorithm (as previously described) at the IO module 202a and selects one of the DPs 204 as the target DP for processing the forward flow/session (step 404)).
As per claim 11, Karunakaran discloses a system, comprising:
a first module (fig.2, DP 204a), comprising:
a connection that is bus (fig.2, backplane fabric or interconnection bus 210) compatible with a server (fig.1, server 110a) system having a host processor (fig.2, global flow manager);
input/output (I/O) circuits (fig.2, IO modules 202) configured to receive network packets that include data for processing (paragraph 40, receives packets (either reverse or forward flow) to send them to the other DP 204), the data being a portion of a larger data set processed by an application (fig.2, application processor 206), and transport first processed application data out of the first module;
evaluation circuits (paragraph 33, lines 4-5, TCP state tracking/monitoring, forwarding of processed packets to IO module(s) (routing)) configured to evaluate header information of the network packets to map network packets to any of a plurality of destinations on the first module (paragraph 25, lines 9-10, each entry identifies one of the n DPs to handle the processing. As a result, a target DP is determined/selected to process the new flow/session), each destination corresponding to at least one of a plurality of offload processors (fig. 2, data processors 204) of the first module; and
the plurality of offload processors configured to execute a programmed operation of the application in parallel on multiple offload processors to generate the first processed application data (paragraph 46, lines 2-5, An IO module 202a receives the packet (step 402), performs the DP load balancing algorithm (as previously described) at the IO module 202a and selects one of the DPs 204 as the target DP for processing the forward flow/session (step 404)).
As per claims 2 and 12, Karunakaran discloses wherein: the server system includes a host processor (fig.2, global flow manager); and the receiving, evaluation and processing of the network packets and transport of first processed application packets are performed independent of the host processor (paragraph 26, each IO module 202 individually and independently performs the same load-balancing method (as the other IO modules) when a new flow is received).
As per claims 3 and 13, Karunakaran discloses wherein the transport of first processed application data comprises the writing of the processed data to a storage medium (paragraph 54, lines 6-8, selected data processor optionally applies another load balancing algorithm to select one of a plurality of application processors that executes one or more applications in relation to the flow).
As per claims 4 and 14, Karunakaran discloses wherein the transport of first processed application data comprises out-going network packets with destination corresponding to a storage medium on another server system (paragraph 54, lines 8-10, When the destination device for the flow is a server having a particular function and a plurality of such servers are available, the application server selects a server as the destination device).
As per claims 5 and 15, Karunakaran discloses wherein the transport of first processed application data comprises out-going network packets with destination corresponding to a second module on a different server system (paragraph 25, lines 3-5, processor can be a group or cluster of processors such as a group of physical processors operatively coupled to a shared clock or synchronization signal, a shared memory, a shared memory bus, and/or a shared data bus).
As per claims 6 and 16, Karunakaran discloses wherein the transport of first processed application data comprises out-going network packets with destination corresponding to a processor on a different server system (paragraph 54, lines 8-10, When the destination device for the flow is a server having a particular function and a plurality of such servers are available, the application server selects a server as the destination device).
As per claim 7, Karunakaran discloses wherein the programmed operation of the application is an intermediate operation of a sequence of operations of the application (paragraph 54, lines 6-8, selected data processor optionally applies another load balancing algorithm to select one of a plurality of application processors that executes one or more applications in relation to the flow).
As per claims 8 and 18, Karunakaran discloses wherein: the application is a map-reduce application; and the programmed operation is a record reader operation (paragraph 55, lines 2-5, the application processor performs the same load balancing application (as the IO modules) to predict the data processor that would be selected by the IO module(s) when the first packet of the reverse traffic is received).
As per claims 9 and 19, Karunakaran discloses wherein: the application is a map-reduce application; and the programmed operation is a map operation (paragraph 40, lines 1-2, selected data processor optionally applies another load balancing algorithm to select one of a plurality of application processors that executes one or more applications in relation to the flow).
As per claims 10 and 20, Karunakaran discloses the method further including, by operation of the I/O circuits, transmitting network packets identifying the first processed application data to other modules (paragraph 46, lines 2-5, An IO module 202a receives the packet (step 402), performs the DP load balancing algorithm (as previously described) at the IO module 202a and selects one of the DPs 204 as the target DP for processing the forward flow/session (step 404)).
3. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Chowdury et al. (Pub. No. US 2011/0075557) discloses a processor that is configured to inspect a received control plane packet and obtain information from the received control plane packet that is used to determine offload eligibility for traffic corresponding to the received control plane packet.
Conclusion
4. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIM T HUYNH whose telephone number is (571)272-3635 or via e-mail addressed to kim.huynh3@uspto.gov. The examiner can normally be reached on M-F 7:00 AM - 4:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henry Tsai, can be reached at (571)272-4176 or via e-mail addressed to Henry.Tsai@USPTO.GOV.
The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300 for both regular and After Final communications. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is (571)272-2100.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K. T. H./
Examiner, Art Unit 2184
/HENRY TSAI/Supervisory Patent Examiner, Art Unit 2184