DETAILED ACTION
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 06/23/2023 and 09/03/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hildebrand et al. (US 2013/0054808 A1).
Regarding claim 1, Hildebrand teaches an information processing method performed by a first communication device, comprising:
sending a first query request ([0049] The resource manager (870) functions in response to receipt of a workload by the management server (820) from the client machine (810); wherein the client machine (810) is the first communication device);
wherein the first query request comprises at least one of the following:
computing power requirement information of a computing power task ([0006] receipt of a workload requiring optimization and based upon both the workload requirements; [0049] Workload requirements include, but are not limited to speed in the form of I/O per second and bandwidth.); and
computing power requirement information of a service ([0036] Examples of workloads and functions which may be provided from this layer includes, but is not limited to: mapping and navigation; software development and lifecycle management; virtual classroom education delivery; data analytics processing; operation processing; and management and performance associated with hybrid workloads within the cloud computing environment.; [0049] Workload requirements include, but are not limited to speed in the form of I/O per second and bandwidth).
Regarding claim 2, Hildebrand teaches an information processing method performed by a second communication device, comprising:
obtaining first information, wherein the first information comprises at least one of the following:
a first query request ([0049] The resource manager (870) functions in response to receipt of a workload by the management server (820) from the client machine (810); wherein the management server (820) is the second communication device);
computing power status information of a server ([0049] Workload requirements include, but are not limited to speed in the form of I/O per second and bandwidth.); and
performing a first operation according to the first information ([0049] The resource manager (870) functions in response to receipt of a workload by the management server (820) from the client machine (810).);
wherein the first operation comprises at least one of the following:
querying a first server according to the first information ([0049] The resource manager (870) functions in response to receipt of a workload by the management server (820) from the client machine (810). More specifically, the resource manager (870) tracks resource utilization across storage servers (830), (840), and (850) to support both serial and parallel workloads.);
determining a second query request (Fig. 5, Step 504, Send request to each server for load information; wherein the request for load information is the second query request and the received workload is the first query request);
querying the first server according to the second query request (Fig. 5, Step 504 Send request to each server for load information; [0042] the load information is solicited from each individual storage server, and each storage server response to the request with individual load information (506). In one embodiment, the load information may include, but is not limited characteristics associated with the CPU, network, storage network, number of mounted client, etc.; [0049] The resource manager (870) functions in response to receipt of a workload by the management server (820) from the client machine (810).); and
sending the second query request to complete at least one of the following:
determining a computing power resource allocation request ([0042] In one embodiment, the load information may include, but is not limited characteristics associated with the CPU, network, storage network, number of mounted client, etc.); and
sending the computing power resource allocation request ([0043] As shown, the server receives a layout request from a client workstation (602). In response to the request, load information is ascertained from the stored load results for each server (604). The stored results together with at least one of the optimization algorithms described in either the first or second aspect are employed to calculate a set of data servers to which I/O request can be proportioned in parallel (606).);
wherein the first query request comprises at least one of the following:
computing power requirement information of a computing power task ([0006] receipt of a workload requiring optimization and based upon both the workload requirements; [0049] Workload requirements include, but are not limited to speed in the form of I/O per second and bandwidth.); and
computing power requirement information of a service ([0036] Examples of workloads and functions which may be provided from this layer includes, but is not limited to: mapping and navigation; software development and lifecycle management; virtual classroom education delivery; data analytics processing; operation processing; and management and performance associated with hybrid workloads within the cloud computing environment.; [0049] Workload requirements include, but are not limited to speed in the form of I/O per second and bandwidth).
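For illustration only, the Fig. 5 flow mapped above (Step 504, the management server soliciting load information from each storage server; Step 506, each server responding per [0042]) can be sketched as follows. This is not code from Hildebrand; all class and attribute names are hypothetical, chosen to mirror the claim mapping (management server as the second communication device).

```python
from dataclasses import dataclass

@dataclass
class LoadInfo:
    # Per-server load characteristics per [0042]: CPU, network,
    # storage network, and number of mounted clients.
    cpu: float
    network: float
    storage_network: float
    mounted_clients: int

class StorageServer:
    def __init__(self, name, load):
        self.name = name
        self._load = load

    def report_load(self):
        # Step 506: the server responds with its individual load information.
        return self._load

class ManagementServer:
    def __init__(self, servers):
        self.servers = servers

    def handle_workload(self, workload):
        # Step 504: on receipt of a workload (the first query request),
        # send a request (the second query request) to each server
        # for its load information, and collect the responses.
        return {s.name: s.report_load() for s in self.servers}
```

Under this sketch, the workload received from the client corresponds to the first query request, and each per-server load solicitation corresponds to the second query request.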
Regarding claim 3, Hildebrand teaches wherein the second query request comprises at least one of the following:
computing power requirement information of a computing power task ([0049] Workload requirements include, but are not limited to speed in the form of I/O per second and bandwidth) and/or computing power requirement information of a service;
location information of a first communication device;
network-selected user plane information; and
data network access identifier DNAI information.
Regarding claim 4, Hildebrand teaches wherein the computing power status information comprises at least one of the following:
a computing power remaining status or a computing power available status ([0038] The variable Ar.sub.i is a metric employed to represent available resources, such as bandwidth, for storage server i. In addition, a metric C(Ar.sub.i, r) is employed to represent all possible sets of storage servers, n, and their available resources. The variable r represents the quantity of storage servers. [0042] More specifically, the load information is solicited from each individual storage server, and each storage server response to the request with individual load information (506). In one embodiment, the load information may include, but is not limited characteristics associated with the CPU, network, storage network, number of mounted client, etc.; [0049] In one embodiment, the resource manager (870) validates resource availability in a continuous manner and allocates and re-allocates resources based on the validation. Resources include, but are not limited to, available network bandwidth, available storage bandwidth, quantity of current connection, and processing unit resources. In one embodiment, the resources may be expanded to include additional elements.);
total computing power;
a computing power use status ([0041]);
a predicted future computing power use status ([0041] If workloads are already deployed in the cluster of storage server nodes, the algorithm accounts for the predicted load of the existing workload. More specifically, the current workload is represented by the array U, which represents the predicted load on storage server i based on one or more existing workloads); and
a computing power use status in a past first period of time.
Regarding claim 5, Hildebrand teaches wherein the first operation further comprises: obtaining the first server, and sending index information of the first server ([0043] As shown, the server receives a layout request from a client workstation (602). In response to the request, load information is ascertained from the stored load results for each server (604). The stored results together with at least one of the optimization algorithms described in either the first or second aspect are employed to calculate a set of data servers to which I/O request can be proportioned in parallel (606). More specifically, the combination of data at step (606) facilitates determining how to apportion the I/O request, e.g. layout, so as to proportionally distribute the associated load. The layout is then returned to the requesting client workstation (608). Accordingly, the layout generated herein pertains to distributing parallel workloads across one or more data servers in a proportional manner.; [0044] As indicated above, the workload to be serviced may include a hybrid workload entailing both a parallel workload aspect and a serial workload aspect.).
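The layout computation cited above ([0043], apportioning the I/O request so as to proportionally distribute the load, using the available-resource metric Ar.sub.i of [0038]) can be sketched, purely for illustration, as a proportional split. The function name and parameters below are hypothetical and not drawn from the reference.

```python
def proportional_layout(io_size, available):
    """Apportion io_size across storage servers in proportion to their
    available resources (Ar_i of [0038], e.g., available bandwidth).

    available: dict mapping server id -> available-resource metric.
    Returns a dict mapping server id -> that server's share of the I/O,
    i.e., the "layout" returned to the requesting client per [0043].
    """
    total = sum(available.values())
    if total == 0:
        raise ValueError("no available resources to apportion across")
    return {server: io_size * ar / total for server, ar in available.items()}
```

For example, with available resources {s1: 2, s2: 1, s3: 1}, a 100-unit I/O request is split 50/25/25, distributing the parallel workload proportionally across the data servers.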
Regarding claim 6, Hildebrand teaches wherein the first server satisfies at least one of the following:
the first server satisfies the first query request or the second query request ([0005]; [0037]; [0038]; [0043] Accordingly, the layout generated herein pertains to distributing parallel workloads across one or more data servers in a proportional manner.);
the first server satisfies the computing power resource allocation request;
a physical distance between the first server and the first communication device is the shortest;
a routing distance or a delay between the first server and the first communication device is the shortest; and/or
a candidate server satisfies the first query request or the second query request ([0005]; [0037]; [0038]; [0043]).
Regarding claim 7, Hildebrand teaches wherein the computing power resource allocation request comprises at least one of the following:
index information of the candidate server;
a computing power resource allocation request identifier ID;
a computing power resource status occupied in the request;
computing power task completion time and/or service completion time;
computing power task start time and/or service start time; and
computing power task description information and/or service description information ([0049]).
Regarding claim 8, it is a system type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale above. Further, the additional limitations “A first communication device, comprising: a memory, configured to store a program or an instruction; and a processor, wherein the program or the instruction, when executed by the processor, causes the first communication device to perform the information processing method according to claim 1.” are taught by Hildebrand in [0059]: “Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.”
Regarding claim 9, it is a system type claim having similar limitations as claim 2 above. Therefore, it is rejected under the same rationale above. Further, the additional limitations “a memory, configured to store a program or an instruction, and a processor, wherein the program or the instruction, when executed by the processor, causes the second communication device to” are taught by Hildebrand in [0059].
Regarding claim 10, it is a system type claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale above.
Regarding claim 11, it is a system type claim having similar limitations as claim 4 above. Therefore, it is rejected under the same rationale above.
Regarding claim 12, it is a system type claim having similar limitations as claim 5 above. Therefore, it is rejected under the same rationale above.
Regarding claim 13, it is a system type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale above.
Regarding claim 14, it is a system type claim having similar limitations as claim 7 above. Therefore, it is rejected under the same rationale above.
Regarding claim 15, it is a media/product type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale above.
Regarding claim 16, it is a media/product type claim having similar limitations as claim 2 above. Therefore, it is rejected under the same rationale above.
Regarding claim 17, it is a media/product type claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale above.
Regarding claim 18, it is a media/product type claim having similar limitations as claim 4 above. Therefore, it is rejected under the same rationale above.
Regarding claim 19, it is a media/product type claim having similar limitations as claim 5 above. Therefore, it is rejected under the same rationale above.
Regarding claim 20, it is a media/product type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CHU JOY-DAVILA whose telephone number is (571)270-0692. The examiner can normally be reached Monday-Friday, 6:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J Li can be reached at (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JORGE A CHU JOY-DAVILA/Primary Examiner, Art Unit 2195