Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are currently pending for examination.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7, 9, 15, 16, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ren (US 10554738 B1) in view of Amer (US 20210281912 A1).
As per claim 1, Ren discloses:
An apparatus comprising: at least one processing device comprising a processor coupled to a memory; the at least one processing device being configured: to maintain an execution queue data structure, the execution queue data structure comprising a plurality of tasks to be executed, each of the plurality of tasks comprising execution of server-side code for one or more application services hosted by one or more servers in an information technology infrastructure environment (“Each of the devices described herein can include one or more processors as described above; Some embodiments described herein relate to devices with a non-transitory computer-readable medium (also can be referred to as a non-transitory processor-readable medium or memory) having instructions or computer code thereon for performing various computer-implemented operations.”, col. 19, lines 52-59; "Accordingly, depending on a type of data transformation task in queue for execution at mainframe 101 and the mainframe predicted workload value for a future time window, mainframe 101 can determine whether to locally execute a data transformation task or send or offload the data transformation task to another compute device for its execution.", col. 18, lines 10-18; "Mainframe 101 can receive performance/readiness data 105 sampled during execution of a data transformation task, data indicating server resources capacity, readiness data, and/or identifier of a data transformation task that being executed by server 103A, 103B, and/or 103C, and identifier of a data transformation task in queue to be executed by server 103A, 103B, and/or 103C", col. 3, lines 37-45; Examiner Note: the queue for execution equates to an execution queue, and the execution of data transformation tasks by the server equates to the execution of server-side code for one or more application services. The mainframe and server environment equates to an information technology infrastructure environment.)
Ren discloses the above limitations of claim 1, but does not specifically disclose offloading tasks to a client device.
However, Amer discloses:
determine at least one of hardware and software requirements for the plurality of tasks in the execution queue data structure; ("For example, application of a film grain noise operation to a rendered video frame typically increases the high-frequency content of the resulting modified video frame, and thus requires additional encoding compute effort and an increase in the resulting amount of encoded data needed to represent the modified video frame in the encoded stream 110. Both the increased encoding compute effort and increased encoding data output can lead to increased power consumption by the server 102", 0014; Examiner Note: awareness of different levels of hardware processing demand associated with different types of tasks equates to determining hardware requirements for a plurality of tasks)
to determine at least one of hardware and software resources available on a set of client devices in the information technology infrastructure environment; and…("In instances in which the client device 104 advertised its capabilities without actively negotiating an offload strategy, the offload control module 218 utilizes the capabilities indicated in the status advertisement 114 from the client device 104, and in some embodiments, one or both of the current network status information 228 and current resource status information 230 to determine the particular one or more modifications to one or more graphics effects operations to be implemented. Generally, when the client device 104 has a greater amount of graphics processing resources available, the server 102 is able to offload a greater degree of graphics effects processing to the client device 104, and vice versa.”, 0029 ; Examiner Note: available processing resources equate to available hardware resources.)
to offload execution of at least a subset of the plurality of tasks in the execution queue data structure from the one or more servers in the information technology infrastructure environment to at least one of the set of client devices in the information technology infrastructure environment based at least in part on mapping the determined available hardware and software resources of the set of client devices in the information technology infrastructure environment with the determined hardware and software requirements for the plurality of tasks in the execution queue data structure. ("Thus, the status advertisement 114 can include an indication of whether the client device 104 has a GPU or other graphics-capable processor, the type/model of GPU and/or its current operating parameters (e.g., clock speed, cache size, etc.), an indication of whether graphics memory is available and the amount available, an indication of the types of graphics effects the GPU and its software can support and operational characteristics related thereto, and the like. In this approach, the client device 104 makes its capabilities known to the server 102 and allows the server 102 to decide the offloading strategy...For example, after considering the capabilities of the client device 104 and its own current operational status, the server 102 proposes to offload all image distortion operations (e.g., chromatic aberration effect operations) to the client device 104. The client device 104 evaluates this proposal and then may decline this offloading proposal in view of the resource-intensive processing required by the image distortion operations and user settings indicating that image distortion is disfavored. The server 102 then responds by proposing a different offloading approach, such as by offloading at least part of the burden of a different graphics effects operation. Alternatively, the client device 104 instead proposes that some initial image distortion processing be performed by the server 102, and final image distortion processing then be offloaded to the client device 104. The server 102 then in turn accepts this counterproposal or determines a different counterproposal of its own, and continue this negotiation process until a mutually-agreeable offloading strategy is identified.", 0027)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ren with those of Amer in order to reduce processing effort at the server and reduce the potential for excessive bandwidth consumption or latency in transmitting the results of the tasks (Amer, [0017]).
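Examiner Note: for clarity of the mapping between the cited references and the claim language, the claimed steps of determining task requirements, determining available client resources, and offloading tasks whose requirements a client's resources satisfy may be illustrated by the following hypothetical sketch. All names, fields, and data structures are illustrative only and are not drawn from Ren or Amer.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Task:
    name: str
    cpu_cores: int          # determined hardware requirement
    mem_gb: int             # determined hardware requirement
    software: set = field(default_factory=set)  # determined software requirement

@dataclass
class ClientDevice:
    name: str
    cpu_cores: int          # available hardware resource
    mem_gb: int             # available hardware resource
    software: set = field(default_factory=set)  # available software resource

    def can_run(self, task: Task) -> bool:
        # Map the device's available resources against the task's requirements.
        return (self.cpu_cores >= task.cpu_cores
                and self.mem_gb >= task.mem_gb
                and task.software <= self.software)

def offload(queue: deque, clients: list) -> dict:
    """Assign each queued task to the first client device whose available
    hardware/software resources satisfy the task's requirements; tasks with
    no suitable client remain queued for server-side execution."""
    assignments = {}
    remaining = deque()
    while queue:
        task = queue.popleft()
        client = next((c for c in clients if c.can_run(task)), None)
        if client is not None:
            assignments[task.name] = client.name
        else:
            remaining.append(task)
    queue.extend(remaining)
    return assignments
```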
As per claim 7, Ren in view of Amer fully discloses the limitations of claim 1.
Furthermore, Ren discloses:
a given one of the plurality of tasks is added to the execution queue data structure based at least in part on a request received by at least one of the one or more servers, and wherein execution of the given task is offloaded to at least one of the set of client devices in the information technology infrastructure environment. ("Server 103 can receive, via network communication interface 405 and API 308B requests to execute data transformation tasks, and/or other suitable data sent by, for example, mainframes 101A, 101B, and 101C shown in FIG. 1", col. 8, lines 13-19)
As per claim 9, Ren in view of Amer fully discloses the limitations of claim 1.
Furthermore, Ren discloses:
offloading execution of a given one of the plurality of tasks in the execution queue data structure from the one or more servers in the information technology infrastructure environment to at least a given one of the set of client devices in the information technology infrastructure environment is further based at least in part on results of execution of one or more historical tasks by the given client device in the information technology infrastructure environment. (“A first example of an implementation of a load balance model, such as load balance model 305 is discussed with reference to FIG. 5. Load balance model 305 can be implemented as a rule-based system including knowledge base 501, working memory 503, and inference engine 505. As discussed above, in some implementations, load balance model 305 can output decision 507 indicating whether a data transformation task should be executed locally at mainframe 101 (shown in FIG. 1), or sent to be executed by another compute device”, col. 12, lines 14-23; "For another example, rule antecedents can include: (1) amount of memory unused by the compute devices in a mainframe environment, (2) processor characteristics of compute devices in a mainframe environment including clock speed, host-bus speed, processor cache memory size, and/or other processor-specific characteristics, and (3) historical workload patterns of compute devices included in a mainframe environment and other suitable characteristics or patterns of such compute devices.", col. 12, lines 52-60; "Knowledge base 501 includes a set of rules implemented from information regarding, for example, computation power of mainframe 101, servers 103A, 103B, 103C, system integrated information processors or other suitable compute device coupled to mainframe 101. Rules in knowledge base 501 can follow an IF (ANTECEDENT) THEN (CONSEQUENT) pattern. A rule can have multiple antecedents and multiple consequents.", col. 12, lines 27-33)
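Examiner Note: the quoted IF (ANTECEDENT) THEN (CONSEQUENT) rule pattern may be illustrated by the following hypothetical sketch, in which the antecedents (unused memory, processor clock speed, and historical execution results) are evaluated to produce an offload decision. The thresholds and names are illustrative only and are not drawn from Ren.

```python
# A hypothetical rule in IF (ANTECEDENT) THEN (CONSEQUENT) form: the
# antecedents test unused memory, processor clock speed, and historical
# execution results for a candidate compute device; the consequent is
# the offload decision.  Thresholds are illustrative only.
def offload_decision(free_mem_gb: float, clock_ghz: float,
                     historical_success_rate: float) -> str:
    if free_mem_gb >= 4 and clock_ghz >= 2.0 and historical_success_rate >= 0.9:
        return "offload"          # consequent: send the task to the device
    return "execute locally"      # consequent: keep the task on the server
```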
As per claim 15, it is a computer program product claim (Ren, col. 19, lines 52-59) with substantially the same limitations as claim 1. Accordingly, it is rejected for substantially the same reasons.
As per claim 16, it is a computer program product claim with substantially the same limitations as claim 9. Accordingly, it is rejected for substantially the same reasons.
As per claim 18, it is a method claim with substantially the same limitations as claim 1. Accordingly, it is rejected for substantially the same reasons.
As per claim 19, it is a method claim with substantially the same limitations as claim 9. Accordingly, it is rejected for substantially the same reasons.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Ren (US 10554738 B1) in view of Amer (US 20210281912 A1) in further view of Krishnamoorthy (US 20230049596 A1) in further view of Sampathkumar (US 20250113252 A1).
As per claim 2, Ren in view of Amer fully discloses the limitations of claim 1, but does not disclose determining the software or hardware requirements of the task using an NLP algorithm.
However, Krishnamoorthy discloses:
determining the hardware and software requirements for a given one of the plurality of tasks in the execution queue data structure comprises performing natural language processing (NLP) of the server-side code of the given task utilizing one or more deep learning algorithms. ("The code requirements 146 may include required features, properties, and/or abilities of the entries of candidate code 148 that will be presented for user selection. For example, the code requirements 146 may include requirement that candidate code 148 use the programming language indicated in language feature 130, are the same type as indicated by the type feature 132, and/or perform at least the same computing tasks as those indicated by the task feature 134. In cases where a request 140 is provided, metadata mapping 142 may use natural language processing (NLP) 144 (e.g., a natural language processing algorithm) to determine the code requirements 146", 0029)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ren in view of Amer with those of Krishnamoorthy in order to provide the system with the capability to decipher natural language requests on a platform that motivates developers to provide and maintain high quality code for reuse by others (Krishnamoorthy, [0004]).
Ren in view of Amer in further view of Krishnamoorthy discloses the above limitations of claim 2, but does not disclose utilizing one or more deep learning algorithms. However, Sampathkumar discloses: "In some embodiments, engine 200 may leverage a large language model (LLM), whether known or to be known. An LLM is a type of AI system designed to understand and generate human-like text based on the input it receives. The LLM can implement technology that involves deep learning, training data and natural language processing (NLP). Large language models are built using deep learning techniques, specifically using a type of neural network called a transformer.", 0062
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ren in view of Amer in further view of Krishnamoorthy with those of Sampathkumar in order to leverage the advantages of LLMs in a system which improves network availability and usage via connected devices (Sampathkumar, [0032]).
As per claim 3, Ren in view of Amer in further view of Krishnamoorthy in further view of Sampathkumar fully discloses the limitations of claim 2.
Furthermore, Sampathkumar discloses:
the one or more deep learning algorithms comprise one or more large language models (LLMs) ("In some embodiments, engine 200 may leverage a large language model (LLM), whether known or to be known. An LLM is a type of AI system designed to understand and generate human-like text based on the input it receives. The LLM can implement technology that involves deep learning, training data and natural language processing (NLP). Large language models are built using deep learning techniques, specifically using a type of neural network called a transformer.", 0062)
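Examiner Note: the claimed NLP-based determination of task requirements may be illustrated by the following deliberately simplified sketch, which uses pattern matching as a stand-in for the deep-learning model or LLM taught by Krishnamoorthy and Sampathkumar. The requirement labels and patterns are hypothetical and illustrative only.

```python
import re

# Regular-expression patterns standing in for the claimed NLP analysis of
# server-side code; each key is a hypothetical requirement label inferred
# from markers in the code text.
REQUIREMENT_PATTERNS = {
    "gpu": re.compile(r"\b(cuda|gpu|tensor)\w*", re.IGNORECASE),
    "numpy": re.compile(r"\bimport\s+numpy\b"),
    "threads": re.compile(r"\bthreading\b|\bThreadPool\w*"),
}

def infer_requirements(source_code: str) -> set:
    """Return the set of requirement labels inferred from a task's code."""
    return {name for name, pattern in REQUIREMENT_PATTERNS.items()
            if pattern.search(source_code)}
```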
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Ren (US 10554738 B1) in view of Amer (US 20210281912 A1) in further view of Diao (US 9177269 B2).
As per claim 4, Ren in view of Amer fully discloses the limitations of claim 1, but does not disclose deriving time complexity of the server-side code of a given task.
However, Diao discloses:
determining the hardware and software requirements for a given one of the tasks comprises deriving one or more metrics for the server-side code of the given task, the one or more metrics comprising at least one of a time complexity of the server-side code of the given task, a space complexity of the server-side code of the given task, and a cyclomatic complexity of the server-side code of the given task. ("In one aspect of the invention, an exemplary method for reducing complexity of at least one user task includes steps of calculating a complexity metric for the at least one user task", col. 1, lines 26-28; Examiner Note: a complexity metric equates to a time complexity of the server-side code of the given task)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ren in view of Amer with those of Diao in order to provide the ability to determine task complexity within a system that improves speed of execution and transparency (Diao, col. 28, lines 53-59).
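Examiner Note: a cyclomatic-complexity metric of the kind recited in claim 4 may be sketched as follows, approximating McCabe's definition (decision points + 1) by counting branch-point nodes in the code's syntax tree. This simplification counts each boolean operator node once and ignores comprehensions; it is illustrative only.

```python
import ast

# Node types treated as decision points; comprehensions and some other
# branching constructs are deliberately ignored in this simplification.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Cyclomatic complexity approximated as decision points + 1."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
```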
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Ren (US 10554738 B1) in view of Amer (US 20210281912 A1) in further view of Sukonik (US 20150254191 A1).
As per claim 5, Ren in view of Amer fully discloses the limitations of claim 1, but does not disclose a request received from a first one of the set of client devices in the information technology infrastructure environment.
However, Sukonik discloses:
a given one of the plurality of tasks is added to the execution queue data structure based at least in part on a request received from a first one of the set of client devices in the information technology infrastructure environment, and wherein execution of the given task is offloaded to a second one of the set of client devices in the information technology infrastructure environment. ("According to the present embodiment there is provided a server for serving requests received as events from a client via a network, each event including a respective task, each task requiring access to disk storage, the server including: (a) at least one processor for processing each task in a run-to-completion manner; and (b) a plurality of hardware engines to which each at least one processor offloads at least a portion of the processing of at least one respective the task.", 0085)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ren in view of Amer with those of Sukonik in order to provide the system with the ability to accept requests from the client devices and thus a method of improving transaction throughput (Sukonik, [abstract]).
The combination of Ren, Amer, and Sukonik would provide a system capable of accepting a request from a first client device in the same environment as the second client device to which a subset of tasks is offloaded. See Amer, [0027].
As per claim 6, Ren in view of Amer fully discloses the limitations of claim 1, but does not disclose a request received from a first one of the set of client devices external to the information technology infrastructure environment.
However, Sukonik discloses:
a given one of the plurality of tasks is added to the execution queue data structure based at least in part on a request received from a client device external to the information technology infrastructure environment, and wherein execution of the given task is offloaded to a given one of the set of client devices internal to the information technology infrastructure environment. ("In another optional embodiment, the transactions come from sources internal or external to the server. In another optional embodiment, the transactions are events.", 0073; "According to the present embodiment there is provided a server for serving requests received as events", 0085)
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Ren (US 10554738 B1) in view of Amer (US 20210281912 A1) in further view of Thornton (US 20020078130 A1).
As per claim 8, Ren in view of Amer fully discloses the limitations of claim 7, but does not disclose a task being a batch processing job.
However, Thornton discloses:
the given task comprises a batch processing job ("An extracting part 120 extracts the individual tasks which comprise the batch job and queues these tasks in a queue 122 using any execution parameters required to process the batch job. ", 0031)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ren in view of Amer with those of Thornton in order to provide the system with the ability to handle batch processing jobs in a manner which advantageously saves time (Thornton, [0041]).
Claims 10, 12, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ren (US 10554738 B1) in view of Amer (US 20210281912 A1) in further view of Barsness (US 20100241884 A1).
As per claim 10, Ren in view of Amer fully discloses the limitations of claim 9, but does not disclose the historical tasks comprising previous instances of execution of the same server-side code as the given task.
However, Barsness discloses:
the one or more historical tasks comprise previous instances of execution of the same server-side code as the given task. ("the throttling module 58 on the node 32 may determine the node completion time with reference to historical data about previous tasks similar to the current task, historical data about the time required to access a resource or resources to complete the task, the amount of data in the task, and/or whether the node 32 is currently configured with another task", 0057)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ren in view of Amer with those of Barsness in order to provide the system with insight into historical instances of a task and an allocation method which saves time (Barsness, [0057]).
As per claim 12, Ren in view of Amer fully discloses the limitations of claim 9, but does not disclose the results of execution of the one or more historical tasks by the given client device in the information technology infrastructure environment characterizing the effectiveness of the given client device in executing the one or more historical tasks.
However, Barsness discloses:
the results of execution of the one or more historical tasks by the given client device in the information technology infrastructure environment characterize effectiveness of the given client device in executing the one or more historical tasks, the effectiveness being determined based at least in part on at least one of speed of execution of the one or more historical tasks and whether any errors were encountered during execution of the one or more historical tasks. ("the throttling module 58 on the node 32 may determine the node completion time with reference to historical data about previous tasks similar to the current task, historical data about the time required to access a resource or resources to complete the task, the amount of data in the task, and/or whether the node 32 is currently configured with another task", 0057)
As per claim 17, it is a computer program product claim (Ren, col. 19, lines 52-59) with substantially the same limitations as claim 12. Accordingly, it is rejected for substantially the same reasons.
As per claim 20, it is a method claim with substantially the same limitations as claim 12. Accordingly, it is rejected for substantially the same reasons.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Ren (US 10554738 B1) in view of Amer (US 20210281912 A1) in further view of Randhi (US 20240345890 A1).
As per claim 11, Ren in view of Amer fully discloses the limitations of claim 9, but does not disclose the one or more historical tasks comprising one or more test tasks having code types exhibiting at least a threshold level of similarity to the server-side code of the given task.
However, Randhi discloses:
the one or more historical tasks comprise one or more test tasks having code types exhibiting at least a threshold level of similarity to the server-side code of the given task ("In one or more examples, the container configuration generation model may be configured to identify, after failing to identify an exact match between the current workload information and a portion of the historical workload information, that a similarity score between the current workload information and a portion of the historical workload information exceeds a predetermined similarity threshold, and where producing the container configuration output may include, based on identifying that the similarity score between the current workload information and the portion of the historical workload information exceeds the predetermined similarity threshold, selecting a historical container configuration corresponding to the portion of the historical workload information as the container configuration output.", 0004 ; "In some instances, to do so, the quantum workload management platform 103 may generate a similarity score between the file types, file sizes, load conditions, bandwidth conditions, and/or other information, both for current processing conditions/requests (e.g., current workload information) and historical conditions/requests (e.g., historical workload information).", 0035)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ren in view of Amer with those of Randhi in order to provide the system with the ability to reference historical tasks within a system which improves application performance (Randhi, [0026]).
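Examiner Note: the similarity-threshold selection taught by Randhi may be illustrated by the following hypothetical sketch, which uses a simple Jaccard index over workload feature sets in place of Randhi's similarity score. All names and the threshold value are illustrative only.

```python
def select_historical_config(current: set, history: list, threshold: float = 0.8):
    """Return the configuration whose historical workload features are most
    similar to the current workload, if that similarity meets the threshold;
    otherwise return None.  Similarity is a Jaccard index over feature sets."""
    best, best_score = None, 0.0
    for features, config in history:
        union = current | features
        score = len(current & features) / len(union) if union else 0.0
        if score > best_score:
            best, best_score = config, score
    return best if best_score >= threshold else None
```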
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Ren (US 10554738 B1) in view of Amer (US 20210281912 A1) in further view of Steinberger (US 20230123634 A1).
As per claim 13, Ren in view of Amer fully discloses the limitations of claim 1, but does not disclose generating two or more specialized execution queue data structures.
However, Steinberger discloses:
maintaining the execution queue data structure comprises: generating two or more specialized execution queue data structures each associated with at least one designated type of hardware or software resources; and placing different subsets of the plurality of tasks into each of the two or more specialized execution queue data structures based at least in part on the determined hardware and software requirements for the plurality of tasks. ("Storing all tasks with different resource requirements in a single task queue would lead to a complicated scheduling process, as a “look at the next task description and only dequeue if still fits the available resources” would interfere with parallel dequeue on all multi-processors, i.e., all schedulers may look at the same task and only one may be able to dequeue it. Thus, the proposed distributed scheduler may use multiple queues to order tasks according to their resource requirements in a hierarchical fashion.", 0105)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ren in view of Amer with those of Steinberger in order to provide the system with multiple queues and a method for reducing latency (Steinberger, [0014]).
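Examiner Note: the multiple resource-ordered queues taught by Steinberger may be illustrated by the following hypothetical sketch, in which tasks are routed into per-resource-type queues based on their determined requirements, so that each scheduler dequeues only tasks it can satisfy. Names and resource types are illustrative only.

```python
from collections import defaultdict, deque

def build_specialized_queues(tasks):
    """Route (task_name, resource_type) pairs into per-resource-type queues,
    e.g. separate queues for 'gpu', 'high-memory', and 'cpu' tasks."""
    queues = defaultdict(deque)
    for name, resource_type in tasks:
        queues[resource_type].append(name)
    return queues
```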
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Ren (US 10554738 B1) in view of Amer (US 20210281912 A1) in further view of Noll (US 20210216564 A1).
As per claim 14, Ren in view of Amer fully discloses the limitations of claim 1, but does not disclose client-side execution agents.
However, Noll discloses:
the set of client devices in the information technology infrastructure environment comprises a subset of a plurality of client devices in the information technology infrastructure environment which have client-side execution agents installed therein for facilitating the offload of the execution of said at least a subset of the plurality of tasks in the execution queue data structure from the one or more servers in the information technology infrastructure environment. (" In some implementations, a collaborative decision is reached, by the client-side agent 118 and the load processing location selector 116 regarding how much processing to offload to the client device 104.", 0035)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Ren in view of Amer with those of Noll in order to provide a client-side execution agent which allows the system to take advantage of client-side systems which may share a processing cost with the server (Noll, [0017]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Shuman (US 20140120889 A1) - discloses a system which receives, by the wireless user device, registration information for a plurality of client devices, receives, by the wireless user device, a call request for a call among two or more of the plurality of client devices, sets up, by the wireless user device, the call among the two or more client devices, receives, by the wireless user device, a media stream, and transmits, by the wireless user device, the media stream to at least one of the two or more client devices
Connelly (US 20130326220 A1) – discloses a system wherein a private message including encrypted data and identifying information is stored at a server in an agnostic manner without performing encryption or decryption.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROSS MICHAEL VINCENT whose telephone number is (703)756-1408. The examiner can normally be reached Mon-Fri 8:30AM-5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.M.V./
Examiner, Art Unit 2196
/APRIL Y BLAIR/Supervisory Patent Examiner, Art Unit 2196