DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. Claims 1–20 are presented for examination in a non-provisional application filed on 07/06/2023.
Drawings
3. The drawings were received on 07/06/2023 with the application as filed. These drawings are acceptable.
Double Patenting
4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
5. Claims 1–20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1–20 of Copending Application No. 18/219,013, and further in view of Singh et al., US 9,256,467 B1 (“Singh”).
Singh is cited and applied below, to reject claims 1–20 under § 103, for teaching or suggesting the limitation “perform a first application programming interface (API) … to terminate performance of one or more software workloads identified by the first API.”
It would have been obvious to a person of ordinary skill in the art to combine the claims of the reference application with the teachings of Singh to provide for the launching and termination of workloads.
6. Although the claims at issue are not identical, they are not patentably distinct (nonobvious) from each other, because at least some of the subject matter claimed in the instant application is already fully disclosed in the copending application.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
For purposes of illustration, a table has been constructed below to compare the two independent system claims and exemplary dependent claims.
Instant Application No. 18/219,017
Copending Application No. 18/219,013
1. A processor, comprising: one or more circuits to
perform a first application programming interface (API) to select a second API to terminate performance of one or more software workloads identified by the first API.
(see Singh, below, teaching “perform a first application programming interface (API) … to terminate performance of one or more software workloads identified by the first API”)
1. A processor, comprising: one or more circuits to:
perform a first application programming interface (API), responsive to an API call including one or more arguments identifying one or more software workloads to be monitored, to select a second API to monitor performance of the one or more software workloads identified by the API call; and
obtain, via the selected second API, status information for the identified one or more software workloads running on multiple compute nodes connected via one or more networks.
2. The processor of claim 1, wherein the first API is to receive one or more input values indicating one or more job identifiers of the one or more software workloads.
2. The processor of claim 1, wherein the first API is to receive one or more input values indicating one or more job identifiers of the one or more software workloads.
…
…
7. The processor of claim 1, wherein the second API is to provide one or more output values indicating one or more statuses of the one or more software workloads based, at least in part, on performing the second API to terminate performance.
7. The processor of claim 1, wherein the second API is to provide one or more output values indicating one or more workload statuses of the one or more software workloads.
7. Claims 1–2, 8–9, and 15–16 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1–2, 9–10, and 15–16 of Copending Application No. 18/219,011, and further in view of Singh et al., US 9,256,467 B1 (“Singh”).
Singh is cited and applied below, to reject claims 1–20 under § 103, for teaching or suggesting the limitation “perform a first application programming interface (API) … to terminate performance of one or more software workloads identified by the first API.”
It would have been obvious to a person of ordinary skill in the art to combine the claims of the reference application with the teachings of Singh to provide for the launching and termination of workloads.
8. Although the claims at issue are not identical, they are not patentably distinct (nonobvious) from each other, because at least some of the subject matter claimed in the instant application is already fully disclosed in the copending application.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
For purposes of illustration, a table has been constructed below to compare the two independent system claims.
Instant Application No. 18/219,017
Copending Application No. 18/219,011
1. A processor, comprising: one or more circuits to
perform a first application programming interface (API) to select a second API to terminate performance of one or more software workloads identified by the first API.
(see Singh, below, teaching “perform a first application programming interface (API) … to terminate performance of one or more software workloads identified by the first API”)
1. A processor, comprising: one or more circuits to:
cause a first application programming interface (API) to select a second API to perform one or more software workloads identified by the first API.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
9. Claims 5–6 and 12–13 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
a. Specifically, the following term(s) and/or phrase(s) in the claim language is/are indefinite.
i. As to claims 5–6 and 12–13, the term “high-performance computing system…” is a relative term which renders the claim indefinite.
The term “high-performance” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
To proceed in examination, the term “high-performance computing system” is interpreted to be a “computing system” capable of performing (executing) the one or more software workloads.
b. Appropriate correction is therefore required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
10. Claims 1–3, 5–6, 8–10, 12–13, 15–16, and 18–19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
11. As to independent claim 1, the claim recites:
“to SELECT a second API to terminate performance of one or more software workloads identified by the first API.”
As to independent claims 8 and 15, they recite similar language of commensurate scope as claim 1.
These limitations, as currently drafted and within their respective claims, represent processes that, under a broadest reasonable interpretation, cover performance in the mind (including observation, evaluation, judgment, opinion, etc.) but for the recitation of generic computer components.
That is, other than reciting the use of
“one or more circuits to perform a first application programming interface (API)” (claim 1);
“one or more processors and memory to store executable instructions … to perform a first application programming interface (API)” (claim 8); and
“a first application programming interface (API)” (claim 15),
to perform these steps, nothing in the claim element precludes the step from practically being performed in the mind or using pencil and paper (see MPEP 2106.04(a)(2) – Examples of Concepts The Courts Have Identified As Abstract Ideas, discussing abstract ideas or concepts relating to organizing or analyzing information in a way that can be performed mentally or is analogous to human mental work).
For example, but for the use of generic computers,
the performance of these steps in the context of the claims reasonably encompasses the user mentally and/or manually performing the step of:
1) selecting a second API (software library) to perform a certain function.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application (under Prong Two of Step 2A)
(I) Generic Computing Device
For instance, claim 1 recites the additional element of
“one or more circuits to perform a first application programming interface (API)”
claim 8 recites the additional element of
“one or more processors and memory to store executable instructions … to perform a first application programming interface (API)” and
claim 15 recites the additional element of
“a first application programming interface (API)”
that perform these steps.
These computer components, functionalities, and/or services are all recited at a high level of generality (i.e., as a generic computing device performing one or more generic computer functions) such that they amount to no more than mere instructions to apply the exception using generic computer components such as processors, basic processor instructions, and/or software components or programs.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
(II) Particular Technological Environment or Field Of Use
As shown above, the claims also include the elements of:
(1) “one or more software workloads.”
These exemplary elements, however, merely describe the general technical or computing environment (within which the claimed steps or processes operate) and restrict the processed information or data to a particular type or category (without imposing any functional claim limitations, activities, or steps).
Limitations that generally link the use of the judicial exception to a particular technological environment or field of use neither meaningfully limit the claim nor transform the abstract-idea nature of the claim into a particular useful application that improves the functioning of a computer or any other technology.
Under Step 2B of the 101 analysis:
The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components and field of use/technological environment which do not amount to significantly more than the abstract idea.
As claimed, the
“one or more circuits to perform a first application programming interface (API),”
“one or more processors and memory to store executable instructions … to perform a first application programming interface (API)” and
“a first application programming interface (API)”
merely encompass generic computing components (e.g., processors, software components, programs, routines, and interfaces) recited at a high level of generality, executing one or more steps of the claims.
Accordingly, the additional step(s) or element(s) of the claims, viewed individually and as an ordered combination, add nothing to the implementation of a mental process on an unspecified, “generic” computer and therefore fail to transform the abstract-idea nature of the claims into a patent-eligible application.
12. As to dependent claims 2–3, 5–6, 9–10, 12–13, 16, and 18–19, each of these claims either (1) recites additional step(s) that cover performance in the mind; (2) merely restricts or links the process step, information, or data to a particular type, technological environment, or field of use; (3) amounts to insignificant extra-solution activity to the judicial exception, such as data input and output/transmission; or (4) recites a function which amounts to no more than a recitation of the words “apply it” (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer; and thus, as a whole, each claim is also directed and confined to the same process set forth in claims 1, 8, and 15. Therefore, these claims do not individually or collectively add an inventive concept or additional element(s) amounting to significantly more than the abstract idea itself. These claims are therefore not drawn to eligible subject matter as they are directed to an abstract idea without significantly more.
For instance, dependent claim 2 reciting “wherein the first API is to receive one or more input values indicating one or more job identifiers of the one or more software workloads” merely recites additional step(s) amounting to insignificant extra-solution activity to the judicial exception such as data input and further restricts or links the process step, information or data to a particular type, technological environment, or field of use.
Dependent claim 3 reciting “wherein the one or more software workloads are to be identified by the first API based, at least in part, on an output value of a third API to perform the one or more software workloads” merely recites additional step(s) that covers performance in the mind and further restricts or links the process step, information or data to a particular type, technological environment, or field of use.
Dependent claim 5 reciting “wherein the one or more software workloads are performed using a high-performance computing system” merely restricts or links the process step, information or data to a particular type, technological environment, or field of use (i.e. describing the environment in which workloads are being performed).
Dependent claim 6 reciting “wherein the one or more software workloads are performed using one or more nodes of a high-performance computing system” merely restricts or links the process step, information or data to a particular type, technological environment, or field of use (i.e. describing the environment in which workloads are being performed).
As to dependent claims 9–10, 12–13, 16, and 18–19, they are the corresponding system and computer program product claims corresponding to at least one of claims 2–3 and 5–6. Therefore, these claims do not individually or collectively 1) integrate the abstract idea into a practical application, nor do they 2) include additional element(s) amounting to significantly more than the abstract idea itself.
Practical Application Integration
Claims 4, 7, 11, 14, 17, and 20 include element(s) integrating the abstract idea into a practical application.
Examiner’s Remarks
13. Examiner refers to and explicitly cites particular pages, sections, figures, paragraphs or columns and lines in the references as applied to Applicant’s claims to the extent practicable to streamline prosecution.
Although the cited portions of the references are representative of the best teachings in the art and are applied to meet the specific limitations of the claims, other uncited but related teachings of the references may be equally applicable as well. It is respectfully requested that, in preparing responses to the rejections, Applicant fully consider not only the cited portions of the references, but also the references in their entirety, as potentially teaching, suggesting, or rendering obvious all or one or more aspects of the claimed invention.
Abbreviations
14. Where appropriate, the following abbreviations will be used when referencing Applicant’s submissions and specific teachings of the reference(s):
i. figure / figures: Fig. / Figs.
ii. column / columns: Col. / Cols.
iii. page / pages: p. / pp.
References Cited
15. (A) Singh et al., US 9,256,467 B1 (“Singh”).
(B) Kocyan et al., US 2009/0217311 A1 (“Kocyan”).
(C) Harwood et al., US 2020/0142753 A1 (“Harwood”).
Notice re prior art available under both pre-AIA and AIA
16. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
A.
17. Claims 1–4, 7–11, 14–17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Singh in view of (B) Kocyan.
See “References Cited” section, above, for full citations of references.
18. Regarding claim 1, (A) Singh teaches/suggests the invention substantially as claimed, including:
“A processor, comprising:
one or more circuits to”
(Col. 41: code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors;
the Examiner notes that all processors (CPUs) have circuits);
“perform a first application programming interface (API) … to terminate performance of one or more software workloads identified by the first API”
(Fig. 11 and Col. 29, lines 20–37: a stop task request is received that specifies a task to stop, the requestor is authenticated, and the specified task is stopped thereby freeing resources allocated to the task. In 1102, a computing resource service provider receives an application programming interface call to stop a running task. In some cases, the application programming interface call may be received from a customer or other entity external to the container service to the front end service. In other cases, the container agent may make the application programming interface call in response to a communication from a scheduler to stop a task. The Stop Task may receive, as a parameter, one or more task IDs of running tasks).
Singh does not teach “a first application programming interface (API) to select a second API to ….”
(B) Kocyan, in the context of Singh’s teachings, however teaches or suggests:
“a first application programming interface (API) to select a second API to ….”
(¶ 47: The function receiving module receives a first function call from the financial calling application 104 that sends and receives data according to a first API. The first API may be a Q Series API or an L Series API;
¶ 49: The function converting module 204 converts the first function call according to the first API into a second function call according to a second API. The second API may be the O Series API. The second function call is compatible with the second API;
¶ 41: The interface translator 110 receives the invocation of specific functions within a Vertex legacy API like the Q Series API or the L series API;
¶ 42: An adapter pattern, also known as a wrapper or wedge, is a software programming design principal which allows programs with normally incompatible interfaces to work together by wrapping an interface compatible with the calling program around the interface of the program being called;
the Examiner notes: invoking specific functions of a first API necessitates identifying or selecting a corresponding function call of a second API in order to convert the first function call according to the first API into a second function call according to a second API).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (B) Kocyan with those of (A) Singh to receive and convert a first function call to perform a specific task into a corresponding second function call with different APIs. The motivation or advantage to do so is to ensure the interoperability of the distributed and/or virtual computing environments utilizing different computer systems, configurations, protocols, and/or interfaces.
19. Regarding claim 2, Singh teaches or suggests:
“wherein the first API is to receive one or more input values indicating one or more job identifiers of the one or more software workloads”
(Fig. 11 and Col. 29, lines 20–37: a stop task request is received that specifies a task to stop, the requestor is authenticated, and the specified task is stopped thereby freeing resources allocated to the task. In 1102, a computing resource service provider receives an application programming interface call to stop a running task. In some cases, the application programming interface call may be received from a customer or other entity external to the container service to the front end service. In other cases, the container agent may make the application programming interface call in response to a communication from a scheduler to stop a task. The Stop Task may receive, as a parameter, one or more task IDs of running tasks).
20. Regarding claim 3, Singh and Kocyan teach or suggest:
“wherein the one or more software workloads are to be identified by the first API based, at least in part, on an output value of a third API to perform the one or more software workloads”
(Singh, Fig. 10 and Col. 27, lines 15–21: requestor may also specify one or more cluster IDs as parameters to the Start Task application programming interface call to indicate into which clusters the task or tasks should be started. Similarly, the requestor may specify one or more container instance IDs as parameters to the StartTask application programming interface call to indicate into which container instances the task or tasks should be started;
Col. 28, lines 47–50: As noted, launching the container image or specified application into the container instance may include generating one or more task IDs for the tasks, storing the task IDs in a data store;
Kocyan, ¶ 56: returning module 212 returns the second data result to the calling application 104 according to the first API. In one embodiment, the returning module 212 returns the second data result to the calling application 104 in response to the invocation by the calling application).
21. Regarding claim 4, Singh and Kocyan teach or suggest:
“wherein the one or more software workloads are to be identified by the first API based, at least in part, on performing a third API to launch the one or more software workloads”
(Singh, Fig. 10 and Col. 27, lines 15–21: requestor may also specify one or more cluster IDs as parameters to the Start Task application programming interface call to indicate into which clusters the task or tasks should be started. Similarly, the requestor may specify one or more container instance IDs as parameters to the StartTask application programming interface call to indicate into which container instances the task or tasks should be started;
Col. 28, lines 47–50: As noted, launching the container image or specified application into the container instance may include generating one or more task IDs for the tasks, storing the task IDs in a data store;
Kocyan, ¶ 56: returning module 212 returns the second data result to the calling application 104 according to the first API. In one embodiment, the returning module 212 returns the second data result to the calling application 104 in response to the invocation by the calling application).
22. Regarding claim 7, Singh and Kocyan teach or suggest:
“wherein the second API is to provide one or more output values indicating one or more statuses of the one or more software workloads based, at least in part, on performing the second API to terminate performance”
(Singh, Col. 29, line 65 to Col. 30, line 8: specified running task or tasks may be stopped and the resources previously allocated to the task or tasks may be freed/garbage collected … The requestor may also be notified that the task associated with the task ID has been successfully stopped;
Kocyan, ¶ 56: returning module 212 returns the second data result to the calling application 104 according to the first API. In one embodiment, the returning module 212 returns the second data result to the calling application 104 in response to the invocation by the calling application).
23. Regarding claims 8–11 and 14, they are the corresponding system claims reciting similar limitations of commensurate scope as the processor of claims 1–4 and 7, respectively. Therefore, they are rejected on the same basis as claims 1–4 and 7 above, including the following rationale:
Singh teaches or suggests: ”one or more processors and memory to store executable instructions that, if performed by the one or more processors, cause the one or more processors to…”
(Col. 41: code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors;
Claim 5: one or more processors; memory including instructions that, when executed by the one or more processors).
24. Regarding claims 15–17 and 20, they are the corresponding method claims reciting similar limitations of commensurate scope as the processor of claims 1–2, 4, and 7, respectively. Therefore, they are rejected on the same basis as claims 1–2, 4, and 7 above.
B.
25. Claims 5–6, 12–13, and 18–19 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Singh in view of (B) Kocyan, as applied to claims 1, 8, and 15 above, and further in view of (C) Harwood.
26. Regarding claim 5, Singh and Kocyan do not teach:
“wherein the one or more software workloads are performed using a high-performance computing system.”
(C) Harwood, in the context of Singh and Kocyan’s teachings, however teaches or suggests:
“wherein the one or more software workloads are performed using a high-performance computing system”
(Fig. 1 and ¶ 14: The hardware accelerator devices 166 include one or more types of hardware accelerator devices including, but not limited to, GPUs, FPGAs, ASICs, TPUs, IPUs, and other types of hardware accelerator devices and systems that are configured to support high-performance computing services provided by the accelerator service platform 130;
¶ 15: The accelerator APIs 162 provide libraries, drivers, pre-written code, classes, procedures, scripts, configuration data, etc., which (i) can be called or otherwise utilized by the accelerator devices 164 during execution of workloads (e.g., deep learning model training tasks) by the server nodes 160;
¶ 20: accelerator service platform 130 can be a private or public cloud computing system which implements an XaaS system to provide computing services to end-users or customers for HPC applications such as deep learning applications, machine learning).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (C) Harwood with those of Singh and Kocyan to request, schedule, and execute tasks on a high-performance computing environment. The motivation or advantage to do so is to provide for efficient distribution and execution of applications/tasks requiring computing servers with better performances (e.g. higher processing capability, better quality of service, reliability, etc.) and/or hardware resources optimized for specific functions (e.g. accelerators, XaaS).
27. Regarding claim 6, Harwood teaches or suggests:
“wherein the one or more software workloads are performed using one or more nodes
of a high-performance computing system”
(Fig. 1 and ¶ 14: The hardware accelerator devices 166 include one or more types of hardware accelerator devices including, but not limited to, GPUs, FPGAs, ASICs, TPUs, IPUs, and other types of hardware accelerator devices and systems that are configured to support high-performance computing services provided by the accelerator service platform 130;
¶ 15: The accelerator APIs 162 provide libraries, drivers, pre-written code, classes, procedures, scripts, configuration data, etc., which (i) can be called or otherwise utilized by the accelerator devices 164 during execution of workloads (e.g., deep learning model training tasks) by the server nodes 160;
¶ 20: accelerator service platform 130 can be a private or public cloud computing system which implements an XaaS system to provide computing services to end-users or customers for HPC applications such as deep learning applications, machine learning).
28. Regarding claims 12–13, they are the corresponding system claims reciting similar limitations of commensurate scope as the processor of claims 5–6, respectively. Therefore, they are rejected on the same basis as claims 5–6 above.
29. Regarding claim 18, Harwood teaches or suggests:
“wherein the one or more software workloads are performed using a deep-learning computing system”
(Fig. 1 and ¶ 14: The hardware accelerator devices 166 include one or more types of hardware accelerator devices including, but not limited to, GPUs, FPGAs, ASICs, TPUs, IPUs, and other types of hardware accelerator devices and systems that are configured to support high-performance computing services provided by the accelerator service platform 130;
¶ 15: The accelerator APIs 162 provide libraries, drivers, pre-written code, classes, procedures, scripts, configuration data, etc., which (i) can be called or otherwise utilized by the accelerator devices 164 during execution of workloads (e.g., deep learning model training tasks) by the server nodes 160;
¶ 20: accelerator service platform 130 can be a private or public cloud computing system which implements an XaaS system to provide computing services to end-users or customers for HPC applications such as deep learning applications, machine learning).
30. Regarding claim 19, Harwood teaches or suggests:
“wherein the one or more software workloads are performed using one or more nodes of a deep-learning computing system”
(Fig. 1 and ¶ 14: The hardware accelerator devices 166 include one or more types of hardware accelerator devices including, but not limited to, GPUs, FPGAs, ASICs, TPUs, IPUs, and other types of hardware accelerator devices and systems that are configured to support high-performance computing services provided by the accelerator service platform 130;
¶ 15: The accelerator APIs 162 provide libraries, drivers, pre-written code, classes, procedures, scripts, configuration data, etc., which (i) can be called or otherwise utilized by the accelerator devices 164 during execution of workloads (e.g., deep learning model training tasks) by the server nodes 160;
¶ 20: accelerator service platform 130 can be a private or public cloud computing system which implements an XaaS system to provide computing services to end-users or customers for HPC applications such as deep learning applications, machine learning).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN C WU whose telephone number is (571)270-5906. The examiner can normally be reached Monday through Friday, 8:30 A.M. to 5:00 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J. Li can be reached on (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BENJAMIN C WU/Primary Examiner, Art Unit 2195
December 6, 2025