DETAILED ACTION
1. This Office Action is taken in response to Applicants’ Amendments and Remarks filed on 3/4/2026 regarding application 18/086,429 filed on 12/21/2022.
Claims 1-20 are pending for consideration.
2. Response to Amendments and Remarks
Applicants’ Remarks have been fully and carefully considered, with the Examiner’s response set forth below.
(1) In response to the remarks, an updated claim analysis has been made with newly identified reference(s). Refer to the corresponding sections of the following Office Action for details.
3. Examiner’s Note
(1) In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution. MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” Amendments not pointing to specific support in the disclosure may be deemed not to comply with the provisions of 37 C.F.R. 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient.
(2) Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, Applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the Examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kerr et al. (US Patent Application Publication 2021/0124582, hereinafter Kerr) in view of Sun et al. (US Patent Application Publication 2024/0143323, hereinafter Sun).
As to claim 1, Kerr teaches One or more processors [as shown in figures 8, 9A, 10A, 10B, and 10C; FIG. 8 illustrates a parallel processing unit (PPU) 300, in accordance with an embodiment. In an embodiment, the PPU 300 is a multi-threaded processor that is implemented on one or more integrated circuit devices. The PPU 300 is a latency hiding architecture designed to process many threads in parallel. A thread (e.g., a thread of execution) is an instantiation of a set of instructions configured to be executed by the PPU 300. In an embodiment, the PPU 300 is a graphics processing unit (GPU) configured to implement a graphics rendering pipeline for processing three-dimensional (3D) graphics data in order to generate two-dimensional (2D) image data for display on a display device such as a liquid crystal display (LCD) device. In other embodiments, the PPU 300 may be utilized for performing general-purpose computations (¶ 0130); Sun also teaches this limitation – processor, figure 7, 702] comprising: circuitry [as shown in figures 8, 9A, 10A, 10B, and 10C; Sun also teaches this limitation – as shown in figure 7] in response to an application programming interface (API) call [In an embodiment, a host processor executes a driver kernel that implements an application programming interface (API) that enables one or more applications executing on the host processor to schedule operations for execution on the PPU 300 … An application may generate instructions (e.g., API calls) that cause the driver kernel to generate one or more tasks for execution by the PPU 300 … Cooperating threads may refer to a plurality of threads including instructions to perform the task and that may exchange data through shared memory. Threads and cooperating threads are described in more detail in conjunction with FIG. 
5A (¶ 0141); Cooperative Groups is a programming model for organizing groups of communicating threads that allows developers to express the granularity at which threads are communicating, enabling the expression of richer, more efficient parallel decompositions. Cooperative launch APIs support synchronization amongst thread blocks for the execution of parallel algorithms. Conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (e.g., the syncthreads( ) function). However, programmers would often like to define groups of threads at smaller than thread block granularities and synchronize within the defined groups to enable greater performance, design flexibility, and software reuse in the form of collective group-wide function interfaces … (¶ 0175-0176);
Sun also teaches this limitation – API operations for asynchronous reduction operations as shown in figures 1A, 1B, 1C, 4, and 6] cause one or more asynchronous reduction operations [Kerr teaches reducing conflicting requests, which typically are caused by asynchronous activities, by synchronizing a group of threads -- FIG. 6 shows an example representation of how data retrieved from main or global memory can be laid out in shared memory to reduce conflicting requests (¶ 0020); … The data is “swizzled” as it is stored into shared memory to allow subsequent conflict-free access … (¶ 0044); As will be discussed in more detail below, example non-limiting embodiments allow for completion of the instruction to be tracked as an asynchronous copy/DMA independent of other memory operations … (¶ 0066); Each executing block of threads may have an allocated portion of the shared memory 574. The shared memory 574 is a software managed cache used to load data from global memory so that the number of off-chip memory accesses by the executing threads is reduced. The software explicitly allocates and accesses the shared memory 574. Threads in a thread block are synchronized (e.g., after cooperatively loading data from global memory into shared memory) to avoid critical resource use conflicts (¶ 0054); Data can be stored into the shared memory banks such that successive words of a returned cache line map to successive banks. FIG. 6 shows an example representation of how data from global memory can be laid out in shared memory to reduce conflicting requests … (¶ 0093); Cooperative Groups is a programming model for organizing groups of communicating threads that allows developers to express the granularity at which threads are communicating, enabling the expression of richer, more efficient parallel decompositions. Cooperative launch APIs support synchronization amongst thread blocks for the execution of parallel algorithms. 
Conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (e.g., the syncthreads( ) function). However, programmers would often like to define groups of threads at smaller than thread block granularities and synchronize within the defined groups to enable greater performance, design flexibility, and software reuse in the form of collective group-wide function interfaces … (¶ 0175-0176);
Sun teaches asynchronous reduction operations by combining data objects and reducing the number of data objects – API operations for asynchronous reduction operations as shown in figures 1A, 1B, 1C, 4, and 6, where the transactions include authentication, rate-limiting, logging, caching, and transformations; The disclosed technology describes example embodiments of a domain-specific programming language that are implemented, interpreted, and used to route messages (e.g., API requests, API responses) within a distributed microservices network, a distributed computing platform, a service-oriented system, and/or the like … In particular, routing refers to a core tenet of a microservice architecture, where external clients invoke certain microservices indirectly via a gateway and/or an exposed node of the microservice architecture. For example, an external client transmits an API request that identifies a particular microservice, service, application, and/or the like. The API request is received by a gateway and/or an exposed node, which then forwards the API request to the identified microservice, service, application, and/or the like … (¶ 0012-0013); FIG. 1A illustrates an example implementation with multiple APIs having functionalities common to one another. As shown in FIG. 1A, a client 102 is associated with APIs 104A, 104B, 104C, 104D, and 104E. Each API has a standard set of features or functionalities associated with it. For example, the standard set of functionalities associated with API 104A are “authentication” and “transformations.” The standard set of functionalities associated with API 104B are “authentication,” “rate-limiting,” “logging,” “caching,” and “transformations” … (¶ 0024-0027); For example, if multiple routing data objects result in the same service and plug-in config being used, the multiple routing data objects are combinable into a single routing data object.
In particular, respective match expressions included in each of the multiple routing data objects are combinable by a logical OR operator to form a single match expression for a single routing data object. In some embodiments, the API gateway 400 is configured to identify combinable routing data objects and to recommend and/or automatically cause combination of the combinable routing data objects. As such, a number of routing data objects that are evaluated by the gateway is minimized, which improves matching performance at runtime (¶ 0082); The method of claim 1, further comprising reducing a number of routing data objects included in the plurality of routing data objects based on: determining that two routing data objects of the plurality of routing data objects are combinable, and defining a combined routing data object to replace the two routing data objects, wherein the combined routing data object includes a combined match expression that includes a logical combination, via an OR Boolean operator, of respective match expressions of the two routing data objects (claim 7)] based at least in part, on one or more input parameters to the API indicating a reduction operation to be performed [The graphics processing pipeline 600 may be implemented via an application executed by a host processor, such as a CPU. In an embodiment, a device driver may implement an application programming interface (API) that defines various functions that can be utilized by an application in order to generate graphical data for display. The device driver is a software program that includes a plurality of instructions that control the operation of the PPU 300. The API provides an abstraction for a programmer that lets a programmer utilize specialized graphics hardware, such as the PPU 300, to generate the graphical data without requiring the programmer to utilize the specific instruction set for the PPU 300. 
The application may include an API call that is routed to the device driver for the PPU 300. The device driver interprets the API call and performs various operations to respond to the API call. In some instances, the device driver may perform operations by executing instructions on the CPU. In other instances, the device driver may perform operations, at least in part, by launching operations on the PPU 300 utilizing an input/output interface between the CPU and the PPU 300. In an embodiment, the device driver is configured to implement the graphics processing pipeline 600 utilizing the hardware of the PPU 300 (¶ 0161);
Sun more expressly teaches indicating a reduction operation to be performed -- Disclosed embodiments relate to implementation and interpretation of an application programming interface (API) routing domain-specific programming language (DSL). The API routing DSL improves storage efficiency of routing definitions and rules, reduces errors for API messages unmatched to routes, and improves runtime performance for API message routing. In example embodiments, a routing data object configured according to the API routing DSL includes a match expression. The match expression is a logical combination of one or more attribute condition statements that each describe a relational comparison between an API message attribute and a specified value. Evaluation of the match expression as logically true using attributes of a given API message indicates that the given API messages matches the routing data object. The given API message is then routed according to an endpoint and/or policies associated with the routing data object (abstract); In some embodiments, aspects of the routing DSL include a set of combinatorial operators each configured to logically combine two (or more) attribute condition statements to form a match expression that results in one logical output. For example, the combinatorial operators include Boolean operators, parenthesis, and/or the like (¶ 0100); … For example, the routing data objects include match expressions that include a logical combination of one or more attribute condition statements. Each routing data object is associated with an upstream API and includes a match expression.
The match expression includes a logical combination of one or more attribute condition statements … (¶ 0104); The method of claim 1, further comprising reducing a number of routing data objects included in the plurality of routing data objects based on: determining that two routing data objects of the plurality of routing data objects are combinable, and defining a combined routing data object to replace the two routing data objects, wherein the combined routing data object includes a combined match expression that includes a logical combination, via an OR Boolean operator, of respective match expressions of the two routing data objects (claim 7)], wherein the one or more asynchronous reduction operations combine multiple data to fewer data [this limitation is taught by Sun – API operations for asynchronous reduction operations as shown in figures 1A, 1B, 1C, 4, and 6; For example, if multiple routing data objects result in the same service and plug-in config being used, the multiple routing data objects are combinable into a single routing data object. In particular, respective match expressions included in each of the multiple routing data objects are combinable by a logical OR operator to form a single match expression for a single routing data object. In some embodiments, the API gateway 400 is configured to identify combinable routing data objects and to recommend and/or automatically cause combination of the combinable routing data objects. 
As such, a number of routing data objects that are evaluated by the gateway is minimized, which improves matching performance at runtime (¶ 0082); A method for routing application programming interface (API) requests within a microservices architecture, the method comprising: receiving an API request that includes a plurality of message attributes; accessing a plurality of routing data objects, each routing data object being associated with an upstream API of the microservices architecture and including a match expression … (claim 1); The method of claim 1, further comprising reducing a number of routing data objects included in the plurality of routing data objects based on: determining that two routing data objects of the plurality of routing data objects are combinable, and defining a combined routing data object to replace the two routing data objects, wherein the combined routing data object includes a combined match expression that includes a logical combination, via an OR Boolean operator, of respective match expressions of the two routing data objects (claim 7)], and wherein the circuitry, at least in part, uses manual transaction accounting comprising one or more functions that track when a count of the performed one or more asynchronous reduction operations equals an expected count of the one or more asynchronous reduction operations to determine when the one or more asynchronous reduction operations are complete [this limitation is taught by Sun – as shown in figure 6, where steps 602-608 perform the asynchronous reduction operations, and when the asynchronous reduction operations are completed in step 610, the reduced API message is routed; The disclosed technology describes example embodiments of a domain-specific programming language that are implemented, interpreted, and used to route messages (e.g., API requests, API responses) within a distributed microservices network, a distributed computing platform, a service-oriented system, and/or the like … 
In particular, routing refers to a core tenet of a microservice architecture, where external clients invoke certain microservices indirectly via a gateway and/or an exposed node of the microservice architecture. For example, an external client transmits an API request that identifies a particular microservice, service, application, and/or the like. The API request is received by a gateway and/or an exposed node, which then forwards the API request to the identified microservice, service, application, and/or the like … (¶ 0012-0013); FIG. 1A illustrates an example implementation with multiple APIs having functionalities common to one another. As shown in FIG. 1A, a client 102 is associated with APIs 104A, 104B, 104C, 104D, and 104E. Each API has a standard set of features or functionalities associated with it. For example, the standard set of functionalities associated with API 104A are “authentication” and “transformations.” The standard set of functionalities associated with API 104B are “authentication,” “rate-limiting,” “logging,” “caching,” and “transformations” … (¶ 0024-0027); Beyond user creation of routing data objects, the API gateway 400 provides other administrative functionality relating to routes and routing data objects. In some embodiments, the API gateway 400 enables an authorized user (e.g., an administrator) to view a specific routing data object. When created, routing data objects are assigned with a unique identifier (e.g., an incrementing number, a globally unique identifier, a universally unique identifier, a universal resource identifier or locator, a name string). As such, an authorized user is able to make a request that include the unique identifier for a specific routing data object to view the routing data object. 
In some examples, the authorized user makes a HTTP GET request to the URL of a specific routing data object, and the API gateway 400 indicates (e.g., causes display at the user client) the match expression of the specific routing data object (¶ 0074); FIG. 6 is a flowchart illustrating a method of routing API messages (e.g., API requests, API responses) within a microservices architecture
… At 602, the API gateway receives an API message that includes a plurality of attributes … At 604, the API gateway accesses a plurality of routing data objects, for example, stored in a data store accessible by the gateway. The routing data objects are configured according to the routing DSL … At 606, the API gateway performs at least one matching operation between the API message and the plurality of routing data objects … At 608, the API gateway identifies a particular routing data object based on the at least one matching operation … At 610, the API gateway routes the API message according to the particular routing data object. For example, based on the API message being an API request, the gateway routes the API message to the service and/or upstream API associated with the particular routing data object … (¶ 0102-0107);
The method of claim 1, further comprising reducing a number of routing data objects included in the plurality of routing data objects based on: determining that two routing data objects of the plurality of routing data objects are combinable, and defining a combined routing data object to replace the two routing data objects, wherein the combined routing data object includes a combined match expression that includes a logical combination, via an OR Boolean operator, of respective match expressions of the two routing data objects (claim 7)].
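For illustrative purposes only, and not as part of the cited record, the barrier-style synchronization of cooperating threads described in Kerr (¶ 0054, 0175-0176) can be sketched in host-side code. The variable names and the use of Python's threading.Barrier are illustrative assumptions standing in for the GPU-side syncthreads( ) construct and shared memory:

```python
import threading

# Illustrative sketch only: cooperating "threads" each reduce a slice of
# shared data, then synchronize at a barrier before the combined result
# is read, analogous to the barrier-across-a-thread-block construct
# (e.g., the syncthreads() function) described in Kerr at ¶ 0175-0176.
data = list(range(8))           # data cooperatively loaded into "shared memory"
partial = [0, 0]                # per-thread partial sums
barrier = threading.Barrier(2)  # synchronize the group of cooperating threads

def worker(group_id):
    lo, hi = (0, 4) if group_id == 0 else (4, 8)
    partial[group_id] = sum(data[lo:hi])  # each thread reduces its own slice
    barrier.wait()                        # group-wide synchronization point

threads = [threading.Thread(target=worker, args=(g,)) for g in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(partial)  # combine partial results after synchronization
print(total)          # 0+1+...+7 = 28
```

The barrier ensures no thread reads the combined result before every cooperating thread has contributed its partial reduction, which is the resource-conflict avoidance Kerr describes at ¶ 0054.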
Regarding claim 1, Kerr does not expressly teach the one or more asynchronous reduction operations combine multiple data to fewer data.
However, Sun specifically teaches the one or more asynchronous reduction operations combine multiple data to fewer data [API operations for asynchronous reduction operations as shown in figures 1A, 1B, 1C, 4, and 6; For example, if multiple routing data objects result in the same service and plug-in config being used, the multiple routing data objects are combinable into a single routing data object. In particular, respective match expressions included in each of the multiple routing data objects are combinable by a logical OR operator to form a single match expression for a single routing data object. In some embodiments, the API gateway 400 is configured to identify combinable routing data objects and to recommend and/or automatically cause combination of the combinable routing data objects. As such, a number of routing data objects that are evaluated by the gateway is minimized, which improves matching performance at runtime (¶ 0082); A method for routing application programming interface (API) requests within a microservices architecture, the method comprising: receiving an API request that includes a plurality of message attributes; accessing a plurality of routing data objects, each routing data object being associated with an upstream API of the microservices architecture and including a match expression … (claim 1); The method of claim 1, further comprising reducing a number of routing data objects included in the plurality of routing data objects based on: determining that two routing data objects of the plurality of routing data objects are combinable, and defining a combined routing data object to replace the two routing data objects, wherein the combined routing data object includes a combined match expression that includes a logical combination, via an OR Boolean operator, of respective match expressions of the two routing data objects (claim 7)].
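For illustrative purposes only, the combination of multiple routing data objects into fewer objects via a logical OR of their match expressions, as Sun describes (¶ 0082; claim 7), can be sketched as follows. The function names and data-object layout are hypothetical and do not appear in the reference:

```python
# Hypothetical sketch of Sun's combination of routing data objects:
# objects that resolve to the same service are merged into a single
# object whose match expression is the logical OR of the originals,
# reducing multiple data objects to fewer (¶ 0082; claim 7).
def make_route(service, match):
    # match is a predicate over an API message's attributes
    return {"service": service, "match": match}

def combine_routes(routes):
    """Merge routes targeting the same service by OR-ing their match expressions."""
    by_service = {}
    for route in routes:
        by_service.setdefault(route["service"], []).append(route["match"])
    return [
        make_route(service,
                   lambda msg, ms=matches: any(m(msg) for m in ms))
        for service, matches in by_service.items()
    ]

routes = [
    make_route("billing", lambda msg: msg.get("path") == "/invoices"),
    make_route("billing", lambda msg: msg.get("path") == "/payments"),
    make_route("users",   lambda msg: msg.get("path") == "/users"),
]
combined = combine_routes(routes)
print(len(combined))  # three routing data objects reduced to two
```

Because fewer routing data objects must be evaluated per message, matching performance at runtime improves, which is the benefit Sun states at ¶ 0082.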
In addition, Sun also expressly teaches wherein the circuitry, at least in part, uses manual transaction accounting comprising one or more functions that track when a count of the performed one or more asynchronous reduction operations equals an expected count of the one or more asynchronous reduction operations to determine when the one or more asynchronous reduction operations are complete [as shown in figure 6, where steps 602-608 perform the asynchronous reduction operations, and when the asynchronous reduction operations are completed in step 610, the reduced API message is routed; The disclosed technology describes example embodiments of a domain-specific programming language that are implemented, interpreted, and used to route messages (e.g., API requests, API responses) within a distributed microservices network, a distributed computing platform, a service-oriented system, and/or the like … In particular, routing refers to a core tenet of a microservice architecture, where external clients invoke certain microservices indirectly via a gateway and/or an exposed node of the microservice architecture. For example, an external client transmits an API request that identifies a particular microservice, service, application, and/or the like. The API request is received by a gateway and/or an exposed node, which then forwards the API request to the identified microservice, service, application, and/or the like … (¶ 0012-0013); FIG. 1A illustrates an example implementation with multiple APIs having functionalities common to one another. As shown in FIG. 1A, a client 102 is associated with APIs 104A, 104B, 104C, 104D, and 104E. Each API has a standard set of features or functionalities associated with it.
For example, the standard set of functionalities associated with API 104A are “authentication” and “transformations.” The standard set of functionalities associated with API 104B are “authentication,” “rate-limiting,” “logging,” “caching,” and “transformations” … (¶ 0024-0027); Beyond user creation of routing data objects, the API gateway 400 provides other administrative functionality relating to routes and routing data objects. In some embodiments, the API gateway 400 enables an authorized user (e.g., an administrator) to view a specific routing data object. When created, routing data objects are assigned with a unique identifier (e.g., an incrementing number, a globally unique identifier, a universally unique identifier, a universal resource identifier or locator, a name string). As such, an authorized user is able to make a request that include the unique identifier for a specific routing data object to view the routing data object. In some examples, the authorized user makes a HTTP GET request to the URL of a specific routing data object, and the API gateway 400 indicates (e.g., causes display at the user client) the match expression of the specific routing data object (¶ 0074); FIG. 6 is a flowchart illustrating a method of routing API messages (e.g., API requests, API responses) within a microservices architecture … At 602, the API gateway receives an API message that includes a plurality of attributes … At 604, the API gateway accesses a plurality of routing data objects, for example, stored in a data store accessible by the gateway. The routing data objects are configured according to the routing DSL … At 606, the API gateway performs at least one matching operation between the API message and the plurality of routing data objects … At 608, the API gateway identifies a particular routing data object based on the at least one matching operation … At 610, the API gateway routes the API message according to the particular routing data object. 
For example, based on the API message being an API request, the gateway routes the API message to the service and/or upstream API associated with the particular routing data object … (¶ 0102-0107); The method of claim 1, further comprising reducing a number of routing data objects included in the plurality of routing data objects based on: determining that two routing data objects of the plurality of routing data objects are combinable, and defining a combined routing data object to replace the two routing data objects, wherein the combined routing data object includes a combined match expression that includes a logical combination, via an OR Boolean operator, of respective match expressions of the two routing data objects (claim 7)].
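For illustrative purposes only, the claimed manual transaction accounting, i.e., tracking when a count of performed asynchronous reduction operations equals an expected count to determine completion, can be sketched as follows. The class and function names are hypothetical and appear in neither reference:

```python
import asyncio

# Illustrative sketch only: a counter records each completed asynchronous
# reduction, and completion is determined when the performed count equals
# the expected count, paralleling the claim 1 limitation at issue.
class TransactionAccounting:
    def __init__(self, expected_count):
        self.expected_count = expected_count
        self.performed_count = 0

    def record(self):
        self.performed_count += 1  # one reduction transaction finished

    def complete(self):
        # reductions are complete only when performed equals expected
        return self.performed_count == self.expected_count

async def reduce_pair(acct, a, b):
    await asyncio.sleep(0)  # stand-in for asynchronous latency
    acct.record()
    return a + b            # two data combined into fewer (one) datum

async def main():
    pairs = [(1, 2), (3, 4), (5, 6)]
    acct = TransactionAccounting(expected_count=len(pairs))
    results = await asyncio.gather(*(reduce_pair(acct, a, b) for a, b in pairs))
    assert acct.complete()  # performed count now equals expected count
    return results

results = asyncio.run(main())
print(results)  # [3, 7, 11]
```

Only once the accounting reports completion is the reduced result consumed, analogous to Sun's figure 6 flow in which routing at step 610 follows the operations of steps 602-608.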
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine multiple data to fewer data for asynchronous reduction operations, as specifically demonstrated by Sun, and to incorporate this teaching into the existing scheme disclosed by Kerr, because Sun teaches that doing so improves runtime performance for API message routing [Disclosed embodiments relate to implementation and interpretation of an application programming interface (API) routing domain-specific programming language (DSL). The API routing DSL improves storage efficiency of routing definitions and rules, reduces errors for API messages unmatched to routes, and improves runtime performance for API message routing … (abstract)].
As to claim 2, Kerr in view of Sun teaches The one or more processors of claim 1, wherein the one or more asynchronous reduction operations are to be performed by a graphics processing unit (GPU) [Kerr -- In an embodiment, the PPU 300 comprises a graphics processing unit (GPU). The PPU 300 is configured to receive commands that specify shader programs for processing graphics data … An application writes model data for a scene (e.g., a collection of vertices and attributes) to a memory such as a system memory or memory 304. The model data defines each of the objects that may be visible on a display. The application then makes an API call to the driver kernel that requests the model data to be rendered and displayed … (¶ 0146-0147)].
As to claim 3, Kerr in view of Sun teaches The one or more processors of claim 2, wherein the asynchronous reduction operations are performed on information that includes data from a first memory of the GPU and a second memory of the GPU [Kerr -- Modern processing chip(s) such as GPUs typically contain significant amounts of memory near the parallel processors—reducing memory access latency. For example, as of this filing, some NVIDIA GPUs contain on the order of 12 GB or more of local on-chip high bandwidth memory (including e.g., 4 GB of cache/shared memory) to serve over 5000 cores operating in parallel (¶ 0005); FIGS. 1A-3B illustrates an example non-limiting parallel processing architecture showing software controlled transfer of data from the global memory 511 to shared memory 574 accessible by a plurality of functional units 512.sub.0-512.sub.N … (¶ 0050); In an embodiment, the PPU 300 comprises a graphics processing unit (GPU). The PPU 300 is configured to receive commands that specify shader programs for processing graphics data … An application writes model data for a scene (e.g., a collection of vertices and attributes) to a memory such as a system memory or memory 304. The model data defines each of the objects that may be visible on a display. The application then makes an API call to the driver kernel that requests the model data to be rendered and displayed … (¶ 0146-0147)].
As to claim 4, Kerr in view of Sun teaches The one or more processors of claim 1, wherein the one or more asynchronous reduction operations are to move information between shared memory of a graphics processing unit (GPU) and global memory of the GPU [Kerr -- A technique for block data transfer is disclosed that reduces data transfer and memory access overheads and significantly reduces multiprocessor activity and energy consumption. Threads executing on a multiprocessor needing data stored in global memory can request and store the needed data in on-chip shared memory, which can be accessed by the threads multiple times. The data can be loaded from global memory and stored in shared memory using an instruction which directs the data into the shared memory without storing the data in registers and/or cache memory of the multiprocessor during the data transfer (abstract)].
As to claim 5, Kerr in view of Sun teaches The one or more processors of claim 1, wherein the API is to receive one or more inputs indicating a source memory location and a destination memory location of the one or more asynchronous reduction operations [Kerr -- Example non-limiting embodiments provide a fused load and store instruction (LDGSTS) which can load data from the global memory (LDG) and store the data into the shared memory (STS) bypasses the processor core register … The LDGSTS.ACCESS and LDGSTS.BYPASS instructions may include two address operands, a destination shared memory address and a source global address … (¶ 0040); To optimize loading data into shared memory, example non-limiting embodiments eliminate the need to stage data through SM's register file. In example non-limiting embodiments, this is implemented as a single LDGSTS.ACCESS instruction, with two address operands, a destination shared memory address and a source global address (¶ 0064)].
As to claim 6, Kerr in view of Sun teaches The one or more processors of claim 1, wherein the API is to receive information indicating a shape of information corresponding to the one or more asynchronous reduction operations [Kerr -- … In an embodiment, the PPU 300 is a graphics processing unit (GPU) configured to implement a graphics rendering pipeline for processing three-dimensional (3D) graphics data in order to generate two-dimensional (2D) image data for display on a display device such as a liquid crystal display (LCD) device … (¶ 0130); In an embodiment, a host processor executes a driver kernel that implements an application programming interface (API) that enables one or more applications executing on the host processor to schedule operations for execution on the PPU 300 … (¶ 0141)].
As to claim 7, Kerr in view of Sun teaches The one or more processors of claim 1, wherein the API is to provide an indication of whether a type of hardware unit is used to perform the one or more asynchronous reduction operations [Kerr -- An application writes model data for a scene (e.g., a collection of vertices and attributes) to a memory such as a system memory or memory 304. The model data defines each of the objects that may be visible on a display. The application then makes an API call to the driver kernel that requests the model data to be rendered and displayed … In an embodiment, the graphics processing pipeline 600 may represent a graphics processing pipeline defined by the OpenGL® API. As an option, the graphics processing pipeline 600 may be implemented in the context of the functionality and architecture of the previous Figures and/or any subsequent Figure(s) (¶ 0147-0148); … The API provides an abstraction for a programmer that lets a programmer utilize specialized graphics hardware, such as the PPU 300, to generate the graphical data without requiring the programmer to utilize the specific instruction set for the PPU 300 … (¶ 0161)].
As to claim 8, it recites substantially the same limitations as in claim 1, and is rejected for the same reasons set forth in the analysis of claim 1. Refer to “As to claim 1” presented earlier in this Office Action for details.
As to claim 9, it recites substantially the same limitations as in claim 2, and is rejected for the same reasons set forth in the analysis of claim 2. Refer to “As to claim 2” presented earlier in this Office Action for details.
As to claim 10, it recites substantially the same limitations as in claim 2, and is rejected for the same reasons set forth in the analysis of claim 2. Refer to “As to claim 2” presented earlier in this Office Action for details.
As to claim 11, it recites substantially the same limitations as in claim 3, and is rejected for the same reasons set forth in the analysis of claim 3. Refer to “As to claim 3” presented earlier in this Office Action for details.
As to claim 12, Kerr in view of Sun teaches The system of claim 8, wherein the API is to receive as input an identifier of a synchronization object [Kerr -- Cooperative Groups is a programming model for organizing groups of communicating threads that allows developers to express the granularity at which threads are communicating, enabling the expression of richer, more efficient parallel decompositions. Cooperative launch APIs support synchronization amongst thread blocks for the execution of parallel algorithms. Conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (e.g., the syncthreads( ) function). However, programmers would often like to define groups of threads at smaller than thread block granularities and synchronize within the defined groups to enable greater performance, design flexibility, and software reuse in the form of collective group-wide function interfaces … (¶ 0175-0176);
Sun -- Beyond user creation of routing data objects, the API gateway 400 provides other administrative functionality relating to routes and routing data objects. In some embodiments, the API gateway 400 enables an authorized user (e.g., an administrator) to view a specific routing data object. When created, routing data objects are assigned with a unique identifier (e.g., an incrementing number, a globally unique identifier, a universally unique identifier, a universal resource identifier or locator, a name string). As such, an authorized user is able to make a request that include the unique identifier for a specific routing data object to view the routing data object. In some examples, the authorized user makes a HTTP GET request to the URL of a specific routing data object, and the API gateway 400 indicates (e.g., causes display at the user client) the match expression of the specific routing data object (¶ 0074)].
As to claim 13, it recites substantially the same limitations as in claim 2, and is rejected for the same reasons set forth in the analysis of claim 2. Refer to “As to claim 2” presented earlier in this Office Action for details.
As to claim 14, it recites substantially the same limitations as in claim 1, and is rejected for the same reasons set forth in the analysis of claim 1. Refer to “As to claim 1” presented earlier in this Office Action for details.
As to claim 15, it recites substantially the same limitations as in claim 2, and is rejected for the same reasons set forth in the analysis of claim 2. Refer to “As to claim 2” presented earlier in this Office Action for details.
As to claim 16, it recites substantially the same limitations as in claim 2, and is rejected for the same reasons set forth in the analysis of claim 2. Refer to “As to claim 2” presented earlier in this Office Action for details.
As to claim 17, it recites substantially the same limitations as in claim 7, and is rejected for the same reasons set forth in the analysis of claim 7. Refer to “As to claim 7” presented earlier in this Office Action for details.
As to claim 18, it recites substantially the same limitations as in claim 12, and is rejected for the same reasons set forth in the analysis of claim 12. Refer to “As to claim 12” presented earlier in this Office Action for details.
As to claim 19, Kerr in view of Sun teaches The method of claim 14, wherein the API is to receive as input information indicating a plurality of characteristics of data to be transformed [Kerr -- An application writes model data for a scene (e.g., a collection of vertices and attributes) to a memory such as a system memory or memory 304. The model data defines each of the objects that may be visible on a display. The application then makes an API call to the driver kernel that requests the model data to be rendered and displayed … In an embodiment, the graphics processing pipeline 600 may represent a graphics processing pipeline defined by the OpenGL® API. As an option, the graphics processing pipeline 600 may be implemented in the context of the functionality and architecture of the previous Figures and/or any subsequent Figure(s) (¶ 0147-0148); … The API provides an abstraction for a programmer that lets a programmer utilize specialized graphics hardware, such as the PPU 300, to generate the graphical data without requiring the programmer to utilize the specific instruction set for the PPU 300 … (¶ 0161);
Sun -- … For example, the routing data objects include match expressions that include a logical combination of one or more attribute condition statements. Each routing data object is associated with an upstream API and includes a match expression. The match expression includes a logical combination of one or more attribute condition statements … (¶ 0104)].
As to claim 20, Kerr in view of Sun teaches A non-transitory computer-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least perform the method of claim 14 [Kerr -- Computer programs, or computer control logic algorithms, may be stored in the main memory 540 and/or the secondary storage. Such computer programs, when executed, enable the system 565 to perform various functions. The memory 540, the storage, and/or any other storage are possible examples of computer-readable media (¶ 0203);
Sun -- processor, FIG. 7, 702].
Conclusion
5. Claims 1-20 are rejected as explained above.
6. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHENG JEN TSAI whose telephone number is 571-272-4244. The examiner can normally be reached on Monday-Friday, 9-6.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon, can be reached at 571-272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/SHENG JEN TSAI/Primary Examiner, Art Unit 2136