DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/18/2025 has been entered.
Status of Claims
Claims 1-16 are currently pending in the present application.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-5, 7, 9-10, 12-13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Bhattacharjee et al. (US PGPUB No. 2019/0272271; Pub. Date: Sep. 5, 2019) in view of Chandramouli et al. (US PGPUB No. 2010/0030896; Pub. Date: Feb. 4, 2010).
Regarding independent claim 1,
Bhattacharjee discloses a database system comprising: a plurality of computing device clusters, wherein a computing device cluster of the plurality of computing device clusters includes a plurality of computing devices, See FIG. 1A & Paragraph [0188], (Disclosing a system for assigning a processing task from one component of a data intake and query system to a different component of said system. FIG. 1A illustrates data intake and query system 16 which comprises worker nodes 14-1, 14-2 and is operatively coupled with external data systems 12-1, 12-2, i.e. a database system comprising: a plurality of computing device clusters, wherein a computing device cluster of the plurality of computing device clusters includes a plurality of computing devices.)
wherein a computing device of the plurality of computing devices includes a plurality of computing nodes, See FIG. 1A & Paragraph [0188], (FIG. 1A illustrates data intake and query system 16 which comprises worker nodes 14-1, 14-2, i.e. wherein a computing device of the plurality of computing devices includes a plurality of computing nodes.)
wherein a computing node of the plurality of computing nodes includes a plurality of processing core resources and a scheduler, See FIG. 2 & Paragraph [1313], (Search head 210 manages execution or compute resource availability for one or more portions of a submitted query being executed by the system. In FIG. 2, data intake and query system 108 comprises a search head 210 operatively coupled with a plurality of indexers, forwarders, and data stores, i.e. wherein a computing node of the plurality of computing nodes (e.g. the data intake and query system comprises search head 210 and a plurality of execution resources such as worker nodes) includes a plurality of processing core resources (e.g. worker nodes/indexer components are used to execute query tasks assigned by search head 210) and a scheduler (e.g. Note [1316] wherein search head 210 is described as being configured to schedule queries/portions of queries for execution).)
wherein the scheduler is operable to: identify a plurality of queries assigned to a first processing core resource of the plurality of processing core resources for concurrent execution; See Paragraph [0735], (Query coordinator 3304 may generate tasks and instructions for particular nodes to be assigned to a particular search process. As new queries are received, the system can determine a query-resource allocation or amount of execution resources required to execute the query.) See Paragraph [1282], (Queries are scheduled based on a determined query-resource allocation and the availability of execution resources, i.e. wherein the scheduler is operable to: identify a plurality of queries assigned to a first processing core resource of the plurality of processing core resources (e.g. as new queries are received, the system may allocate execution resources such as computing nodes to execute the queries) for concurrent execution (e.g. Note [1343] wherein a query and/or parts of a query may be performed concurrently).)
for each of the plurality of queries, identifying a set of operators to produce a plurality of operators for execution by the first processing core resource, wherein the set of operators includes one or more operators; See Paragraph [0344], (Search head 210 may receive a search query which is then analyzed to determine what portions of the query can be delegated to indexers and what portions of the query can be executed locally.) See Paragraph [1126], (Query coordinator 3304 may optimize scheduling or assignment of a query by assigning a worker node or set of worker nodes to execute portions of queries, i.e. for each of the plurality of queries (e.g. each query provided to the system is processed via the method of FIG. 6A), identifying a set of operators to produce a plurality of operators for execution by the first processing core resource, wherein the set of operators includes one or more operators (e.g. a query may comprise a plurality of executable portions whose execution is managed by components such as the search head and query coordinator).)
prioritizing execution of the plurality of operators based on priority values corresponding to a query priority of the plurality of operators to produce a schedule of operator execution; See Paragraph [1236], (Query priority level may be determined based on a number of records ingested by one or more worker nodes from one or more indexers, an indication by a user entering the query, a user identifier, time of day, etc.) See Paragraph [1258], (Search head 210 can allocate compute resources based on a priority level or prioritization factor of the query. A priority level may be based on one or more factors such as a user identifier, query size, user indication, etc. Different amounts of resources may be allocated to process tasks within a particular size category depending on the priority level, i.e. prioritizing execution of the plurality of operators based on query priority (e.g. query priority levels are determined based on one or more factors such as those of [1258]) to produce a schedule of operator execution (e.g. Note [1313] wherein the system may schedule queries for execution according to resource availability and priority levels associated with individual queries).)
The examiner notes that Bhattacharjee does not explicitly disclose a priority metric for individual operators.
and enable execution of the plurality of operators by the first processing core resource in accordance with the schedule of operator execution. See Paragraph [1126] Query coordinator 3304 may optimize scheduling or assignment of a query by assigning a worker node or set of worker nodes to execute a portion of a query depending on the execution objectives, which may include reducing execution time, reducing bandwidth, etc., i.e. enable execution of the plurality of operators by the first processing core resource in accordance with the schedule of operator execution.)
Bhattacharjee does not disclose the step of prioritizing execution of the plurality of operators based on priority values corresponding to a[n] operator priority of the plurality of operators to produce a schedule of operator execution;
Chandramouli discloses the step of prioritizing execution of the plurality of operators based on priority values corresponding to a[n] operator priority of the plurality of operators to produce a schedule of operator execution; See Paragraph [0102], (Disclosing a system for providing a query optimizer configured to minimize latency during query execution. FIG. 1 illustrates a query optimizer component 100 capable of scheduling events for execution by corresponding operators on particular nodes of a datacenter. A scheduling policy is applied to the processing of events based on operator batching and operator priority.) See Paragraph [0161], (The system achieves stimulus time scheduling using priority queues ordered by stimulus time. Note [0007] wherein in a distributed DSMS, query operators may be distributed amongst available nodes embodied as individual computing machines such as server computers. Operators are defined as elements of a physical plan for executing queries and may include windowing operators, aggregation, join, projects, user-defined operators, etc. connected by queues of events, i.e. prioritizing execution of the plurality of operators based on priority values corresponding to a[n] operator priority of the plurality of operators to produce a schedule of operator execution.)
Bhattacharjee and Chandramouli are analogous art because they are in the same field of endeavor, query processing. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Bhattacharjee to include the method of scheduling query operators for execution across a plurality of computing nodes as disclosed by Chandramouli. Paragraph [0120] of Chandramouli discloses that the method of performing stimulus time scheduling allows events to be processed according to priority, which represents an improved scheduling policy over the conventional round-robin-based approaches typically used in many conventional data stream management systems.
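Solely for illustration, and forming no part of the rejection of record, the combined teaching relied upon above (ordering operators drawn from multiple concurrent queries by query-level priority and operator-level priority) may be sketched as follows; the function name, data shapes, and additive combination are the examiner's hypothetical illustration and are not drawn from either reference:

```python
import heapq

# Hypothetical sketch: each operator inherits its query's priority level
# (cf. Bhattacharjee [1258]) and carries its own operator-level priority
# (cf. Chandramouli [0102]); a priority queue orders their execution.

def build_schedule(queries):
    """queries: list of (query_priority, [operator_priority, ...]).
    Returns (query_index, operator_index) pairs, highest combined priority first."""
    heap = []
    for q_id, (q_prio, op_prios) in enumerate(queries):
        for op_id, op_prio in enumerate(op_prios):
            # heapq is a min-heap; negate so higher priority pops first.
            heapq.heappush(heap, (-(q_prio + op_prio), q_id, op_id))
    schedule = []
    while heap:
        _, q_id, op_id = heapq.heappop(heap)
        schedule.append((q_id, op_id))
    return schedule

# Two concurrent queries; query 0 carries the higher query-level priority.
order = build_schedule([(10, [5, 1]), (2, [8, 1])])
```

Under this hypothetical weighting, both operators of the higher-priority query are scheduled ahead of the lower-priority query's operators.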
Regarding dependent claim 2,
As discussed above with claim 1, Bhattacharjee-Chandramouli discloses all of the limitations.
Chandramouli further discloses the step wherein the scheduler is further operable to: for each of the plurality of queries, identifying a second set of operators to produce a second plurality of operators for execution by a second processing core resource of the plurality of processing core resources, wherein the second set of operators includes one or more operators. See Paragraph [0227], (The system may either manually or automatically select a physical plan by iterating through a set of equivalent plans to select a plan having a lowest Maximum Accumulated Overload (MAO). The selected physical plan is optimized by determining operator placement that results in the lowest MAO, i.e. for each of the plurality of queries, identifying a second set of operators to produce a second plurality of operators for execution by a second processing core resource of the plurality of processing core resources (e.g. optimizing a physical plan includes determining an assignment of resources used to execute a query. As described in [0039], a physical plan reflects an operator placement to nodes/machines in a cluster of nodes), wherein the second set of operators includes one or more operators (e.g. a physical plan represents a query comprising operators).)
prioritizing execution of the second plurality of operators based on priority values corresponding to the query priority and second operator priority of the second plurality of operators to produce a second schedule of operator execution; See Paragraphs [0118]-[0119], (Stimulus time scheduling used for operator scheduling may assign priorities to corresponding operators to influence the execution order regardless of actual stimulus times associated with the corresponding events. The query optimizer may prioritize particular queries which cause the schedulers to make exceptions to strict stimulus scheduling, i.e. prioritizing execution of the second plurality of operators based on priority values corresponding to the query priority and second operator priority of the second plurality of operators.) See Paragraph [0227], (The system may either manually or automatically select a physical plan by iterating through a set of equivalent plans to select a plan having a lowest Maximum Accumulated Overload (MAO). The selected physical plan is optimized by determining operator placement that results in the lowest MAO, i.e. to produce a second schedule of operator execution.)
enable execution of the second plurality of operators by the second processing core resource in accordance with the second schedule of operator execution. See Paragraph [0039], (For a given physical plan, operator placement reflects an assignment of operators to nodes/machines in a cluster of nodes. Paragraph [0227] describes optimizing a physical plan representing an initial assignment of operators to generate an optimized physical plan having improved operator placements, i.e. enable execution of the second plurality of operators by the second processing core resource in accordance with the second schedule of operator execution (e.g. optimizing a physical plan reflects a change in processing resources used to execute the individual operators).)
Regarding dependent claim 4,
As discussed above with claim 1, Bhattacharjee-Chandramouli discloses all of the limitations.
Bhattacharjee further discloses the step wherein the query priority comprises one or more of: a static classification assigned to the query by a query planner; an estimated computational cost associated with execution of the query; a duration of time the query has remained in a pending state; a user class or role associated with the query; a service level indicator representing a target response time for the query; a data locality metric indicating proximity between query-referenced data and the assigned processing core resource; and an execution history value reflecting previous scheduling attempts or retry outcomes for the query. See Paragraph [1258], (Search head 210 may allocate compute resources based on a priority level or prioritization factor of a query wherein the priority level may be based on one or more factors such as a user identifier, query size, user indication, etc., i.e. wherein the query priority comprises one or more of: a static classification assigned to the query by a query planner (e.g. a user may enter a query priority metric which is used to determine a priority level or prioritization factor);)
Regarding dependent claim 5,
As discussed above with claim 1, Bhattacharjee-Chandramouli discloses all of the limitations.
Chandramouli further discloses the step wherein the operator priority comprises one or more of: a position value assigned to the operator based on its location within a query operator execution flow; an input readiness indicator based on the presence of one or more queued input data blocks for the operator; a blocking condition status indicating whether prerequisite inputs for the operator are satisfied; a static weight assigned to the operator by a query planner; a historical execution latency value associated with prior executions of the operator; a resource usage estimate for the operator based on expected memory or processing demand; and a retry count or failure indicator associated with prior attempts to execute the operator. See Paragraph [0161], (Stimulus time scheduling utilizes priority queues ordered by stimulus time, oldest first, to implement event queues. Note [0100] wherein a stimulus time represents the wall-clock time of an event’s arrival at a source operator from outside, i.e. an input readiness indicator based on the presence of one or more queued input data blocks for the operator.)
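As a non-limiting illustration of the stimulus-time scheduling relied upon from Chandramouli (events dequeued oldest-first by the wall-clock time at which they arrived at a source operator), the following sketch uses hypothetical names and forms no part of the rejection of record:

```python
import heapq
import itertools

# Minimal sketch of stimulus-time scheduling: each event is stamped with the
# wall-clock time of its arrival at a source operator, and events are dequeued
# oldest-first regardless of which operator they target.

class StimulusQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker for equal stimulus times

    def enqueue(self, stimulus_time, operator, event):
        heapq.heappush(self._heap, (stimulus_time, next(self._seq), operator, event))

    def dequeue(self):
        t, _, operator, event = heapq.heappop(self._heap)
        return t, operator, event

q = StimulusQueue()
q.enqueue(105.0, "join", "e2")
q.enqueue(100.0, "window", "e1")     # oldest stimulus time is dequeued first
q.enqueue(103.0, "aggregate", "e3")
first = q.dequeue()
```

The heap ordering realizes the "priority queues ordered by stimulus time" described in paragraph [0161] of Chandramouli; the class and method names are the examiner's illustration only.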
Regarding dependent claim 7,
As discussed above with claim 1, Bhattacharjee-Chandramouli discloses all of the limitations.
Bhattacharjee further discloses the step wherein the scheduler is further operable to: transmit one or more operators from the schedule of operator execution directly to the first processing core resource for execution. See Paragraph [0489], (The query coordinator may provide worker nodes with instructions to obtain relevant datasets from a plurality of dataset sources, process the datasets, aggregate the partial results and communicate said aggregated results to the query coordinator, i.e. wherein the scheduler is further operable to: transmit one or more operators from the schedule of operator execution directly to the first processing core resource for execution (e.g. search head and query coordinators are used to distribute query portions to worker nodes for execution).)
Regarding independent claim 9,
The claim is analogous to the subject matter of independent claim 1 directed to a computer system and is rejected under similar rationale.
Regarding dependent claim 10,
The claim is analogous to the subject matter of dependent claim 2 directed to a computer system and is rejected under similar rationale.
Regarding dependent claim 12,
The claim is analogous to the subject matter of dependent claim 4 directed to a computer system and is rejected under similar rationale.
Regarding dependent claim 13,
The claim is analogous to the subject matter of dependent claim 5 directed to a computer system and is rejected under similar rationale.
Regarding dependent claim 15,
The claim is analogous to the subject matter of dependent claim 7 directed to a computer system and is rejected under similar rationale.
Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Bhattacharjee in view of Chandramouli as applied to claim 1 above, and further in view of Song et al. (US PGPUB No. 2019/0370800; Pub. Date: Dec. 5, 2019) and Li et al. (US PGPUB No. 2015/0169684; Pub. Date: Jun. 18, 2015).
Regarding dependent claim 3,
As discussed above with claim 1, Bhattacharjee-Chandramouli discloses all of the limitations.
Bhattacharjee-Chandramouli does not disclose the computing node including a second scheduler, wherein the second scheduler is operable to: identify the plurality of queries assigned to a second processing core resource of the plurality of processing core resources for concurrent execution;
Song discloses a computing node including a second scheduler, See FIG. 10 & Paragraph [0178], (Disclosing a method for aggregating data from a plurality of sources in response to receiving a request comprising aggregation of data of interest. FIG. 10 illustrates a system comprising a distributed scheduler 1002f which may be implemented by another system, another device, another group of systems, or another group of devices, separate from or including transaction service provider system 102, such as issuer system 104 (e.g., one or more devices of issuer system 104), customer device 106, merchant system 108, i.e. the computing node including a second scheduler (e.g. the distributed system comprises a plurality of systems as in FIG. 1, wherein the scheduling component may be present in a plurality of devices in communication with the transaction service provider system 102).)
wherein the second scheduler is operable to: identify the plurality of queries assigned to a second processing core resource of the plurality of processing core resources for concurrent execution; See Paragraph [0133], (The distributed scheduler may schedule jobs on at least some of the plurality of data sources to aggregate data from said sources, which may be performed in parallel over the multiple data sources, i.e. wherein the second scheduler is operable to: identify the plurality of queries (e.g. the distributed scheduler may schedule jobs to be executed with regard to a data source) assigned to a second processing core resource of the plurality of processing core resources for concurrent execution (e.g. Note [0144] wherein the system allocates computing resources and network resources to execute the parallel aggregation tasks indicated by the distributed scheduler).)
Bhattacharjee, Chandramouli and Song are analogous art because they are in the same field of endeavor, query processing. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Bhattacharjee-Chandramouli to include the distributed scheduler as disclosed by Song. Paragraph [0144] of Song discloses that the system improves allocation of computing and network resources, improves storage schema, improves parallelization, and provides low latency and high throughput without overburdening computing resources.
Bhattacharjee-Chandramouli-Song does not disclose the step wherein for each of the plurality of queries, identifying a second set of operators to produce a second plurality of operators for execution by the second processing core resource, wherein the second set of operators includes one or more operators;
prioritizing execution of the second plurality of operators based on the query priority and a second operator priority of the second plurality of operators to produce a second schedule of operator execution;
enable execution of the second plurality of operators by the second processing core resource in accordance with the second schedule of operator execution.
Li discloses the step wherein for each of the plurality of queries, identifying a second set of operators to produce a second plurality of operators for execution by the second processing core resource, wherein the second set of operators includes one or more operators; See Paragraph [0014], (Incoming queries may be divided into sub-queries that are each assigned a sub-priority. The sub-priority may be dynamically updated in view of query-level resource consumption. The dynamic scheduler may leverage the dynamic updating of sub-priorities in order to favor the sub-queries of light queries over the sub-queries of heavy queries for execution, thereby increasing the likelihood that light queries will return results quickly, i.e. wherein the scheduler is further operable to: for each of the plurality of queries (e.g. the system may process multiple queries simultaneously such as heavy and light queries), identifying a second set of operators to produce a second plurality of operators for execution by a second processing core resource of the plurality of processing core resources, wherein the second set of operators includes one or more operators (e.g. a query may be initially subdivided into sub-queries with a first sub-priority, which may then be updated to a second sub-priority for the sub-queries based on query metrics of the plurality of queries. For example, the sub-priorities of a light query may be updated to favor faster execution of the sub-queries of a heavy query).)
prioritizing execution of the second plurality of operators based on the query priority and a second operator priority of the second plurality of operators to produce a second schedule of operator execution; See Paragraph [0014], (The dynamic scheduler may favor the sub-queries of light queries over the sub-queries of heavy queries for execution by dynamically updating sub-priorities in view of query-level resource consumption, i.e. prioritizing execution of the second plurality of operators based on the query priority and second operator priority of the second plurality of operators to produce a second schedule of operator execution;)
enable execution of the second plurality of operators by the second processing core resource in accordance with the second schedule of operator execution. See FIG. 3 & Paragraph [0031], (FIG. 3 illustrates method 300 comprising step 312 of passing the selected sub-query having the highest sub-priority to the CPU for execution. Note [0019] wherein the sub-priority reflects a resource consumption metric of the sub-query's parent query, i.e. enable execution of the second plurality of operators (e.g. the sub-queries of a light query having an updated sub-priority) by the second processing core resource in accordance with the second schedule of operator execution (e.g. the sub-queries are executed based on the updated sub-priorities by the dynamic scheduler, wherein the query is assigned the resources indicated by the query-level resource consumption metrics, i.e. a light query may require a different amount of available resources than a heavy query).)
Bhattacharjee, Chandramouli, Song and Li are analogous art because they are in the same field of endeavor, query processing. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Bhattacharjee-Chandramouli-Song to include the method of determining sub-priorities for individual sub-queries that comprise a larger input query as disclosed by Li. Paragraph [0013] of Li discloses that the use of a dynamic scheduler can ensure that time-sensitive queries are prioritized over less time-sensitive queries, which represents an improvement in system responsiveness.
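Purely as an illustrative sketch forming no part of the rejection of record, the dynamic sub-priority updating relied upon from Li, in which sub-queries of light queries come to be favored as a heavy query's resource consumption accumulates, might operate as follows; the class and method names and the subtraction-based update are the examiner's hypothetical illustration, not drawn from Li:

```python
# Illustrative sketch: each sub-query carries a sub-priority that is reduced
# as its parent query consumes resources, so sub-queries of "light" queries
# overtake those of "heavy" queries in the scheduler's selection.

class DynamicScheduler:
    def __init__(self):
        self.consumed = {}  # query_id -> cumulative resource consumption

    def record(self, query_id, cost):
        # Track query-level resource consumption as sub-queries execute.
        self.consumed[query_id] = self.consumed.get(query_id, 0) + cost

    def sub_priority(self, query_id, base_priority):
        # Higher accumulated consumption lowers the effective sub-priority.
        return base_priority - self.consumed.get(query_id, 0)

    def pick(self, sub_queries):
        # sub_queries: list of (query_id, base_priority, sub_query_id);
        # select the sub-query with the highest effective sub-priority.
        return max(sub_queries, key=lambda s: self.sub_priority(s[0], s[1]))

sched = DynamicScheduler()
subs = [("heavy", 5, "h1"), ("light", 5, "l1")]
sched.record("heavy", 3)      # the heavy query has already consumed resources
chosen = sched.pick(subs)     # a sub-query of the light query is selected
```

With equal base priorities, the accumulated consumption of the heavy query causes the light query's sub-query to be selected, consistent with the likelihood described in paragraph [0014] of Li that light queries return results quickly.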
Regarding dependent claim 11,
The claim is analogous to the subject matter of dependent claim 3 directed to a computer system and is rejected under similar rationale.
Claims 6, 8, 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bhattacharjee in view of Chandramouli as applied to claim 1 above, and further in view of Li et al. (US PGPUB No. 2015/0169684; Pub. Date: Jun. 18, 2015).
Regarding dependent claim 6,
As discussed above with claim 1, Bhattacharjee-Chandramouli discloses all of the limitations.
Bhattacharjee-Chandramouli does not disclose the step wherein the scheduler is further operable to: prioritize execution of the plurality of operators by generating a composite priority value for the plurality of operators, the composite priority value being based on a query priority associated with a corresponding query and an operator priority associated with a corresponding operator, wherein the schedule of operator execution is determined based on the composite priority value.
Li discloses the step wherein the scheduler is further operable to: prioritize execution of the plurality of operators by generating a composite priority value for the plurality of operators, the composite priority value being based on a query priority associated with a corresponding query and an operator priority associated with a corresponding operator, wherein the schedule of operator execution is determined based on the composite priority value. See Paragraph [0011], (Disclosing a method for scheduling query execution. A query may be received and assigned a priority for execution. The input query may then be divided into a plurality of sub-queries which may also be assigned a sub-priority based on a resource consumption metric of the query. A dynamic scheduler may assign a priority to an incoming query and a sub-priority to the one or more sub-queries.) See Paragraph [0014], (Sub-priorities for the plurality of sub-queries may be updated in view of query-level resource consumption, i.e. wherein the scheduler (e.g. the dynamic scheduler of [0011]) is further operable to: prioritize execution of the plurality of operators by generating a composite priority value for the plurality of operators (e.g. the scheduler assigns a query-level priority metric), the composite priority value being based on a query priority associated with a corresponding query and an operator priority associated with a corresponding operator, wherein the schedule of operator execution is determined based on the composite priority value (e.g. sub-priority values are generated and updated based on query-level metrics).)
Bhattacharjee, Chandramouli and Li are analogous art because they are in the same field of endeavor, concurrent query processing. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Bhattacharjee-Chandramouli to include the method of determining sub-priorities for individual sub-queries that comprise a larger input query as disclosed by Li. Paragraph [0013] of Li discloses that the use of a dynamic scheduler can ensure that time-sensitive queries are prioritized over less time-sensitive queries, which represents an improvement in system responsiveness.
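As a purely illustrative sketch of a composite priority value of the kind recited in claim 6 (the weighted combination and all names are hypothetical and are not taken from Li or any other reference of record):

```python
# Hypothetical composite priority: a single scalar combining the priority of
# the corresponding query with the priority of the corresponding operator.

def composite_priority(query_priority, operator_priority,
                       query_weight=0.7, operator_weight=0.3):
    return query_weight * query_priority + operator_weight * operator_priority

# Operators are then ordered by their composite values to form the schedule.
ops = [("q1.op_a", composite_priority(8, 2)),
       ("q2.op_b", composite_priority(3, 9))]
schedule = [name for name, _ in sorted(ops, key=lambda x: x[1], reverse=True)]
```

Under this hypothetical weighting, the operator of the higher-priority query is scheduled first even though the other operator carries a higher operator-level priority; any monotonic combination of the two inputs would serve the same illustrative purpose.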
Regarding dependent claim 8,
As discussed above with claim 1, Bhattacharjee-Chandramouli discloses all of the limitations.
Bhattacharjee-Chandramouli does not disclose the step wherein the scheduler is further operable to: transmit one or more operators from the schedule of operator execution to an operator execution module operably coupled to the first processing core resource, wherein the operator execution module delivers the one or more operators to the first processing core resource for execution.
Li discloses the step wherein the scheduler is further operable to: transmit one or more operators from the schedule of operator execution to an operator execution module operably coupled to the first processing core resource, wherein the operator execution module delivers the one or more operators to the first processing core resource for execution. See FIG. 3 & Paragraph [0028], (The method of FIG. 3 comprises steps 302, 304 wherein the dynamic scheduler may assign the highest priority sub-query to an available computing resource of server 102, i.e. transmit one or more operators from the schedule of operator execution (e.g. the sub-query pool is used to organize sub-queries based on priority levels) to an operator execution module operably coupled to the first processing core resource, wherein the operator execution module delivers the one or more operators to the first processing core resource for execution (e.g. the dynamic scheduler may perform resource allocation tasks as in [0028] to allocate a computing resource to a sub-query).)
Bhattacharjee, Chandramouli and Li are analogous art because they are in the same field of endeavor, concurrent query processing. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Bhattacharjee-Chandramouli to include the method of determining sub-priorities for individual sub-queries that comprise a larger input query as disclosed by Li. Paragraph [0013] of Li discloses that the use of a dynamic scheduler can ensure that time-sensitive queries are prioritized over less time-sensitive queries, which represents an improvement in system responsiveness.
Regarding dependent claim 14,
The claim is analogous to the subject matter of dependent claim 6 directed to a computer system and is rejected under similar rationale.
Regarding dependent claim 16,
The claim is analogous to the subject matter of dependent claim 8 directed to a computer system and is rejected under similar rationale.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-2, 6, 9-11 and 14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s amendments modified the scope of the claims and therefore necessitated the new grounds of rejection presented in this Office Action.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fernando M Mari whose telephone number is (571)272-2498. The examiner can normally be reached Monday-Friday 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann J. Lo can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FMMV/Examiner, Art Unit 2159 /ANN J LO/Supervisory Patent Examiner, Art Unit 2159