Prosecution Insights
Last updated: April 19, 2026
Application No. 18/532,373

CACHE STRUCTURE FOR HIGH PERFORMANCE HARDWARE BASED PROCESSOR

Final Rejection (§101, §103)
Filed: Dec 07, 2023
Examiner: TALUKDAR, ARVIND
Art Unit: 2132
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Hyannis Port Research Inc.
OA Round: 2 (Final)
Grant Probability: 81% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 81% (above average; 449 granted / 557 resolved; +25.6% vs TC avg)
Interview Lift: +3.5% (minimal; with vs. without interview, across resolved cases with interview)
Avg Prosecution (typical timeline): 2y 9m (36 currently pending)
Total Applications (career history): 593, across all art units

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 557 resolved cases

Office Action

§101 §103
DETAILED ACTION

Claims 1-27 were restricted. Claims 1-18 are elected. Claims 19-27 are withdrawn. Claims 1, 5-7, 13-16, 18, 30 are amended. Claims 1-18, 28-30 are pending.
Priority: 9/13/2023 (FP)
Assignee: Samsung

Claim Objections

1. Amended Claim 1 is objected to for reciting a limitation that lacks clarity. Claim 1 recites ‘a data processor configured to perform one or more financial market functions’. The spec does not recite the amended phrase ‘financial market functions’. The spec does not clearly define ‘market functions’ (Para-0009) or ‘financial functions’ (Para-0022), and there is no written description support to show that the terms can be combined to recite ‘financial market functions’. The MPEP advises that while the spec provides important context and definitions, it is crucial to avoid importing limitations from the spec into the claims that are not explicitly stated in the claim language. It is therefore unclear what ‘a data processor configured to perform one or more financial market functions’ means.

That said, it is well known in the prior art that a ‘function’ takes inputs, processes them, and returns outputs, i.e. a function recites a set of steps/tasks that connects the inputs of the function to the outputs of the function. ‘Perform’ refers to the action of doing something. But as recited, it is unclear what input(s) the ‘financial market functions’ take, and what steps they perform to generate output(s). Accordingly, the role of the data processor in ‘performing one or more financial market functions’ is unclear. In summary, the scope of independent, stand-alone claim 1, when it recites ‘a data processor configured to perform one or more financial market functions’, is unclear.

Note: Since the previous claim objection regarding claim language was not resolved in the amendment and arguments, the objection has been clarified and maintained.
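The examiner's working definition of a ‘function’ (defined inputs, a set of processing steps, defined outputs) can be illustrated with a small sketch. This is a hypothetical example, not language from the application or the cited art; the `Order` type and `add_order` operation are illustrative stand-ins for what a ‘financial market function’ might recite.

```python
# Hypothetical sketch: the examiner's definition of a 'function' -- explicit
# inputs, a set of processing steps, and explicit outputs -- applied to one
# illustrative market operation. Names here are assumptions, not claim terms.

from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    instrument: str
    side: str        # 'bid' or 'ask'
    price: float
    quantity: int

def add_order(book: dict, order: Order) -> dict:
    """Inputs: an order book and a new order.
    Steps: index the order under its (instrument, side, price) key.
    Output: the updated order book."""
    key = (order.instrument, order.side, order.price)
    book.setdefault(key, []).append(order)
    return book

book = add_order({}, Order(1, "XYZ", "bid", 10.25, 100))
```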
112(b): Unclear relationship between claim elements

1. Amended Claim 1 is rejected for reciting a limitation that is unclear, ambiguous, incorrect and indefinite. Amended Claim 1 recites ‘one or more memories, accessed by the data processor….’. As per the MPEP, independent claim 1 is a stand-alone claim that must contain all the elements and limitations necessary to define the disclosure. Accordingly, it is unclear what ‘one or more memories’ represents. It is unclear how many types of memories ‘one or more memories’ refers to, and their respective locations. As recited, ‘one or more memories’ can consist of only external memory or only internal memory, thereby making the recitation ambiguous. Fig. 2 and spec Para-0072 recite, ‘the cache manager 225 is implemented in fixed logic such as an FPGA, the cache 160 may include a small but relatively fast cache memory implemented in internal FPGA Block memory (termed “Block RAM LO Cache” 230 herein), along with one or more larger but relatively slower external memories’. Given the spec recitation with respect to number, type and location, reciting ‘one or more memories’ does not ensure that the scope of independent claim 1 is consistent with the original disclosure. Accordingly, claim 1 is rejected for reciting a limitation that is unclear, ambiguous, incorrect and indefinite.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 17-18, 28-30 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Kodde (20150095613) in view of Makhija (20230393784).

As per Claim 1, Kodde discloses an electronic data processing system (Kodde, [0021 - Fig. 1 shows a market data processing system 100 for acquiring and processing market data delivered from one or more data sources such as financial exchanges, in which an asset management device 10 may be used]) comprising:

a data processor (Kodde, [0124 - Fig. 7 shows a Market Data Processing system 100, in an FPGA on a PCIe Board 70 inside a standard server 71]) configured to perform one or more financial market functions (Kodde, [0006 - Market data processing systems using FPGAs comprise an order management device for storing details related to each financial order identified in the input commands in a data structure. The order management device manages access to this data structure by adding an order to the data structure if an add command identifying an order is received, or deleting an order from the data structure if a delete command is received]; [See objection]);

one or more memories (Kodde, [Fig. 7: off-chip DDR RAM/SDRAM]; [See 112(b)]), accessed by the data processor to configure data stored therein as one or more tiles, with each tile (Kodde, [Fig. 2]; [Fig. 6: Keys memory 102, Data Memory 103]; [Fig. 2: chained orders for the given ISP]) further comprising:

an array of access nodes (Kodde, [Fig. 6: Keys memory 102]; [0029 - In Fig. 2, a hash table associates each key/Order ID with an address, computed using a hash function. This address is then used to retrieve the value/complete order in memory; in Fig. 2, the addresses computed by the hash function represent the array of access nodes]) that represent open orders (Kodde, [0025 - The limits aggregation and book building device 4 takes each individual order of the same book and side and matches them by price, adding their quantity; this implies that an unmatched order is an open order]) for a given instrument, side, and price (Kodde, [Fig. 6: Data Memory 103]; [0049 - The information related to each order is maintained in Data Memory 103. The information maintained in the data memory 103 may comprise the instrument, the side, i.e. bid/sell, the price, and the quantity of the order]; [0029 - ‘Collisions’ occur when the hash function generates the same address for more than one Order ID, thereby implying orders with the same given ISP. To handle the collisions, chain/group several orders in each hash table entry; since each node represents one order, it implies that the chained orders/nodes form a tile for that given ISP; since the claim does not recite how the ‘grouping’ is done, the citation is a valid interpretation]);

one or more head and/or tail references (Kodde, [Fig. 2: index 0-address 007563/head reference, index 6-null/tail; note: an address is equivalent to a reference]) that organize the array of access nodes into one or more collections (Kodde, [Fig. 2: LinkedList1 at index 0, List2 at index 2, List3 at indexes 4-5; a linked list is a collection]) of active and/or free access nodes (Kodde, [Fig. 2: indexes 0, 2, 4-5 are active, indexes 1, 3, 6 are free]; [0112 - In Fig. 6, each line in memory 102 or 103 represents a word of data, thereby implying that occupied lines are active nodes and empty lines are free nodes]).

Makhija clarifies the prioritized collections as follows: one or more head and/or tail references (Makhija, [0062 - In Fig. 4, each queue/linked list 440 can have the head 442 and the tail 444]) that organize the array of access nodes (Makhija, [Fig. 4: Head 442A….Head 442L; the head address of each queue can represent the index/address of the array of access nodes]) into one or more prioritized collections of active and/or free access nodes (Makhija, [0061 - In Fig. 4, the queues/linked lists are differentiated by their respective priorities, such that queue 440A has the highest priority level and queue 440L has the lowest priority level]; [0087 - In Fig. 7, processing device 1002 is an FPGA]).

Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the prioritized linked lists of Makhija into the market data processing architecture of Kodde for the benefit of having the data path sequencer/FPGA traverse the memory access command queues or linked lists in the order of their priorities, starting with the highest-priority queue. In each queue, the sequencer can process the memory access commands starting from the head of the queue (Makhija, 0023).

As per Claim 2, the rejection of claim 1 is incorporated, and Kodde discloses wherein the access nodes further contain references to cell data structures (Kodde, [Fig. 6: Data Memory 103]) that contain additional data that represent the orders (Kodde, [0029 - In Fig. 2, a hash table associates each key/Order ID with an address/reference, computed using a hash function. This address is then used to retrieve the value/complete order in memory]).
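The chained hash table the examiner reads onto Kodde's Fig. 2 can be sketched briefly. This is an illustrative model, not Kodde's implementation: `hash_key` is a stand-in for Kodde's hash function, and the chain of colliding entries plays the role the examiner assigns to a ‘tile’ of access nodes.

```python
# Minimal sketch of a hash table with chaining, paraphrasing the examiner's
# reading of Kodde Fig. 2: each Order ID hashes to an address; colliding IDs
# are chained, and each chain is equated with a 'tile' of access nodes.

TABLE_SIZE = 8

def hash_key(order_id: int) -> int:
    return order_id % TABLE_SIZE          # stand-in for Kodde's hash function

table = [None] * TABLE_SIZE               # index -> head of chain (None = free)

def insert(order_id: int, data: str) -> None:
    idx = hash_key(order_id)
    # prepend to the chain; the chain plays the role of the claimed 'tile'
    table[idx] = {"key": order_id, "data": data, "next": table[idx]}

def lookup(order_id: int):
    node = table[hash_key(order_id)]
    while node is not None:               # walk the chained access nodes
        if node["key"] == order_id:
            return node["data"]
        node = node["next"]
    return None

insert(3, "order-A")
insert(11, "order-B")                     # 11 % 8 == 3: collision, chained
```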
As per Claim 3, the rejection of claim 1 is incorporated, and Kodde and Makhija disclose wherein the one or more prioritized collections (Makhija, [0021 - Each queue/linked list is associated with a respective priority level]) are sorted based on one or more attributes of the access nodes in the array of access nodes (Makhija, [0023 - The data path sequencer traverses the memory access command queues/linked lists in the order of their priorities, starting with the highest-priority queue]), the one or more attributes including at least a sequence, a time received, or a quantity (Makhija, [0061 - The scheduling and ordering of memory access commands is implemented by the central processing unit 410/FPGA using one or more memory access command queues 440A-440L, each of which is represented by a memory buffer that stores a sequence of memory access commands]; [0087 - In Fig. 7, processing device 1002 can be an FPGA]).

Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the prioritized linked lists of Makhija into the market data processing architecture of Kodde for the benefit of having the data path sequencer/FPGA traverse the memory access command queues or linked lists in the order of their priorities, starting with the highest-priority queue. In each queue, the sequencer can process the memory access commands starting from the head of the queue (Makhija, 0023).
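The traversal the rejection attributes to Makhija's data path sequencer (Makhija, 0023) amounts to draining priority-ordered queues from the head. A minimal sketch, with illustrative numeric priority levels standing in for queues 440A-440L:

```python
# Sketch (assumed shape, following the examiner's summary of Makhija
# 0023/0061): queues hold commands at descending priority; the sequencer
# traverses them highest-priority-first and drains each from its head.

from collections import deque

queues = {0: deque(), 1: deque(), 2: deque()}   # 0 = highest priority

def enqueue(priority: int, command: str) -> None:
    queues[priority].append(command)            # insert at the tail

def drain() -> list:
    processed = []
    for priority in sorted(queues):             # highest priority first
        q = queues[priority]
        while q:
            processed.append(q.popleft())       # process from the head
    return processed

enqueue(2, "read-low")
enqueue(0, "write-high")
enqueue(0, "read-high")
```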
As per Claim 17, the rejection of claim 1 is incorporated, and Kodde discloses wherein the processor is implemented using fixed logic (Kodde, [0004 - Logic blocks can be configured to perform complex combinational functions, or merely simple basic logical operations like boolean AND, OR, NAND, XOR etc.]), and the fixed logic comprises any of a field programmable gate array (FPGA) and application specific integrated circuit (ASIC) or other embedded hardware technologies (Kodde, [0004 - An FPGA is an integrated circuit which can be configured after manufacturing. The configuration is generally specified using a hardware description language/HDL. FPGAs contain a huge number of programmable logic components/logic blocks, and a hierarchy of reconfigurable interconnections that allow the blocks to be ‘wired together’]).

As per Claim 18, Kodde discloses a computer program product in a non-transitory computer-readable medium (Kodde, [Fig. 7: off-chip DDR RAM/SDRAM]) for use in a data processing system (Kodde, [0021 - Fig. 1 shows a market data processing system 100 for acquiring and processing market data delivered from one or more data sources such as financial exchanges, in which an asset management device 10 may be used]) for executing a market function (Kodde, [0006 - Market data processing systems using FPGAs comprise an order management device for storing details related to each financial order identified in the input commands in a data structure. The order management device manages access to this data structure by adding an order to the data structure if an add command identifying an order is received, or deleting an order from the data structure if a delete command is received]), the computer program product comprising:

first instructions for receiving access nodes (Kodde, [0056 - As per Fig. 3, a first Address Generation Core 101 receives the normalized commands 100 for generating keys memory read commands in order to read key information from the key memory 102 and compute key addresses from the read keys; since the claim does not define ‘receiving access nodes’, the citation is a valid interpretation]; [0052 - The order management device 10 comprises a Keys Memory 102 for storing information related to the keys associated with the order identifiers]), each access node referencing a cell data structure representing order data associated with an instrument, side and price (Kodde, [Fig. 6: Data Memory 103]; [0029 - In Fig. 2, a hash table associates each key/Order ID with an address, computed using a hash function. This address is then used to retrieve the value/complete order in memory; in Fig. 2, the addresses computed by the hash function represent the array of access nodes]);

second instructions for inserting the access nodes into one or more tiles (Kodde, [0068 - The entry allocation core 104 allocates a new entry in the keys memory 102 to an order ID in response to add/insert commands]), each of the one or more tiles (Kodde, [Fig. 2]; [Fig. 6: Keys memory 102, Data Memory 103]; [0029 - Chained orders/nodes form a tile for the given ISP]) comprising:

an array of the access nodes (Kodde, [Fig. 6: Keys memory 102]; [0029 - In Fig. 2, a hash table associates each key/Order ID with an address, computed using a hash function. This address is then used to retrieve the value/complete order in memory; in Fig. 2, the addresses computed by the hash function represent the array of access nodes]) for a given instrument, side, and price (Kodde, [Fig. 6: Data Memory 103]; [0049 - The information related to each order is maintained in Data Memory 103. The information maintained in the data memory 103 may comprise the instrument, the side, i.e. bid/sell, the price, and the quantity of the order]; [0029 - ‘Collisions’ occur when the hash function generates the same address for more than one Order ID, thereby implying orders with the same given ISP. To handle the collisions, chain/group several orders in each hash table entry; since each node represents one order, it implies that the chained orders/nodes form a tile for that given ISP]);

metadata fields that relate to additional order data for the given instrument, side, and price (Kodde, [0049, 0050 - The information related to each order is maintained in Data Memory 103. The information maintained in the data memory 103 comprises the instrument, the side, i.e. bid/sell, the price, and the quantity of the order]; [0025 - The size or quantity of an order designates the number of shares to be bought or sold]; [0024 - An aggregated limit can also have an ‘order count’ property reflecting the number of orders that have been aggregated in this limit]);

at least a head and a tail reference (Kodde, [Fig. 2: index 0-address 007563/head reference, index 6-null/tail]) organizing the access nodes in the array into one or more collections (Kodde, [Fig. 2: LinkedList1 at index 0, List2 at index 2, List3 at indexes 4-5; a linked list is a collection]); and

third instructions for processing the array of access nodes to execute the market function (Kodde, [0056 - An execution core 107 for executing each received command, i.e. order add/insert command, based on the entry address identified in the input command, the type of the input command forwarded from the previous cores on its input interface, and the data associated with the considered order received from memory 103]).

Makhija clarifies the prioritized collections as follows: at least a head and a tail reference (Makhija, [0062 - In Fig. 4, each queue/linked list 440 can have the head 442 and the tail 444]) that organize the array of access nodes (Makhija, [Fig. 4: Head 442A….Head 442L; the head address of each queue can represent the index of the array of access nodes]) into one or more prioritized collections (Makhija, [0061 - In Fig. 4, the queues/linked lists are differentiated by their respective priorities, such that queue 440A has the highest priority level and queue 440L has the lowest priority level]; [0087 - In Fig. 7, processing device 1002 can be an FPGA]).

Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the prioritized linked lists of Makhija into the market data processing architecture of Kodde for the benefit of having the data path sequencer/FPGA traverse the memory access command queues or linked lists in the order of their priorities, starting with the highest-priority queue. In each queue, the sequencer can process the memory access commands starting from the head of the queue (Makhija, 0023).

As per Claim 28, the rejection of claim 1 is incorporated, and Kodde discloses wherein the tiles further comprise: metadata fields that relate to additional order data for the given instrument, side and price (Kodde, [0049, 0050 - The information related to each order is maintained in Data Memory 103. The information maintained in the data memory 103 comprises the instrument, the side, i.e. bid/sell, the price, and the quantity of the order]; [0025 - The size or quantity of an order designates the number of shares to be bought or sold]; [0024 - An aggregated limit can also have an ‘order count’ property reflecting the number of orders that have been aggregated in this limit]).
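The tile metadata the rejection maps onto Kodde's aggregated limits (quantity and the ‘order count’ property) can be sketched as a running aggregation per (instrument, side, price). The field names here are illustrative assumptions, not terms from either reference:

```python
# Sketch (assumed field names): a 'tile' for one (instrument, side, price)
# carrying the metadata the examiner maps to Kodde's aggregated limits --
# total quantity and an order count over the orders it groups.

def make_tile(instrument: str, side: str, price: float) -> dict:
    return {"isp": (instrument, side, price), "orders": [],
            "total_shares": 0, "order_count": 0}

def add_to_tile(tile: dict, quantity: int) -> None:
    tile["orders"].append(quantity)
    tile["total_shares"] += quantity   # aggregate number of shares
    tile["order_count"] += 1           # number of open orders in the tile

tile = make_tile("XYZ", "bid", 10.25)
add_to_tile(tile, 100)
add_to_tile(tile, 250)
```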
As per Claim 29, the rejection of claim 28 is incorporated, and Kodde discloses wherein the metadata fields represent an aggregate number of shares (Kodde, [0025 - The size or quantity of an order designates the number of shares to be bought or sold]) in the tile and a number of open orders in the tile (Kodde, [0025 - The limits aggregation and book building device 4 takes each individual order of the same book and side, bid or ask, and matches them by price, adding their quantity; since the claim does not define ‘open order’, the citation is a valid interpretation]; [0024 - An aggregated limit can also have an ‘order count’ property reflecting the number of orders that have been aggregated in this limit]).

As per Claim 30, the rejection of claim 1 is incorporated, and Kodde discloses wherein the financial market functions performed by the data processor (Kodde, [0005 - Market data processing systems are designed using FPGAs]; [0014 - Fig. 1 shows a market data processing architecture]) comprise:

receiving a request for one of the access nodes that represents the given instrument, side and price (ISP) (Kodde, [0093 - In Fig. 5, step 500, a command related to an Order ID is received comprising an order identifier and a set of order information; here the Order ID represents the ISP. Since the claim does not recite the format of the request, the citation is a valid interpretation]);

locating the requested access node from the tile for the given ISP (Kodde, [0097 - In Fig. 5, step 501, one or more addresses are computed by hashing the Order ID using an FPGA multiplier. After steps 502, 503, in step 504, the keys in couples {Key, Presence Bit}, in the read data, are compared to the Order ID in the input command]);

returning the requested access node from the tile for the given ISP (Kodde, [0099 - In Fig. 5, step 509, if a match is found with a presence bit equal to 1, i.e. step 505, and if the input command is not an ADD command, i.e. step 506, the address/access node and position at which the key has been found are transmitted/returned to the Data address generation core 106, in step 509; since the claim does not recite to what component the requested access node from the tile for the ISP is ‘returned’, the citation is a valid interpretation]).

Claims 6-9 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Kodde (20150095613) in view of Makhija (20230393784) and Sukhwani et al. (‘Database Analytics: A Reconfigurable-Computing Approach’, 2014, IEEE, Pgs. 19-29).

As per Claim 6, the rejection of claim 1 is incorporated, and Kodde discloses: the one or more memories ([See 112(b)]) include on-chip Block Random Access Memory (Block RAM) located on a semiconductor chip with the processor (Kodde, [0033 - internal FPGA memory]); the one or more memories ([See 112(b)]) also include off-chip Dynamic Random Access Memory (DRAM) that is not located on the semiconductor chip with the processor (Kodde, [Fig. 7: off-chip DDR RAM/SDRAM]).

Sukhwani further discloses: the one or more memories include on-chip Block Random Access Memory (Block RAM) located on a semiconductor chip with the processor (Sukhwani, [Pg. 21, Col. 2, Para-1 - FPGA block BRAM/on-chip is limited]; [Pg. 24, Col. 1, Para-3 - Fig. 4 shows the two phases of FPGA hash-join. The join columns and the row addresses are stored in the address table in the FPGA BRAM]; [Fig. 5]); the one or more memories also include off-chip Dynamic Random Access Memory (DRAM) that is not located on the semiconductor chip with the processor (Sukhwani, [Fig. 2 shows an off-chip DRAM]; [Pg. 21, Col. 2, Para-1 - The full records are stored in on-card/off-chip DRAM]; [Pg. 24, Col. 1, Para-3 - In Fig. 4, the full rows are stored in off-chip DRAM]); and two or more tiles are stored contiguously such that they are accessible by a single address parameter (Sukhwani, [Pg. 24, Col. 1, Para-3 - In Fig. 4, rows hashing to the same position are chained in the address table]).

Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the hardware acceleration of Sukhwani into the market data processing architecture of Kodde and Makhija for the benefit of using a hardware acceleration approach to offload and accelerate the most CPU-intensive operations in analytics queries on an FPGA. The FPGA operates on a database management system’s in-memory data, which is the most up-to-date copy of the data, for real-time analytics alongside OLTP/online transaction processing, which includes market data processing (Sukhwani, Pg. 20, Col. 1, Para-2).

As per Claim 7, the rejection of claim 6 is incorporated, and Kodde discloses wherein a source address (Kodde, [Abstract - receiving an input command for an asset comprising an asset identifier and asset information; it is well known that the input command may include a source address]) and destination address for the processor to access the one or more memories ([See 112(b)]) include one or more of an off-chip DRAM address or an on-chip Block RAM address (Kodde, [Fig. 5: step 510]; [Abstract - Computing a data address/DRAM/destination to the data memory for the asset from the address/source address and position in the keys memory at which an entry has been found or allocated for the asset]).

As per Claim 8, the rejection of claim 1 is incorporated, and Kodde discloses wherein the one or more tiles further comprise a predetermined number of access nodes (Kodde, [Fig. 6: Keys Memory 102]; [Fig. 3: entry allocation core 104]; [0070 - If the entry allocation core 104 tries to add an entry into the hash table and no slot is available, then the memory is full; this implies a predetermined number of access nodes for each tile]), such that a total number of access nodes for a given instrument, side and price extends beyond the predetermined number of access nodes (Kodde, [0029 - There are generally more possible IDs than there are available memory locations. With such data structures, ‘collisions’ often occur when the hash function generates the same address for more than one Order ID, thereby implying that the total number of access nodes extends beyond/is greater than the predetermined number of access nodes]; [0025 - Order books can comprise orders from the same instrument on different markets/consolidated books]). Sukhwani further discloses that a selected tile (Sukhwani, [Fig. 2: Tile0]) contains a reference to another one (Sukhwani, [Fig. 2: Tile1]) of the tiles (Sukhwani, [Pg. 24, Col. 1, Para-3 - The join columns and the row addresses are stored in the address table in the FPGA BRAM]).

Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the hardware acceleration of Sukhwani into the market data processing architecture of Kodde and Makhija for the benefit of using a hardware acceleration approach to offload and accelerate the most CPU-intensive operations in analytics queries on an FPGA (Sukhwani, Pg. 20, Col. 1, Para-2).

As per Claim 9, the rejection of claim 1 is incorporated, and Kodde and Makhija disclose an FPGA-based market data processing system. Sukhwani further discloses wherein at least one access node contained within a selected tile (Sukhwani, [Fig. 2: Tile0]) is accessible to the processor before an entire transfer of the selected tile is complete (Sukhwani, [Pg. 24, Col. 1, Para-3 - In Fig. 4, the full rows are stored in off-chip DRAM, whereas the join columns and the row addresses are stored in the address table in the FPGA BRAM, thereby implying that at least one access node/address is accessible to the processor before an entire transfer of the selected tile is complete]).

Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the hardware acceleration of Sukhwani into the market data processing architecture of Kodde and Makhija for the benefit of using a hardware acceleration approach to offload and accelerate the most CPU-intensive operations in analytics queries on an FPGA (Sukhwani, Pg. 20, Col. 1, Para-2).

Claims 4-5, 10-11, 14 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Kodde (20150095613) in view of Makhija (20230393784) and Nault (20010044762).

As per Claim 4, the rejection of claim 1 is incorporated, and Kodde and Makhija disclose wherein the one or more prioritized collections are each implemented as a linked list (Makhija, [0062 - In Fig. 4, the memory access command queues 440 are linked lists]), with each linked list further comprising selected ones of the head and tail references (Makhija, [0062 - Each queue 440 has the head 442/reference and the tail 444/reference]).

Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the prioritized linked lists of Makhija into the market data processing architecture of Kodde for the benefit of having the data path sequencer/FPGA traverse the memory access command queues or linked lists in the order of their priorities, starting with the highest-priority queue. In each queue, the sequencer can process the memory access commands starting from the head of the queue (Makhija, 0023).

Nault further discloses each access node in the array including a reference to a next access node or a previous access node in the linked list (Nault, [0065 - In Fig. 5A, the linked list 501 is distinct and contains the pointers NEXT 502 and PREVIOUS 503 and a pointer to the chart structure 504]).

Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the doubly linked lists of Nault into the market data processing architecture of Kodde and Makhija for the benefit of providing a set of accounting data, organizing the accounting data using doubly linked lists into a central memory of a computer, and generating a financial statement (Nault, 0021).

As per Claim 5, the rejection of claim 1 is incorporated, and Kodde and Makhija disclose an FPGA-based market data processing system using prioritized collections of lists. Nault further discloses wherein the one or more prioritized collections are each implemented as a doubly linked list (Nault, [0021 - Providing a set of accounting data, organizing said accounting data using doubly linked lists into a central memory of a computer and generating a financial statement]), with each doubly linked list comprising selected ones of the head and tail references (Nault, [0065 - The organization of the accounting trial balance data 100 in memory corresponds to a doubly linked data structure 500 which permits insertion, destruction and reordering of the accounts inside the list. As shown in Fig. 5a, a particularity of this organizational data is that the linked list 501 is distinct and contains the pointers NEXT 502 and PREVIOUS 503 and a pointer to the chart structure 504, which permits flexibility for manipulation]; [0073 - The pointers for the first element 517/head and the last element 518/tail of the list of pointers 501, as well as the pointer to the first element 519 of the LINK vector, are stored in memory]), and each access node in the array including both a reference to a next access node and a reference to a previous access node (Nault, [0141 - The transaction structure 2201 is doubly linked with the pointers NEXT 2202 and PREVIOUS 2203 inside of the structure]).

Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the doubly linked lists of Nault into the market data processing architecture of Kodde and Makhija for the benefit of providing a set of accounting data, organizing the accounting data using doubly linked lists into a central memory of a computer, and generating a financial statement (Nault, 0021).

As per Claim 10, the rejection of claim 1 is incorporated, and Kodde and Makhija disclose an FPGA-based market data processing system. Nault further discloses wherein the one or more tiles further comprise a free list maintained in order by index into the array of access nodes (Nault, [0066 - The insertion algorithm used enables the insertion in an empty/free list]; [0069 - When an account is deleted, the element in the list of pointers/array of access nodes containing the pointer/address to the chart structure 509/tile is taken out of the list of pointers by modifying the pointer NEXT 502 of the preceding element and the pointer PREVIOUS 503 of the next element. The destruction algorithm used to remove an element from the doubly linked list is able to process information in which the list is empty/free list]).
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the doubly linked lists of Nault into the market data processing architecture of Kodde and Makhija for the benefit of providing a set of accounting data, organizing the accounting data using doubly linked lists into a central memory of a computer, and generating a financial statement (Nault, 0021).

As per Claim 11, the rejection of claim 5 is incorporated, and Kodde, Makhija and Nault further disclose wherein at least one of the tiles is configured to enable the processor to move an access node (Nault, [0071 - When an account is moved within the list, only the NEXT 502 pointer and the PREVIOUS 503 pointer of the elements concerned in the list of pointers/array of access nodes are modified, using, in a successive fashion, the algorithm of destruction and the algorithm of insertion]) between two of the prioritized collections by rewriting the reference to the previous access node and the reference to the next access node (Nault, [0074, 0075 - In Fig. 4, the data organization for financial statements corresponds to a doubly linked data structure 520 permitting insertion and destruction of lines of the financial statement inside of the list. As shown in Fig. 5b, a particularity of this organization is that the linked list 521 is distinct and contains pointers NEXT 522 and PREVIOUS 523 and a pointer to the financial statement structure 524, which makes the manipulation flexible]).

Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the doubly linked lists of Nault into the market data processing architecture of Kodde and Makhija for the benefit of providing a set of accounting data, organizing the accounting data using doubly linked lists into a central memory of a computer, and generating a financial statement (Nault, 0021).
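The Nault-based mappings for claims 5, 10 and 11 can be sketched together: doubly linked collections threaded over an array of nodes, a free list kept in index order, and a move operation that transfers a node between prioritized collections by rewriting only the next/prev references. The class and field names are illustrative assumptions, not structures from Nault or the application:

```python
# Minimal sketch (illustrative names): an array of access nodes threaded by
# next/prev indices into one active list per priority, plus a free list in
# index order; move() rewrites only neighbour references -- the unlink-then-
# relink mechanics the examiner reads onto Nault 0069/0071.

NIL = -1  # sentinel for 'no node'

class TileLists:
    def __init__(self, size: int):
        self.next = [i + 1 for i in range(size)]   # free slots chained in index order
        self.next[-1] = NIL
        self.prev = [NIL] * size
        self.free_head = 0
        self.heads = {"high": NIL, "low": NIL}     # one collection per priority

    def _unlink(self, lst: str, idx: int) -> None:
        # splice idx out by rewriting its neighbours' references
        p, n = self.prev[idx], self.next[idx]
        if p != NIL:
            self.next[p] = n
        else:
            self.heads[lst] = n                    # idx was the head
        if n != NIL:
            self.prev[n] = p

    def _push(self, lst: str, idx: int) -> None:
        # insert idx at the head of the given collection
        self.prev[idx] = NIL
        self.next[idx] = self.heads[lst]
        if self.heads[lst] != NIL:
            self.prev[self.heads[lst]] = idx
        self.heads[lst] = idx

    def alloc(self, lst: str) -> int:
        idx = self.free_head                       # pop the lowest free index
        self.free_head = self.next[idx]
        self._push(lst, idx)
        return idx

    def move(self, src: str, dst: str, idx: int) -> None:
        self._unlink(src, idx)                     # only references change
        self._push(dst, idx)

t = TileLists(4)
a = t.alloc("low")     # index 0 enters the 'low' collection
b = t.alloc("low")     # index 1 becomes the new 'low' head
t.move("low", "high", a)
```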
As per Claim 14, the rejection of claim 2 is incorporated, and Kodde discloses, wherein each access node contains a reference to a corresponding one of the cell data structures (Kodde, [Fig. 2]). Nault clarifies, wherein each access node contains a reference to a corresponding one of the cell data structures (Nault, [0060 – As per Fig. 3, loading in and organizing the accounting data and accounting transactions in the central memory of the computer represent the cell data structures; Since the claim does not define ‘cell data structures’, the citation is a valid interpretation]; [0066 - In Fig. 5a, each time an account is created, a new element in the chart structure 508 is created. A new element/access node in the list of pointers 509/access nodes is also created and inserted in the list]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the doubly linked lists of Nault into the market data processing architecture of Kodde and Makhija for the benefit of providing a set of accounting data, organizing the accounting data using doubly linked lists into a central memory of a computer, and generating a financial statement (Nault, 0021). Claims 12-13, 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Kodde (20150095613) in view of Makhija (20230393784) and Studnitzer et al. (20210166315). As per Claim 12, the rejection of claim 2 is incorporated, and Kodde and Makhija disclose an FPGA-based market data processing system. Kodde discloses that the hash table represents a tile, and that read-only hash tables store static order data, wherein the cell data structures include static order data stored in a data structure that is separate from the one or more tiles ([]). Studnitzer further clarifies, wherein the cell data structures include static order data stored in a data structure that is separate from the one or more tiles (Studnitzer, [0146 – In Fig. 
1, user database 102 includes information identifying market participants, e.g. traders, brokers, etc., and other users of electronic trading system 100, such as account numbers or identifiers, user names and passwords; Here account numbers or IDs, user names, passwords are static data]; [0152 - The users may include one or more market makers 130 which maintain a market by providing constant/static bid and offer prices for a derivative or security to the electronic trading system 100]; [0148 – While communicating with the Fig. 1, electronic trading system 100, a user may send and receive trades/orders or other information; Since the claim does not recite how the cell data structures are populated, the citation is a valid interpretation]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the static data of Studnitzer into the market data processing architecture of Kodde and Makhija for the benefit of using a trading system having improved performance under increasing processing transaction loads while providing improved trading opportunities, fault tolerance, low latency processing, high volume capacity, risk mitigation and market protections (Studnitzer, Abstract). As per Claim 13, the rejection of claim 12 is incorporated, and Kodde discloses, wherein each cell data structure (Kodde, [0041 – In Fig. 1, the order management device 10 receives normalized output commands from data packets decoding device 3 that receives market data streams]) includes fields indicating an instrument, side, price (Kodde, [0042 - instrument ID, side, price]), and an access node index (Kodde, [Fig. 2]; [0042 - Each normalized command 100 comprises an opcode that indicates the type of operation to execute, an order ID/node index, and a field for each characteristic of the order, i.e. 
instrument ID, side, price, quantity, etc.]), wherein the processor is enabled to use the instrument, side, price, and the associated access node index (Kodde, [0029 - A hash table associates each key/Order ID an address, computed using a hash function, as shown in Fig. 2. This address is then used to retrieve the value/complete order in memory, thereby implying using the ISP and the associated node index to locate the node]) to locate an associated access node (Kodde, [0099 - Fig. 5: step 509]) within at least one of the tiles (Kodde, [0049, 0050 - The information related to each order is maintained in Data Memory 103. The information maintained in the data memory 103 comprises the instrument, the side, i.e. bid/sell, the price, and the quantity of the order. The order related information is stored in the data memory 103 at an address/access node that is computed from hashes based on the order identifier; The corresponding access node index for the address is shown in Fig. 2]). As per Claim 15, the rejection of claim 1 is incorporated, and Kodde and Makhija disclose an FPGA-based market data processing system. Studnitzer further discloses, wherein the one or more financial market functions comprise one or more matching engine books (Studnitzer, [Figs. 11-12]; [0156 – In Fig. 2, electronic trading system 100 includes a match engine function 106 which is implemented by one or more sets 206 of redundant transaction processors 208, i.e. match engines]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the match engines of Studnitzer into the market data processing architecture of Kodde and Makhija for the benefit of improving fault tolerance (Studnitzer, 0066). As per Claim 16, the rejection of claim 1 is incorporated, and Kodde and Makhija disclose an FPGA-based market data processing system. 
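The lookup pattern cited from Kodde, [0029] above, hashing an order ID to an address and then retrieving the complete order, can be sketched as follows. The bucket count, field names, and helper names are illustrative only, not taken from Kodde.

```python
NUM_BUCKETS = 16  # illustrative table size

def bucket_of(order_id):
    """Hash the key (order ID) to a bucket address."""
    return hash(order_id) % NUM_BUCKETS

def insert_order(table, order_id, order):
    """Store the order in its bucket; colliding keys chain in a list."""
    table.setdefault(bucket_of(order_id), []).append((order_id, order))

def find_order(table, order_id):
    """Use the hashed address, then scan the chain for the exact key."""
    for oid, order in table.get(bucket_of(order_id), []):
        if oid == order_id:
            return order
    return None
```

For example, `insert_order(table, 42, {"side": "bid", "price": 101})` followed by `find_order(table, 42)` retrieves the complete order via its hashed address.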
Studnitzer further discloses, wherein the one or more financial market functions comprise one or more market data feeds (Studnitzer, [0006 - Outstanding orders are maintained in one or more data structures or databases referred to as ‘order books’, such orders being referred to as ‘resting’, and made visible, i.e., their availability for trading is advertised, to the market participants through electronic notifications/broadcasts, referred to as market data feeds]; [0007 - The standard protocol that is typically utilized for the transmission of market data feeds is the Financial Information Exchange/FIX protocol Adapted for Streaming FAST, aka FIX/FAST, which is used by multiple exchanges to distribute their market data]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the market data feeds of Studnitzer into the market data processing architecture of Kodde and Makhija for the benefit of having pricing information conveyed by the market data feed to include the prices, or changes thereto, of resting orders, prices at which particular orders were recently traded, or other information representative of the state of the market or changes therein (Studnitzer, 0007). Response to Arguments The Applicant's arguments filed on October 16, 2025, have been fully considered, but they are not persuasive. Applicant argues: ‘This term is readily understood from the specification, the claims, and to one of skill in the art. Paragraph [0009] of the specification explains that…. "market functions".’ (Rem, Pg. 11) Response: The MPEP advises that while the specification provides important context and definitions, it is crucial to avoid importing limitations from the spec into the claims that are not explicitly stated in the claim language. That said, please see the clarified objection. 
Applicant further argues: ‘Original claim 15,…..further defines "market functions" as comprising one or more "matching engine books"; …..claim 16 depends from claim 1 and defines "market functions" as comprising one or more "market data feeds".’ (Rem, Pg. 12) Response: This argument is incorrect. As explained in the objection, Claim 1 recites ‘performing’ market functions. ‘Performing’ market functions is not the same as ‘comprising’ matching engine books or ‘comprising’ market data feeds. ‘Perform’ refers to the action of doing something, such as carrying out a task, while ‘comprise’ refers to the state of being made up of a whole, meaning to consist of or include. Applicant further argues: The exact term "open orders" is used in at least paragraph [0048] of the specification to describe …..("ISP") may be organized into …. (Rem, Pg. 13) Response: The spec recites ‘open orders’ in numerous embodiments without defining the term in any of them. The MPEP advises that while the specification provides important context and definitions, it is crucial to avoid importing limitations from the spec into the claims that are not explicitly stated in the claim language. Applicant further argues: ‘Nonetheless, claims 1 ….amended to recite "one or more memories"’. (Rem, Pg. 15). Response: The amendment is ambiguous, hence inadequate. Please see 112(b). Applicant further argues: ‘However a proper analysis under 35 USC 101 does not stop there. The Examiner has also failed to analyze claim 18 under the test enumerated in Alice Corp. v. CLS Bank International, 573 U.S. 208 (2014)’. (Rem, Pg. 16) Response: This argument is not relevant. The rejection was directed to a CRM. A computer-readable medium/CRM rejection under MPEP §2106.03 occurs when a patent claim is interpreted broadly enough to cover a transitory signal per se, e.g. a carrier wave, which is not considered patent-eligible subject matter under 35 U.S.C. §101. To be eligible, the claim must be limited to a non-transitory medium. 
Applicant further argues: ‘The Examiner is mistaken about the combination of Kodde and Makhija ….suggest….such that each tile is a list of the open orders for a given instrument, side and price,….in claim 1’. (Rem, Pg. 19) Response: This argument is incorrect. Claim 1 recites that each tile comprises an array of access nodes that represent open orders for a given instrument, side, and price. Spec, Para-0034 recites, ‘each access node is uniquely identified by the given ISP and an index value into the array of access nodes’. This suggests that for each access node, both the ISP and the index are unique. More importantly, the limitation recites a grouping algorithm which has no written description support in the spec (no flowchart, no pseudo code, or detailed textual description). The combination of Kodde and Makhija discloses the composition of each tile, wherein Kodde discloses that each tile comprises an array/linked list of access nodes that represent open orders for a given ISP. See Kodde, Fig. 2, which shows a hash table/tile. Kodde, Para-0029 recites, ‘A hash table associates each key (Order ID) an address, computed using a hash function, as represented in FIG. 2. This address is then used to retrieve the value (complete order) in memory’. Kodde, Para-0029 also recites that in case of collisions, implying the same ISP, several orders are chained/grouped in each hash table entry. Since each node represents one order, it implies that chained orders form a tile for that given ISP. See clarified O/A. Applicant further argues: In Makhija, the queues hold "memory access commands" (e.g., read, write, erase, program, etc.), and not "order data" as claimed. (Rem, Pg. 19). Response: This argument is incorrect. Memory access commands, such as read and write, can include ‘order data’. 
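The interpretation argued above, that orders sharing the same (instrument, side, price) key chain together so that each chain acts as a tile for that ISP, can be sketched as follows. The field names and the `build_tiles` helper are illustrative, not from Kodde or the claims.

```python
from collections import defaultdict

def build_tiles(orders):
    """Group orders by their ISP key; orders colliding on the same
    (instrument, side, price) chain into the same bucket/tile."""
    tiles = defaultdict(list)
    for order in orders:
        isp = (order["instrument"], order["side"], order["price"])
        tiles[isp].append(order)  # each node represents one order
    return tiles
```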
This is supported by spec, Para-0047, which recites, ‘In an FPGA, read/write memory access operations to internal memory caches, such as ….(BRAM), are generally faster than read/write memory access operations to external DRAM’. In the combination of Kodde and Makhija, Makhija, Fig. 4 recites a memory controller that stores and maintains the queues. Makhija, Para-0021 recites, ‘each queue is associated (e.g., by a metadata value that can be stored in the internal memory of the controller associated with the queue)’. It is well known that the memory controller parses the received command and its individual fields to extract and use the necessary information to manage memory operations and store associated metadata. Here the individual fields and/or metadata can include ‘order data’. Therefore Makhija is valid art and it is obvious to combine Kodde and Makhija. Applicant further argues: ‘Thus, the organization of these queues in Makhija is based on……. It is not based on any attributes of an order data "access node" itself - and certainly not based on the claimed "instrument, side, and price"’. (Rem, Pg. 20). Response: This argument is incorrect. Claim 3 recites, ‘….the one or more attributes including at least a sequence, a time received, or a quantity’. Accordingly, in the combination of Kodde and Makhija, Makhija recites that the prioritized collections are ordered/sorted based on time and sequence. See clarified O/A. Applicant further argues: ‘In addition, claim 8 requires each tile to contain a "predetermined number of access nodes for a given instrument, side and price". No such feature is found….cited references’. (Rem, Pg. 21) Response: Neither the claim nor the spec recites why a ‘predetermined’ number of access nodes is used. But spec, Para-0130 recites, ‘a tile 250 may store a predetermined maximum number of access nodes…. such as 256’. Here, ‘predetermined maximum number’ suggests a memory limitation. 
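The ordering mapped to claim 3 above, a prioritized collection sorted by time received and sequence, can be sketched as follows. The record fields and the `prioritize` helper are illustrative only, not taken from Makhija.

```python
def prioritize(queue):
    """Order queued commands by time received, breaking ties by sequence
    number, as the attributes recited in claim 3 suggest."""
    return sorted(queue, key=lambda cmd: (cmd["time"], cmd["seq"]))
```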
In the combination of Kodde, Makhija, and Sukhwani, Kodde discloses an entry allocation core 104. Kodde, Para-0070 discloses that if the entry allocation core 104 tries to add an entry into the hash table and no slot is available, then the memory is full. This implies a predetermined number of access nodes for each tile. See clarified O/A. Applicant further argues: ‘Regarding claim 12, the cited portions of Studnitzer are just a generic description of an order book; there is no notion of a "tile" such that orders are grouped together by instrument, side and price….’ (Rem, Pg. 21). Response: This argument is incorrect. The claim does not recite any grouping. The combination of Kodde, Makhija, and Studnitzer discloses claim 12, wherein Kodde, Fig. 2, shows a hash table which represents a tile. See clarified O/A. Applicant further argues: ‘Claim 13 requires the instrument, side and price to be used to locate the "access node" within a tile - again, no "tiles" are found in Kodde or even Studnitzer’. (Rem, Pg. 21). Response: This argument is incorrect. Claim 13 recites using the ISP and access node index to locate the access node. There is no written description support in the spec to locate the access node using the ISP within a tile. The combination of Kodde, Makhija, and Studnitzer discloses claim 13. As mentioned above and in the O/A, Kodde discloses that each tile/hash table is an array/linked list of access nodes. See Fig. 2. Kodde, Para-0029 recites, ‘A hash table associates each key/Order ID an address, computed using a hash function, as shown in Fig. 2. This address is then used to retrieve the value/complete order in memory’, thereby implying using the ISP and the associated node index to locate the node. And Kodde, Fig. 5 discloses at least step 509: locating/finding an access node. See clarified O/A. 
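The capacity behavior discussed above, a tile holding a predetermined maximum number of access nodes (the spec gives 256 as an example) with allocation failing once every slot is taken, can be sketched as follows. The constant and helper name are illustrative only.

```python
TILE_CAPACITY = 256  # predetermined maximum per spec's example

def allocate_slot(tile):
    """Return the first free slot index, or None when the tile is full
    (no slot available implies the memory is full, per Kodde, Para-0070)."""
    for i in range(TILE_CAPACITY):
        if tile[i] is None:
            return i
    return None
```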
Applicant further argues: ‘The comment that claim 19 implies there must be a "N:N relationship" between the processors and the memories is also inconsistent …..the specification’. (Rem, Pg. 24). Response: Claim 19 recites, ‘one or more data processors; one or more memories, accessed by the one or more data processors’. This recited configuration is a many-to-many relationship. As recited, the ‘data processor(s)’ can be any kind of processor, such as distributed processors, cloud processors, transaction processors, real-time processors, multi-processors, etc., and combinations of them. Applicant further argues: ‘Furthermore, nowhere does the "Group I" claim 1 recite any requirement for an "FPGA-based data processor". The recitation of that specific type of processor appears only in claim 17; it is improper to read such a limitation into other claims’. (Rem, Pg. 24). Response: Applicant argues that dependent claims 15, 16 of claim 1 define ‘market functions’ recited in claim 1. See Rem, Pg. 12. Similarly, dependent claim 17 of claim 1 defines that the data processor recited in claim 1 uses embedded logic such as an FPGA, and is interpreted accordingly. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARVIND TALUKDAR whose telephone number is (303)297-4475. The examiner can normally be reached M-F, 10 am-6pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hosain Alam can be reached at 571-272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. Arvind Talukdar Primary Examiner Art Unit 2132 /ARVIND TALUKDAR/Primary Examiner, Art Unit 2132

Prosecution Timeline

Dec 07, 2023
Application Filed
Jul 12, 2025
Non-Final Rejection — §101, §103
Oct 16, 2025
Response Filed
Dec 27, 2025
Final Rejection — §101, §103
Apr 02, 2026
Request for Continued Examination
Apr 06, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602317
MEMORY DEVICE HARDWARE HOST READ ACTIONS BASED ON LOOKUP OPERATION RESULTS
2y 5m to grant Granted Apr 14, 2026
Patent 12591520
LINEAR TO PHYSICAL ADDRESS TRANSLATION WITH SUPPORT FOR PAGE ATTRIBUTES
2y 5m to grant Granted Mar 31, 2026
Patent 12591382
STORAGE DEVICE OPERATION ORCHESTRATION
2y 5m to grant Granted Mar 31, 2026
Patent 12579074
HARDWARE PROCESSOR CORE HAVING A MEMORY SLICED BY LINEAR ADDRESS
2y 5m to grant Granted Mar 17, 2026
Patent 12566712
A RING BUFFER WITH MULTIPLE HEAD POINTERS
2y 5m to grant Granted Mar 03, 2026


Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
84%
With Interview (+3.5%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
