Prosecution Insights
Last updated: April 19, 2026
Application No. 18/461,777

HMB Random and Sequential Access Coherency Approach

Status: Non-Final Office Action (§103), OA Round 3
Filed: Sep 06, 2023
Examiner: BARTELS, CHRISTOPHER A.
Art Unit: 2184
Tech Center: 2100 — Computer Architecture & Software
Assignee: Western Digital Technologies Inc.

Grant Probability: 66% (Favorable); 79% with interview
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 5m

Examiner Intelligence

Career Allow Rate: 66% (364 granted / 547 resolved), above average (+11.5% vs TC avg)
Interview Lift: +12.8% on resolved cases with interview (moderate)
Avg Prosecution: 3y 5m
Career History: 587 total applications across all art units; 40 currently pending

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 66.9% (+26.9% vs TC avg)
§102: 23.9% (-16.1% vs TC avg)
§112: 3.6% (-36.4% vs TC avg)

TC comparisons are Tech Center average estimates; based on career data from 547 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-18, 20, and 21 are pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after the final rejection mailed on 08/05/2025. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/26/2025 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-18, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Bender et al. (USPGPUB No. 2005/0091383 A1, hereinafter referred to as Bender) in view of Dalal (USPGPUB No. 2023/0231811 A1), further in view of Haghighat et al. (USPGPUB No. 2021/0263779 A1, hereinafter referred to as Haghighat), and further in view of Anderson et al. (USPGPUB No. 2019/0146790 A1, hereinafter referred to as Anderson). 
Referring to claim 1, Bender discloses a data storage device {“cached translation table”, [0237]}, comprising: a memory device {data storage device “Local Mapping Table” comprising a plurality of memory devices “internal data tables, and scratch pad work areas”, see Fig. 19 [0211]}; and a controller coupled to the memory device {“Media Access Controller (MAC)… provides [couples to] the adapter [comprising the data storage device]”, see Fig. 38 [0535], 1st sentence}, wherein the controller is configured to: initiate a read command {“master side transmits one [DMA] read request packet to the target”, see Fig. 36 [0524], 1st sentence} to a direct memory access (DMA) module {“DMA traffic can be set up to arbitrate” initiated by the controller as claimed, see Fig. 38 [0536] 6th and 7th sentences}; Bender does not appear to explicitly disclose pause writing to the memory device from a cache; retrieve data associated with the read command from cache; and return the retrieved data as read data of the read command. However, Dalal discloses pause writing to the memory device from a cache {“when waiting for concurrency locks”, see Figs. 56B and 59a [0242] 1st sentence}; retrieve data associated with the read command from cache {“read and write which may be part of a direct memory access (DMA) read or write operation” (see Fig. 59a, [0285]) from cache “the memory space allocated to it, and the location of the session context in the processor cache” ([0297] last sentence).}; and return the retrieved data {“The cache contents stored therein can then be retrieved [data] and prefetched during context switch back to a previous session”, see Fig. 59a [0307] 3rd sentence} as read data of the read command {“[read/write] bulk transfer mechanism to transfer out session context to a local memory 5908g”, see Fig. 59a [0307] 2nd sentence}. Bender and Dalal are analogous because they are from the same field of endeavor, managing DMA transfer(s). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Bender and Dalal before him or her, to modify Bender’s “cached translation table” ([0237]) incorporating Dalal’s “Accelerator Coherency Port (ACP) allows for coherent supplementation of the cache” (see Fig. 31, [0172]). The suggestion/motivation for doing so would have been to implement a Xockets software implementation, in which software sockets allow a natural partitioning of these loads between ARM and x86 processors: light-touch loads with frequent random accesses can be kept behind the socket abstraction on the ARM cores, while high-power number-crunching code can use the socket abstraction on the x86 cores (Dalal [0093] paraphrased). Therefore, it would have been obvious to combine Dalal with Bender to obtain the invention as specified in the instant claim(s). However, neither Bender nor Dalal appears to explicitly disclose wherein the initiation of the read command comprises pausing writes to the memory device from a cache; retrieving data associated with the read command from cache during the pause; and unpausing writes to the memory device from the cache after returning the retrieved data. However, Haghighat discloses wherein the initiation of the read command {“meta-scheduler may initiate a data [read/write] migration operation in”, see Fig. 10c [0269]} comprises pausing writes to the memory device from a cache {“enough data has migrated, the [read/writes] action can proceed.” (see Fig. 10c [0269]) such actions include writes “enforce a [cache] policy that prevents writes to [memory device] executable memory within a non-root protection domain” (see Fig. 49b [1168] 1st sentence); another example such cache-to-memory movement “cache mappings from tags to key IDs in a tag to [memory device] key ID lookaside buffer” (see Fig. 
49E, [1162])}; retrieving data associated with the read command {associated data “EIC information readily [retrieved] substituted for the capability information 4802, 4803 (FIG. 48A)”, see Fig. 48B [1128]} from cache during the pause {“define ABIs for EIC that covers stack [read/write] accesses” (see Fig. 52a, [1204], 1st sentence) during the pause including but not limited to “copying data in shared memory settings” (see Fig. 48a [1127] last two sentences); shared memory settings that include “Data accesses may even be further sub-divided as reads, writes, etc., with separate tag mask structures being defined for each type of access” (see Fig. 49E, [1163], last sentence)}; and unpausing writes to the memory device {“ABI specifies how the respective loading/unloading is performed in the reverse if a callee completes and the caller needs to resume from the point that the callee completed”, see Fig. 52a, [1204]} from the cache after returning the retrieved data {“detect a context switch from the first function to a second function, and map [unpausing the writing] the key identifier to a second key in response to the context switch”, see Fig. 49E [1170]}. Bender/Dalal and Haghighat are analogous because they are from the same field of endeavor, managing DMA transfer(s). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Bender/Dalal and Haghighat before him or her, to modify Bender/Dalal’s device incorporating Haghighat’s “define ABIs for EIC that covers stack [read/write] accesses” (see Fig. 52a, [1204], 1st sentence). The suggestion/motivation for doing so would have been to implement a serverless services architecture 203e, such as one or more hardware associated elements such as hardware assisted virtual machines, CPUs, GPUs and accelerators (see Fig. 
2a, [0005], last sentence) as an alternative to the challenges and shortcomings of existing FaaS solutions (Haghighat [0006], 1st sentence) including an extra layer of abstraction, which makes it more difficult to expose and exercise distinctive and/or new infrastructure features in processors, platforms, or systems, such as computer architectures for supporting hardware heterogeneity by existing FaaS solutions (Haghighat [0008], 1st sentence). Therefore, it would have been obvious to combine Haghighat with Bender/Dalal to obtain the invention as specified in the instant claim(s). However, none of Bender, Dalal, and Haghighat appears to explicitly disclose scanning overlapping lines of the read command in the cache in response to pausing the writes; generating a sorted list of address and data from the overlapping lines; retrieving data associated with the sorted list of address and data from the overlapping lines of the read command from the cache during the pause. However, Anderson discloses scanning overlapping lines of the read command {“Address generators 2211/2221 output 512-bit aligned addresses that overlap” (see Fig. 22, [0141]), the address generator 2211 in response to read/write command as claimed “each generate one new non-aligned request per cycle” ([0141])} in the cache in response to pausing the writes {“[pausing and] dispatch requests as quickly as possible while retaining fairness between the two streams”, see Fig. 23, [0154]}; generating a sorted list of address and data {“includes a micro-TLB (table look-aside buffer) block to perform address translation”, see Fig. 5, [0057], last three sentences} from the overlapping lines {“μTLB 2212/2222 converts a single 48-bit virtual address to a 44-bit physical address each cycle. Each μTLB 2212/2222 has 8 [cache] entries,”, see Fig. 
21, [0142], 2nd sentence}; retrieving data associated with the sorted list of address and data {“Snoop and DMA transactions overlapped [with associated data]; MMU-UMC Page table walks from L2, and any DVM messages; MMU-PMC uTLB miss to MMU”, see Fig. 25, [0183], last sentence} from the overlapping lines of the read command from the cache {“PMC-UMC 512-bit Read, which supports 2 dataphase reads”, see Fig. 25, [0183]} during the pause {“victims cannot interfere with non-blocking requests such as snoop responses” (see Fig. 25, [0011], last sentence) referred to as “L1D … by a victim cache” ([0181]) for pausing on blocks marked as exclusive/invalid according to “fully MESI support” ([0192], 1st sentence)}. Bender/Dalal/Haghighat and Anderson are analogous because they are from the same field of endeavor, managing DMA transfer(s). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Bender/Dalal/Haghighat and Anderson before him or her, to modify Bender/Dalal/Haghighat’s system incorporating Anderson’s “Address generators 2211/2221” (see Fig. 22, [0141]). The suggestion/motivation for doing so would have been to implement a serverless services architecture 203e, such as one or more hardware associated elements such as hardware assisted virtual machines, CPUs, GPUs and accelerators (see Fig. 
2a, [0005], last sentence) as an alternative to the challenges and shortcomings of existing FaaS solutions (Haghighat [0006], 1st sentence) including an extra layer of abstraction, which makes it more difficult to expose and exercise distinctive and/or new infrastructure features in processors, platforms, or systems, such as computer architectures for supporting hardware heterogeneity by existing FaaS solutions (Haghighat [0008], 1st sentence). Therefore, it would have been obvious to combine Anderson with Bender/Dalal/Haghighat to obtain the invention as specified in the instant claim(s). As per claim 2, the rejection of claim 1 is incorporated and Dalal discloses wherein the cache is a host memory buffer (HMB) {host memory buffer “large DRAM buffer on each [host] Xockets DIMM”, see Fig. 59a [0226]}. As per claim 3, the rejection of claim 1 is incorporated and Bender discloses wherein the data storage device does not include dynamic random access memory (DRAM) {the storage device contains “Local Mapping Table which is found within the SRAM in the adapter” but not DRAM, see Figs. 36 and 38, [0487] last two sentences}. As per claim 4, the rejection of claim 1 is incorporated and Dalal discloses wherein the controller is configured to read data from the memory device {“a common storage is constructed from metadata communicated between Xockets DIMMs” with respect to memory device, see Fig. 55 [0233]} and substitute the read data with the retrieved data {“mediated by Xockets DIMMs acting as intelligent switches to [substitute respective read/write data] offload the TOR switch”, see Figs. 55 and 59a [0233]}. As per claim 5, the rejection of claim 1 is incorporated and Dalal discloses wherein the DMA is configured to request the cache {the cache “Accelerator Coherency Port (ACP) allows for coherent supplementation of the cache throughout the FPGA” (see Figs. 
31 and 59a [0172]) is requested “local Xockets DIMM upon requesting a certain address range… [via] the requesting user process” (see Fig. 12, [0132])} to pause writing {“mmap routine can [pause] trap and execute the code of the Xockets driver, which in turn can issue the correct set of write and read commands to Xockets Memory 1222 to produce and return the sought after data”, see Figs. 12 and 59a [0132] last sentence}. As per claim 6, the rejection of claim 5 is incorporated and Bender discloses wherein the controller is configured to: scan overlapping lines in the cache {the controller “MAC arbiter 210 comprises two portions: a send block and a receive block [for claimed overlapping scanning]”, see Fig. 37 [0536], 1st sentence} in response to the request {“lanes carrying DMA traffic can be set up to arbitrate on a single flit basis” that DMA traffic associated with a request “Descriptor Sequence Number is a monotonically increasing counter that increments whenever a new descriptor is used” ([0598]) in turn that Descriptor Sequence is part of the “[cache] LMT entry associated with that channel” ([0233] 3rd sentence)}; Dalal discloses reply to the DMA that the pause has occurred {“reads and writes to a particular address are resolved through a trap handler”, see Figs. 51 and 59a [0205] last two sentences}; and generate a sorted list of address {“translation look-aside buffer (TLB) 5904a can be used for such [virtual address to physical] translation.”, see Fig. 59a [0270], 3rd and 4th sentences} and data from the overlapping lines {“arbiter 1590f can determine which requestor becomes the accessor and then passes [the overlapped] data from the accessor to the resource interface, and the downstream resource can begin execution on the data”, see Fig. 59a [0289] 3rd sentence}. 
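For orientation, the read-coherency flow recited in claims 1 and 6 — pause writes from the cache to the memory device, scan the cache for lines overlapping the read, build a sorted address/data list, serve the read with the cached data substituted in, then unpause — can be sketched in Python. This is an illustrative model only: all class and function names here are invented for the sketch and do not appear in the application or the cited references, and edge cases (e.g., lines partially overlapping the range boundary) are ignored.

```python
from typing import Dict

class WriteCache:
    """Toy stand-in for an HMB-style write cache holding dirty (address, data) lines."""
    def __init__(self, lines: Dict[int, bytes]):
        self._lines = dict(lines)
        self.paused = False

    def pause_writes(self):   # claim 1: pause writing to the memory device
        self.paused = True

    def resume_writes(self):  # claim 1: unpause after the read data is returned
        self.paused = False

    def lines(self):
        return self._lines.items()

class MemoryDevice:
    """Toy backing store standing in for the memory device."""
    def __init__(self, data: bytes):
        self._data = bytearray(data)

    def read(self, start: int, length: int) -> bytes:
        return bytes(self._data[start:start + length])

def serve_dma_read(cache: WriteCache, device: MemoryDevice,
                   start: int, length: int) -> bytes:
    cache.pause_writes()  # pause cache-to-device writes for the duration of the read
    # claim 6: scan the cache for lines overlapping the read range and
    # generate a sorted list of (address, data) from the overlapping lines
    overlaps = sorted((addr, data) for addr, data in cache.lines()
                      if start <= addr < start + length)
    result = bytearray(device.read(start, length))  # baseline read from the device
    for addr, data in overlaps:                     # substitute newer cached data
        result[addr - start:addr - start + len(data)] = data
    cache.resume_writes()   # unpause writes once the read data is assembled
    return bytes(result)
```

The substitution step is what keeps a DMA read coherent with writes still pending in the cache: the device data is returned except where a dirty cache line holds a newer copy.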
As per claim 7, the rejection of claim 6 is incorporated and Dalal discloses wherein the controller is configured to determine whether the sorted list is empty {“Once the [overlapped lines] packets are written to a main memory using DMA operation, an IOTLB entry can be updated [and/or emptied]” as appropriate, see Fig. 76 [0394] 1st sentence}. As per claim 8, the rejection of claim 7 is incorporated and Dalal discloses wherein in response to determining that the sorted list is not empty {“such entries in the [non-empty] IOTLB may be locked so that they are not erased during subsequent cache flushes”, see Fig. 76 [0394] 2nd sentence}, provide a head {“classifies the headers for session identification and packet-level applications”, see Figs. 51 and 59a, [0202] 3rd sentence} of the sorted list to the DMA {“[with appropriate header/pointer] Translation information is fetched either from the IOTLB 7806”, see Fig. 78 [0395] 4th sentence}. As per claim 9, the rejection of claim 8 is incorporated and Dalal discloses wherein the DMA is configured to: perform a read from the memory device {during or subsequent to step 7716 perform “the DMA request is targeted to the physical address generated in step 7716”, see Fig. 77 [0393]}; and determine whether data associated with the read from the memory device {“an IOMMU may follow to serve input I/O requests” serviced to/from the memory device, see Figs. 76, 74, and 59a [0391] 1st sentence} belongs to a same address as the head of the sorted list {“can perform[/determine] an address translation to identify the physical address corresponding to the virtual addresses it is supplied with 7716.”, see Fig. 77 and 59a [0393]}. 
As per claim 10, the rejection of claim 9 is incorporated and Dalal discloses wherein the DMA is configured to: replace the data associated {“switch can reintroduce traffic management, classification and prioritization to create [and/or replace the data associated] flow characteristics for packets of a session”, see Fig. 77, 3rd sentence from end of [0393]} with the read from the memory device with the data from the sorted list {“packets written to a memory location [specified by sorted list IOTLB] are intercepted by a second virtual switch”, see Fig. 77 and 59a, [0393]}; and return the data from the sorted list {“ Traffic managed flows can be written to various offload processors at step 7720”, see Fig. 77, [0393] last sentence} to a destination that instructed the read command {instructed as claimed “Translation information is fetched either from the IOTLB 7806” continuing destination of respective offloading processor(s) (e.g. 7804), see Fig. 78 [0395] 3rd sentence}. As per claim 11, the rejection of claim 10 is incorporated and Dalal discloses wherein the controller is configured to pop the data from the sorted list from the sorted list {per the IOTLB instruction type “instructions in the pipeline being executed, a stack pointer and program counter, instructions and data that are prefetched and waiting to be executed by the session”, (see Fig. 77 [0451], 1st sentence) in an Intel x86 architecture with an appropriate push/pop mechanism “Xockets DIMM Stack in communication with an x86 Stack” ([0197], 1st sentence, “stack 5102”, see Fig. 51, [0200])}. Referring to claim 12, Bender discloses a data storage device, comprising {“cached translation table”, [0237]}: a memory device {data storage device “Local Mapping Table” comprising a plurality of memory devices “internal data tables, and scratch pad work areas”, see Fig. 
19 [0211]}; and a controller coupled to the memory device {“Media Access Controller (MAC)… provides [couples to] the adapter [comprising the data storage device]”, see Fig. 38 [0535], 1st sentence}, wherein the controller is configured to: Bender does not appear to explicitly disclose pause a cache operation of writing data to the memory device; substitute data from the memory device with data from the cache; and resume the cache operation after the substituting. Dalal discloses pause a cache operation of writing data to the memory device {“when waiting for concurrency locks”, see Figs. 56B and 59a [0242] 1st sentence}; substitute data from the memory device {“switch can reintroduce traffic management, classification and prioritization to create [and/or substitute the data associated] flow characteristics for packets of a session”, see Fig. 77, 3rd sentence from end of [0393]} with data from the cache {“include using a Xockets tunnel 5606a-0, memory trap 5606a-1 and page-cache 5606a-2”, see Fig. 56a, [0234], 3rd sentence}; and resume the cache operation after the substituting {“subsequently retrieve the context information to resume the prior task”, see Fig. 60-0, [0319] 2nd sentence}. However, neither Bender nor Dalal appears to explicitly disclose wherein the controller is configured to initiate a cache operation or a direct memory access (DMA) operation to execute a read command received from a host, wherein the DMA operation comprises: pausing data writes to the memory device from a cache; substitute data associated with the read command from the memory device with data from the cache during the pausing of the cache operation; and resume the cache operation after the substituting. 
However, Haghighat discloses wherein the controller is configured to initiate a cache operation {Examiner’s note: recitation of the “or” term renders this claim as a Markush claim, thus the reference need only disclose one member in the group to address the claim} or a direct memory access (DMA) operation {“accelerated with a cache by hardware mechanisms in processors, memory controller, or address translation services (ATS) by various DMA-capable devices”, see Fig. 20D, [0595], last sentence} to execute a read command {“meta-scheduler may initiate a data [read/write] migration operation in”, see Fig. 10c [0269]} received from a host {“native to hardware (e.g., host processor, central processing unit/CPU, microcontroller”, see Figs. 10a and 10b, [0266], last sentence}, wherein the DMA operation comprises: pausing data writes to the memory device from a cache {“enough data has migrated, the [read/writes] action can proceed.” (see Fig. 10c [0269]) such actions include writes “enforce a [cache] policy that prevents writes to [memory device] executable memory within a non-root protection domain” (see Fig. 49b [1168] 1st sentence); another example such cache-to-memory movement “cache mappings from tags to key IDs in a tag to [memory device] key ID lookaside buffer” (see Fig. 49E, [1162])}; substitute data associated with the read command from the memory device {associated data “EIC information readily [retrieved] substituted for the capability information 4802, 4803 (FIG. 48A)”, see Fig. 48B [1128]} with data from the cache during the pausing of the cache operation {“define ABIs for EIC that covers stack [read/write] accesses” (see Fig. 52a, [1204], 1st sentence) during the pause including but not limited to “copying data in shared memory settings” (see Fig. 
48a [1127] last two sentences); shared memory settings that include “Data accesses may even be further sub-divided as reads, writes, etc., with separate tag mask structures being defined for each type of access” (see Fig. 49E, [1163], last sentence)}; and resume the cache operation {“ABI specifies how the respective loading/unloading is performed in the reverse if a callee completes and the caller needs to resume from the point that the callee completed”, see Fig. 52a, [1204]} after the substituting {“detect a context switch from the first function to a second function, and map [unpausing the writing] the key identifier to a second key in response to the context switch”, see Fig. 49E [1170]}. Bender/Dalal and Haghighat are analogous because they are from the same field of endeavor, managing DMA transfer(s). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Bender/Dalal and Haghighat before him or her, to modify Bender/Dalal’s device incorporating Haghighat’s “define ABIs for EIC that covers stack [read/write] accesses” (see Fig. 52a, [1204], 1st sentence). The suggestion/motivation for doing so would have been to implement a serverless services architecture 203e, such as one or more hardware associated elements such as hardware assisted virtual machines, CPUs, GPUs and accelerators (see Fig. 2a, [0005], last sentence) as an alternative to the challenges and shortcomings of existing FaaS solutions (Haghighat [0006], 1st sentence) including an extra layer of abstraction, which makes it more difficult to expose and exercise distinctive and/or new infrastructure features in processors, platforms, or systems, such as computer architectures for supporting hardware heterogeneity by existing FaaS solutions (Haghighat [0008], 1st sentence). Therefore, it would have been obvious to combine Haghighat with Bender/Dalal to obtain the invention as specified in the instant claim(s). 
As per claim 13, the rejection of claim 12 is incorporated and Dalal discloses wherein short accesses utilize cache {“Large blocks are then exported without delay [for short access timing], or consuming the L2 cache, using the ACP” [0184] 4th sentence}. As per claim 14, the rejection of claim 13 is incorporated and Bender discloses wherein input/output (I/O) operations utilize DMA {input/output “provide data movement services for message passing protocols such as IP and MPI” ([0541], 1st sentence) that utilizes “DMA packets are constructed by the Interpartition Communication facility on the chip, and are up to 2K bytes” ([0541], 3rd sentence)}. As per claim 15, the rejection of claim 14 is incorporated and Bender discloses wherein short accesses include physical region page (PRP) {“modify the Local Mapping Table [short access/caching].” ([0211] 1st sentence) “The [short] access is preferably made through the node's address translation logic and page tables to insure that only authorized software modifies the facility.” ([0211] 2nd sentence) translating “converts effective address values managed by user code into a real address values used by processor hardware” ([0112] 5th sentence)}, scatter gather list (SGL) {“are scattered and [gathered] merged into 16 different byte locations within a 128 byte [SGL] assembly buffer within”, [0584] 4th sentence}, submission queues (SQ) {“schedules [submits] work by adding the indicated channel to one of two work queues. These queues keep track of all the channels ready for some kind of send processing”, [0232] 2nd and 3rd sentences}, and completion queues (CQ) {“’last’ packet with the completion code field indicating the type of abnormal condition detected” ([0417] last sentence) where the packet stored in a queue}. 
As per claim 16, the rejection of claim 15 is incorporated and Dalal discloses wherein the short accesses are through a controller memory buffer (CMB) controller {short accesses “Large blocks are then exported without delay [for short access timing], or consuming the L2 cache, using the ACP” ([0184] 4th sentence) made through controller “ACP mapper 3106” (see Figs. 31 and 59a, [0173])}. As per claim 17, the rejection of claim 12 is incorporated and Dalal discloses wherein the controller is configured to: generate a sorted list of address {“translation look-aside buffer (TLB) 5904a can be used for such [virtual address to physical] translation.”, see Fig. 59a [0270], 3rd and 4th sentences} plus data from overlapping lines in cache {“arbiter 1590f can determine which requestor becomes the accessor and then passes [the overlapped] data from the accessor to the resource interface, and the downstream resource can begin execution on the data”, see Fig. 59a [0289] 3rd sentence}; provide the sorted list {“classifies the headers for session identification and packet-level applications”, see Figs. 51 and 59a, [0202] 3rd sentence} to DMA {“[with appropriate header/pointer] Translation information is fetched either from the IOTLB 7806”, see Fig. 78 [0395] 4th sentence}; determine that data associated with a read command is in the sorted list {“an IOMMU may follow to serve input I/O requests” serviced to/from the memory device, see Figs. 76, 74, and 59a [0391] 1st sentence}; and perform the substituting based upon the determining {“switch can reintroduce traffic management, classification and prioritization to create [and/or substituting the data associated] flow characteristics for packets of a session”, see Fig. 77, 3rd sentence from end of [0393]}. 
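The head-of-list handling recited in claims 7-11 and 17 — check whether the sorted overlap list is empty, hand its head to the DMA, read the memory device, compare the read address against the head, substitute the cached data on a match, and pop the entry — can be sketched as a short loop. Again, this is a hedged illustration with invented names, not code from the record; the backing store is modeled as a plain byte string.

```python
from collections import deque

def dma_read_range(backing: bytes, sorted_list, start: int,
                   count: int, line_size: int = 4) -> bytes:
    """Serve `count` line-sized reads from `start`, substituting entries from
    an address-sorted overlap list whenever its head matches the read address."""
    pending = deque(sorted(sorted_list))   # claim 7: the (possibly empty) sorted list
    out = []
    for addr in range(start, start + count * line_size, line_size):
        data = bytes(backing[addr:addr + line_size])   # claim 9: read the memory device
        # claim 9: does this device read belong to the same address as the head?
        if pending and pending[0][0] == addr:          # claim 8: head of the sorted list
            data = pending[0][1]                       # claim 10: replace with cached data
            pending.popleft()                          # claim 11: pop it from the sorted list
        out.append(data)                               # claim 10: return to the destination
    return b"".join(out)
```

Because both the list and the read stream advance in address order, a single head comparison per read suffices; no search of the list is needed.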
Referring to claim 18, Bender discloses a data storage device {“cached translation table”, [0237]}, comprising: means for storing data {“sending side… it gathers all of the information it needs [storing] into working registers”, [0455]}; and a controller coupled to the means for storing data {“Media Access Controller (MAC)… provides [couples to] the adapter [comprising the data storage device]”, see Fig. 38 [0535], 1st sentence}, wherein the controller is configured to: initiate a read command {input/output “provide [read/write] data movement services for message passing protocols such as IP and MPI” ([0541], 1st sentence) that utilizes “DMA packets are constructed by the Interpartition Communication facility on the chip, and are up to 2K bytes” ([0541], 3rd sentence)}; determine that an address associated {determining performed by “MAC arbiter 210 comprises two portions: a send block and a receive block [for claimed overlapping scanning]”, see Fig. 37 [0536], 1st sentence} with the read command overlaps with an address in cache {“lanes carrying DMA traffic can be set up to arbitrate on a single flit basis” that the read command DMA traffic associated with a request “Descriptor Sequence Number is a monotonically increasing counter that increments whenever a new descriptor is used” ([0598]) in turn that Descriptor Sequence is part of the “[cache] LMT entry associated with that channel” ([0233] 3rd sentence)}; Bender does not appear to explicitly disclose prioritize a direct memory access (DMA) operation over a cache read operation; and execute the read command using data from the cache and not from the means for storing data. However, Dalal discloses prioritize a direct memory access (DMA) operation {“management, classification and prioritization to create flow characteristics for packets of a [RDMA] session” (see Fig. 
77, 3rd sentence from bottom of [0393]) utilizing “remote RDMAs can extend this framework by allowing the same virtual switch to handle complex transport and the parsing of other data sources on the rack” ([0095])} over a cache read operation {versus cache read “and/or read data to be read out from the processing module” (see Fig. 60-0, [0315]) that includes cache data “offload processors 6008 can include a cache memory configured to store context information” (last two sentences [0317])}; and execute the read command using data from the cache {“storing of context information can include copying an offload processor 6008 cache.”, [0319] last sentence} and not from the means for storing data {“store current context information, and then switch to a new computing task, then subsequently retrieve the context information to resume the prior task”, [0319] 2nd sentence}. Bender and Dalal are analogous because they are from the same field of endeavor, managing DMA transfer(s). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Bender and Dalal before him or her, to modify Bender’s “cached translation table” ([0237]) incorporating Dalal’s “Accelerator Coherency Port (ACP) allows for coherent supplementation of the cache” (see Fig. 31, [0172]). The suggestion/motivation for doing so would have been to implement a Xockets software implementation, in which software sockets allow a natural partitioning of these loads between ARM and x86 processors: light-touch loads with frequent random accesses can be kept behind the socket abstraction on the ARM cores, while high-power number-crunching code can use the socket abstraction on the x86 cores (Dalal [0093] paraphrased). Therefore, it would have been obvious to combine Dalal with Bender to obtain the invention as specified in the instant claim(s). 
Neither Bender nor Dalal appears to explicitly disclose: initiate a read command of a plurality of read commands, wherein the plurality of read commands are received in an order; wherein the DMA operation comprises: skipping reading data for the address associated with the read command; inserting a placeholder for data associated with the address associated with the read command; retrieving data for the addresses associated with each of the plurality of read commands except the read command; delivering data associated with each of the plurality of read commands in the order received; and executing the read command using data from the cache and not from the means for storing data.

However, Haghighat discloses initiate a read command of a plurality of read commands {“may consume the update made by X (a read-after-write, or, RAW dependency) or overwrite it (a write-after-write, or WAW dependency)”, see Fig. 12, [0287], last two sentences}, wherein the plurality of read commands are received in an order {“updated by serializable sequences of operations”, see Fig. 12, [0286], 1st sentence}; wherein the DMA operation comprises: skipping reading data for the address associated with the read command {“privileged mode path 5204 bypasses the [read/write command] capability constraints of the EIC information 5202.”, see Fig. 52a, [1204], last sentence}; inserting a placeholder for data associated {associated data “EIC information readily [retrieved] substituted for the capability information 4802, 4803 (FIG. 48A)”, see Fig. 48B, [1128]} with the address associated with the read command {“platform's entire memory (e.g., various cache levels) with a single key”, see Fig. 48F, [1141], 1st sentence}; retrieving data for the addresses {“define ABIs for EIC that covers stack [read/write] accesses” (see Fig. 52a, [1204], 1st sentence) during the pause, including but not limited to “copying data in shared memory settings” (see Fig. 48a, [1127], last two sentences)} associated with each of the plurality of read commands except the read command {shared memory settings that include “Data accesses may even be further sub-divided as reads, writes, etc., with separate tag mask structures being defined for each type of access” (see Fig. 49E, [1163], last sentence)}; delivering data associated with each of the plurality of read commands in the order received {“ABI specifies how the respective loading/unloading is [delivered] performed in the reverse if a callee completes and the caller needs to resume from the point that the callee completed”, see Fig. 52a, [1204]}; and executing the read command using data from the cache and not from the means for storing data {“detect a context switch from the first function to a second function, and map [executing the read command’s] key identifier to a second key in response to the context switch” (see Fig. 49E, [1170]) from cache “cache mappings from tags to key IDs in a tag to [memory device] key ID lookaside buffer” (see Fig. 49E, [1162])}.

Bender/Dalal and Haghighat are analogous art because they are from the same field of endeavor, managing DMA transfer(s). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Bender/Dalal and Haghighat before him or her, to modify Bender/Dalal’s device by incorporating Haghighat’s “define ABIs for EIC that covers stack [read/write] accesses” (see Fig. 52a, [1204], 1st sentence). The suggestion/motivation for doing so would have been to implement a serverless services architecture 203e, such as one or more hardware-associated elements such as hardware-assisted virtual machines, CPUs, GPUs and accelerators (see Fig. 2a, [0005], last sentence), as an alternative to the challenges and shortcomings of existing FaaS solutions (Haghighat [0006], 1st sentence), including an extra layer of abstraction, which makes it more difficult to expose and exercise distinctive and/or new infrastructure features in processors, platforms, or systems, such as computer architectures for supporting hardware heterogeneity, by existing FaaS solutions (Haghighat [0008], 1st sentence). Therefore, it would have been obvious to combine Haghighat with Bender/Dalal to obtain the invention as specified in the instant claim(s).

As per claim 20, the rejection of claim 19 is incorporated, and Haghighat discloses wherein the DMA operation comprises: replacing the placeholder with the data associated {associated data “EIC information readily [retrieved] substituted for the capability information 4802, 4803 (FIG. 48A)”, see Fig. 48B, [1128]} with the address from the cache {“platform's entire memory (e.g., various cache levels) with a single key”, see Fig. 48F, [1141], 1st sentence}.

As per claim 21, the rejection of claim 12 is incorporated, and Haghighat discloses wherein the substitute data is accessible from the memory device {associated data “EIC information readily [retrieved] substituted for the capability information 4802, 4803 (FIG. 48A)”, see Fig. 48B, [1128]} using cache or DMA {“platform's entire memory (e.g., various cache levels) with a single key”, see Fig. 48F, [1141]} based on an access time {“When a [access time] context switch from the first function 4904 to a second function 4908 occur”, see Fig. 49C, [1150]}.

Response to Arguments

Applicant’s arguments, filed on 09/26/2025, have been considered but are rendered moot in view of the new ground(s) of rejection.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The following references are indicative of the current state of the art regarding claim 1’s “DMA”, “memory device from a cache”, or claim 2’s “memory buffer”: US 20250224890 A1, US 11893248 B2, US 11693605 B2, US 20230067236 A1, US 20200117378 A1, US 10360984 B2, and US 20170285940 A1.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER A. BARTELS, whose telephone number is (571) 270-3182. The examiner can normally be reached Monday-Friday, 9:00 am-5:30 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dr. Henry Tsai, can be reached at 571-272-4176. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.A.B./ Examiner, Art Unit 2184
/HENRY TSAI/ Supervisory Patent Examiner, Art Unit 2184
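For readers outside the art, the limitations at issue in claims 18 and 20 describe a concrete read-coherency flow: a read whose address overlaps the cache skips the media (DMA) fetch, a placeholder holds that command's slot so the plurality of reads is still delivered in arrival order, and the placeholder is later replaced with data from the cache. A minimal Python sketch of that flow (all names are hypothetical illustrations, not taken from the application's claims or from any cited reference):

```python
# Hypothetical sketch of the claimed read-coherency flow: reads whose address
# overlaps the cache skip the DMA fetch, a placeholder preserves their slot so
# ordering is kept, and the placeholder is later filled from the cache.

CACHE_HIT = object()  # placeholder marking a slot to be filled from cache


def serve_reads(read_addrs, cache, dma_read):
    """Deliver data for read_addrs in the order received.

    read_addrs: addresses of the read commands, in arrival order.
    cache: dict mapping address -> cached data (e.g. a host memory buffer).
    dma_read: callable fetching data for one address from the storage media.
    """
    results = []
    for addr in read_addrs:
        if addr in cache:
            # Address overlaps the cache: skip reading from the media and
            # insert a placeholder in this command's slot.
            results.append(CACHE_HIT)
        else:
            # The DMA operation retrieves data for all other read commands.
            results.append(dma_read(addr))
    # Replace each placeholder with the data from the cache, preserving the
    # original ordering of the commands.
    return [cache[a] if r is CACHE_HIT else r
            for a, r in zip(read_addrs, results)]
```

In this sketch the overlapping address never reaches `dma_read`, which mirrors executing that read command from the cache and not from the means for storing data while the DMA operation proceeds for the remaining addresses.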

Prosecution Timeline

Sep 06, 2023
Application Filed
Dec 28, 2024
Non-Final Rejection — §103
Mar 04, 2025
Interview Requested
Mar 13, 2025
Applicant Interview (Telephonic)
Mar 20, 2025
Examiner Interview Summary
Apr 02, 2025
Response Filed
Jul 30, 2025
Final Rejection — §103
Sep 10, 2025
Interview Requested
Sep 26, 2025
Response after Non-Final Action
Oct 27, 2025
Request for Continued Examination
Oct 29, 2025
Response after Non-Final Action
Dec 13, 2025
Non-Final Rejection — §103
Mar 20, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602339
STRAIN RELIEF FOR FLOATING CARD ELECTROMECHANICAL CONNECTOR
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12596662
METHOD FOR INTEGRATING INTO A DATA TRANSMISSION A NUMBER OF I/O MODULES CONNECTED TO AN I/O STATION, STATION HEAD FOR CARRYING OUT A METHOD OF THIS TYPE, AND SYSTEM HAVING A STATION HEAD OF THIS TYPE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12579090
METHOD AND SYSTEM FOR SHIFTING DATA WITHIN MEMORY
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572491
MEMORY WITH CACHE-COHERENT INTERCONNECT
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572486
Subgraph segmented optimization method based on inter-core storage access, and application
Granted Mar 10, 2026 (2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
79%
With Interview (+12.8%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 547 resolved cases by this examiner. Grant probability derived from career allow rate.
