Prosecution Insights
Last updated: April 19, 2026
Application No. 18/777,076

FINE-GRAINED DATA MOVER

Status: Non-Final OA (§103)
Filed: Jul 18, 2024
Examiner: PINGA, JASON MICHAEL
Art Unit: 2137
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Micron Technology, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 1y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (4 granted / 4 resolved), +45.0% vs TC average (above average)
Interview Lift: +0.0% across resolved cases with interview (minimal lift)
Average Prosecution Time: 1y 11m (fast prosecutor)
Career History: 23 total applications across all art units; 19 currently pending

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 56.9% (+16.9% vs TC avg)
§102: 25.6% (-14.4% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 4 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/21/2026 has been entered.

Response to Amendment

This Office action is in response to Applicant's communication filed 1/21/2026 in response to the Office action dated 11/21/2025. Claims 1, 8, and 15 have been amended. Claims 1-20 are pending in this application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Murphy (US 20150012717 A1) in view of He et al. (US 20170046102 A1), hereinafter He, and further in view of Taylor et al. (US 9558232 B1), hereinafter Taylor, and Teh et al. (US 20220100423 A1), hereinafter Teh.
Regarding claim 1, Murphy teaches a method for offloading memory access workloads from a host system in a distributed memory architecture (Paragraphs 13-14; Fig. 1, processors 102-1 to 102-P [host system] send requests to main memory devices 104-1 to 104-M), the method comprising: at a processor on a memory controller (Paragraph 25; Fig. 3, memory operations are performed by vault controller 328 which is a hardware logic unit [processor] that functions analogously to a memory controller): receiving a data mover call, the data mover call comprising a request to copy values, a request to gather values, a request to scatter values, or a request to set values (Paragraph 18; Fig. 2, in response to a request [call] from processor 202, performing a data gather operation), the data mover call specifying one or more source (Paragraphs 18, 23; Fig. 2, the request [call] specifies data in a data structure which may be in multiple, scattered [source] locations within memory portion 214) and one or more destination memory locations (Paragraph 18; Fig. 2, gathered data being stored in cache line 216 containing multiple entries [destination locations]); determining that all the data mover tasks for the data mover call have completed (Paragraph 23; Fig. 2, completing the data modifications [tasks] in response to a request [call] from processor 202); and sending a response to the data mover call to the host (Paragraph 23; Fig. 2, sending the modified data [response] to processor 202 [host]). 
Murphy does not explicitly teach creating a set of data mover tasks based upon the data mover call, the data mover tasks created based upon a specified mapping between data mover calls and data mover tasks, each data mover task including a read phase and a write phase; allocating each data mover task of the set to one of a plurality of memory interface slices, each memory interface slice communicating with one or more memory devices over a memory fabric; executing each data mover task in the set by executing each phase of each particular task of the set of data mover tasks by issuing memory access commands corresponding to the phase of the particular one of the set of data mover tasks, wherein each memory interface slice breaks down allocated data mover tasks into fabric protocol operations; concurrently executing a plurality of the fabric protocol operations of the set of data mover tasks on the plurality of memory interface slices, the plurality of fabric protocol operations targeting memory locations specified in the data mover call; and performing write combining by passing, via a write data exporting buffer of a first memory interface slice, a partial write from a first data mover task to a second memory interface slice, wherein the second memory interface slice merges the partial write with a write from a second data mover task to reduce a number of fabric protocol operations.

However, He teaches allocating each data mover task of the set to one of a plurality of memory interface slices, each memory interface slice communicating with one or more memory devices over a memory fabric (Paragraphs 38, 40-41; Figs. 1 and 2, NAND Flash interface driver 200 directs [data mover] commands/tasks to a plurality of job processors 201 [interface slices], wherein the plurality of job processors 201 communicates with a plurality of NAND flash memory units 101 via multiplexing circuitry 202 [memory fabric]), wherein each memory interface slice breaks down allocated data mover tasks into fabric protocol operations (Paragraph 83, NAND flash interface breaks down a task into a sequence of interface signals [fabric protocol operations]); and concurrently executing a plurality of the fabric protocol operations of the set of data mover tasks on the plurality of memory interface slices, the plurality of the fabric protocol operations targeting memory locations specified in the data mover call (Paragraphs 40-41, 66, 83; Fig. 2, job processors 201 [memory interface slices] work in parallel [concurrently execute] on jobs (consisting of tasks which consist of interface signals [fabric protocol operations]) that target addresses [memory locations]).

Murphy and He are analogous art because they are in the same field of endeavor, that being data movement management. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Murphy to further include the breaking down of tasks into fabric protocol operations according to the teachings of He. The motivation for doing so would have been to increase compatibility with different NAND flash structures (He, Paragraph 87).
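The allocation pattern mapped to He above (tasks distributed over memory interface slices, each slice expanding its tasks into fabric protocol operations, slices operating in parallel) can be sketched as follows. This is an illustrative model only; the function and structure names are hypothetical and do not come from Murphy, He, Taylor, or Teh.

```python
# Hypothetical sketch: allocating data mover tasks across memory interface
# slices, each of which breaks a task down into fabric protocol operations.

def make_fabric_ops(task):
    """Break one data mover task into low-level fabric protocol operations."""
    src, dst, nbytes = task
    return [("READ", src, nbytes), ("WRITE", dst, nbytes)]

def allocate_round_robin(tasks, num_slices):
    """Assign each task to one of `num_slices` memory interface slices."""
    slices = [[] for _ in range(num_slices)]
    for i, task in enumerate(tasks):
        slices[i % num_slices].append(task)
    return slices

def execute(slices):
    """Each slice expands its tasks into fabric ops; in hardware the slices
    run concurrently, which is modeled here as independent per-slice lists."""
    return [[op for t in s for op in make_fabric_ops(t)] for s in slices]

tasks = [(0x1000, 0x2000, 64), (0x1040, 0x2040, 64), (0x1080, 0x2080, 64)]
per_slice_ops = execute(allocate_round_robin(tasks, num_slices=2))
```

With two slices, tasks 0 and 2 land on slice 0 and task 1 on slice 1, so the two op streams can proceed in parallel against different memory devices.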
Murphy in view of He does not explicitly teach creating a set of data mover tasks based upon the data mover call, the data mover tasks created based upon a specified mapping between data mover calls and data mover tasks, each data mover task including a read phase and a write phase; executing each data mover task in the set by executing each phase of each particular task of the set of data mover tasks by issuing memory access commands corresponding to the phase of the particular one of the set of data mover tasks; and performing write combining by passing, via a write data exporting buffer of a first memory interface slice, a partial write from a first data mover task to a second memory interface slice, wherein the second memory interface slice merges the partial write with a write from a second data mover task to reduce a number of fabric protocol operations. However, Taylor teaches creating a set of data mover tasks based upon the data mover call (Col. 16, lines 3-6, partitioning a bulk copy operation [call] into multiple, smaller copy requests [set of tasks]), the data mover tasks created based upon a specified mapping between data mover calls and data mover tasks (Col. 16, lines 28-43; Fig. 6, creating a specific number of corresponding requests [tasks] that target [map to] the same locations as the original bulk copy operation [call]), each data mover task including a read phase and a write phase (Col. 1, lines 45-58, a copy operation/request includes a read command and a write command); executing each data mover task in the set by executing each phase of each particular task of the set of data mover tasks by issuing memory access commands corresponding to the phase of the particular one of the set of data mover tasks (Col. 1, lines 45-58, Col. 16, lines 3-6, each of the requests [tasks] within the bulk copy operation copies a portion of data, which entails a read command and a write command [phases]). 
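Taylor's partitioning, as mapped above, splits one bulk copy call into a set of smaller tasks, each carrying a read phase and a write phase. A minimal sketch under an assumed fixed chunk size (the 64-byte granularity and the names below are illustrative, not values from the reference):

```python
# Hypothetical sketch: partitioning a bulk copy call into per-chunk data
# mover tasks, each with a read phase and a write phase.

CHUNK = 64  # bytes moved per task (assumed granularity)

def partition_copy(src, dst, nbytes):
    """Split a bulk copy [src, src+nbytes) -> dst into per-chunk tasks."""
    tasks = []
    for off in range(0, nbytes, CHUNK):
        n = min(CHUNK, nbytes - off)
        tasks.append({
            "read":  ("READ",  src + off, n),   # read phase command
            "write": ("WRITE", dst + off, n),   # write phase command
        })
    return tasks

tasks = partition_copy(src=0x1000, dst=0x8000, nbytes=200)
```

A 200-byte copy becomes four tasks (three full 64-byte chunks plus an 8-byte tail), each independently schedulable, which is the throughput rationale the rejection attributes to Taylor.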
Murphy, He, and Taylor are analogous art because they are in the same field of endeavor, that being data movement management. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Murphy in view of He to further include the partitioning of large operations into requests according to the teachings of Taylor. The motivation for doing so would have been to increase operational throughput by allowing the execution of smaller data requests in parallel according to the capabilities of the system (Taylor, Col. 19, lines 44-60).

Murphy in view of He, further in view of Taylor does not explicitly teach performing write combining by passing, via a write data exporting buffer of a first memory interface slice, a partial write from a first data mover task to a second memory interface slice, wherein the second memory interface slice merges the partial write with a write from a second data mover task to reduce a number of fabric protocol operations. However, Teh teaches performing write combining by passing, via a write data exporting buffer of a first memory interface slice, a partial write from a first data mover task to a second memory interface slice, wherein the second memory interface slice merges the partial write with a write from a second data mover task to reduce a number of fabric protocol operations (Paragraph 64; Fig. 1, sending [first] write command data from an AXI interface stored in write data & strobe FIFO buffer 27 to be merged with existing [second] write command data, which is subsequently sent to the DFI interface). Murphy, He, Taylor, and Teh are analogous art because they are in the same field of endeavor, that being data movement management. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Murphy in view of He, further in view of Taylor to further include the write combining according to the teachings of Teh. The motivation for doing so would have been to reduce the number of commands issued to memory and thus reserve command bandwidth for other commands (Teh, Paragraph 78).

Regarding claim 2, Murphy in view of He, further in view of Taylor and Teh teaches the method of claim 1, wherein the data mover call comprises a first memory location, a stride amount, an element size, and a number of elements (Murphy, Paragraphs 28, 31; Fig. 4, a strided request [call] includes a base address [first memory location], a stride [amount], an indication of the size of elements, and an indication of a number of elements), and wherein the set of data mover tasks access a plurality of memory locations indicated by the number of elements and starting with the first memory location and incrementing by the stride amount (Murphy, Paragraphs 28-29, 31; Fig. 4, strided requests comprise storing elements at locations starting at a base address [first memory location] and adding [incrementing] a stride to the base address).

Regarding claim 3, Murphy in view of He, further in view of Taylor and Teh teaches the method of claim 1, wherein the data mover call comprises a location of a list of addresses (Murphy, Paragraphs 28-29; Fig. 4, address based request 440 [call] includes a list of addresses 454-1 to 454-N) and wherein one of the set of data mover tasks comprises fetching the list of addresses (Murphy, Paragraph 28; Fig. 4, as part of a request, traversing the data structure 440 to obtain [fetch] the list of addresses).
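The strided request of claim 2 (base address, stride, element count) and the offset-based request addressed under claim 4 below reduce to simple address arithmetic. A sketch with hypothetical function names, for illustration only:

```python
# Hypothetical sketch of the address streams implied by the strided and
# offset-based data mover calls discussed in this office action.

def strided_addresses(base, stride, num_elements):
    """Strided style: start at `base`, increment by `stride` per element."""
    return [base + i * stride for i in range(num_elements)]

def offset_addresses(base, offsets, element_size):
    """Offset style: a base location plus a fetched list of offset indices."""
    return [base + off * element_size for off in offsets]

strided = strided_addresses(base=0x4000, stride=0x100, num_elements=4)
scattered = offset_addresses(base=0x4000, offsets=[0, 7, 3], element_size=8)
```

Each generated address becomes the target of one data mover task, so element count directly bounds the number of tasks created for the call.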
Regarding claim 4, Murphy in view of He, further in view of Taylor and Teh teaches the method of claim 1, wherein the data mover call comprises a first location and a second location storing a list of offsets (Murphy, Paragraphs 28, 30; Fig. 4, offset based request 444 [call] includes a base address 458 [first location] and a list of offset indices 460-1 to 460-N [at a second location]) and wherein one of the set of data mover tasks comprises fetching the list of offsets (Murphy, Paragraph 28; Fig. 4, as part of a request, traversing the data structure 444 to obtain [fetch] the list of offset indices).

Regarding claim 5, Murphy in view of He, further in view of Taylor and Teh teaches the method of claim 1, wherein the data mover tasks comprise a task to read from one contiguous memory location and write data into one to many contiguous memory locations (Taylor, Col. 1, lines 45-58, Col. 16, lines 28-43; Fig. 6, requests [tasks] include copying [reading] data from contiguous source area 510 [and writing] to contiguous target locations T1, T2, T3, and T4), a task to read from one to many memory locations and write data into one contiguous memory location (Murphy, Paragraph 18; Fig. 2, gathering [reading] data from scattered locations in memory portion 214 [and writing] into contiguous cache line 216), and a task to write into one to many memory locations (Murphy, Paragraphs 18, 21; Fig. 2, storing [writing] the data back into the scattered locations within memory portion 214).

Regarding claim 6, Murphy in view of He, further in view of Taylor and Teh teaches the method of claim 1, wherein the memory interface slices are commanded to start the memory commands at a same time (He, Paragraph 40; Fig. 2, job processors 201 [memory interface slices] handle commands in parallel [same time]).
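The write-combining limitation mapped to Teh in the claim 1 rejection merges a partial write exported from one slice with another slice's write to the same target, so that a single fabric write is issued instead of two. A toy model, assuming a byte-map representation of partial writes (an illustration of the concept, not Teh's FIFO mechanism):

```python
# Hypothetical sketch of write combining: two partial writes to the same
# line are merged into one combined fabric write operation.

def merge_partial_write(buffered, incoming):
    """Merge two partial writes to the same line into one write operation.

    Each write is (line_addr, {byte_offset: value}); overlapping offsets
    take the incoming (newer) value.
    """
    addr_a, bytes_a = buffered
    addr_b, bytes_b = incoming
    assert addr_a == addr_b, "write combining requires a shared target line"
    merged = dict(bytes_a)
    merged.update(bytes_b)      # newer data wins on overlap
    return (addr_a, merged)     # a single combined fabric write

w1 = (0x2000, {0: 0xAA, 1: 0xBB})   # partial write exported from slice 0
w2 = (0x2000, {1: 0xCC, 2: 0xDD})   # write from a task on slice 1
combined = merge_partial_write(w1, w2)
```

The benefit the rejection cites from Teh is exactly this reduction: one merged command reaches memory, reserving command bandwidth for other traffic.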
Regarding claim 7, Murphy in view of He, further in view of Teh teaches the method of claim 1 and the memory controller executing data mover calls from a host (Murphy, Paragraphs 25-26; Fig. 3, vaults 324 (including vault [memory] controllers 328) receive I/O requests [calls] from requesting device 302). Murphy in view of He, further in view of Teh does not explicitly teach further comprising: concurrently executing, by the memory controller, a second data mover call from the host at a same time as executing the first data mover call. However, Taylor teaches further comprising: concurrently executing, by the memory controller, a second data mover call from the host at a same time as executing the first data mover call (Col. 17, lines 34-36, requests are issued [executed] in parallel [concurrently]). Murphy, He, Teh, and Taylor are analogous art because they are in the same field of endeavor, that being data movement management. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Murphy in view of He, further in view of Teh to further include the parallel execution according to the teachings of Taylor. The motivation for doing so would have been to increase operational throughput by executing requests in parallel.

Regarding claim 8, this is a memory controller device version of the claimed method discussed above (claim 1, respectively), in which Murphy in view of He, further in view of Taylor and Teh also teaches a memory controller device (Murphy, Paragraph 25; Fig. 3, vault controller 328 is a hardware logic unit which functions analogously to a memory controller). The remaining claim limitations have been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.
Regarding claim 9, this is a memory controller device version of the claimed method discussed above (claim 2, respectively), wherein all claim limitations have also been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.

Regarding claim 10, this is a memory controller device version of the claimed method discussed above (claim 3, respectively), wherein all claim limitations have also been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.

Regarding claim 11, this is a memory controller device version of the claimed method discussed above (claim 4, respectively), wherein all claim limitations have also been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.

Regarding claim 12, this is a memory controller device version of the claimed method discussed above (claim 5, respectively), wherein all claim limitations have also been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.

Regarding claim 13, Murphy in view of He, further in view of Taylor and Teh teaches the memory controller device of claim 8, wherein the operations further comprise classifying the data mover call into a selected category of one of a plurality of predetermined categories (Taylor, Col. 5, lines 21-28, classifying operations [calls] into read or write operations); and wherein creating the set of data mover tasks based upon the data mover call comprises utilizing the selected category to determine a launch rate, a slice interleaving policy, or an allocation policy (Taylor, Col. 5, lines 21-28, Col. 9, lines 49-59; Fig. 2, a WUT (write) command characterizes [determines] a data movement operation including a source and target area [allocation policy] resulting in one or more data requests [tasks]).

Regarding claim 14, Murphy in view of He, further in view of Taylor and Teh teaches the memory controller device of claim 8, wherein a data mover task includes a fetch phase (Murphy, Paragraph 28; Fig. 4, as part of a request, traversing a data structure 440/444 to obtain [fetch] information within the request).

Regarding claim 15, this is a non-transitory machine-readable medium version of the claimed method discussed above (claim 1, respectively), in which Murphy in view of He, further in view of Taylor and Teh also teaches a non-transitory machine-readable medium storing instructions (Murphy, Paragraph 34, a non-transitory computing device readable medium storing executable instructions). The remaining claim limitations have been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.

Regarding claim 16, this is a non-transitory machine-readable medium version of the claimed method discussed above (claim 2, respectively), wherein all claim limitations have also been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.

Regarding claim 17, this is a non-transitory machine-readable medium version of the claimed method discussed above (claim 3, respectively), wherein all claim limitations have also been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.
Regarding claim 18, this is a non-transitory machine-readable medium version of the claimed method discussed above (claim 4, respectively), wherein all claim limitations have also been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.

Regarding claim 19, this is a non-transitory machine-readable medium version of the claimed method discussed above (claim 5, respectively), wherein all claim limitations have also been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.

Regarding claim 20, this is a non-transitory machine-readable medium version of the claimed memory controller device discussed above (claim 13, respectively), wherein all claim limitations have also been addressed and/or covered in the cited areas as set forth above. Thus, accordingly, this claim is also obvious over Murphy in view of He, further in view of Taylor and Teh.

Response to Arguments

Applicant's arguments (see pages 8-10 of the remarks) filed 1/21/26 with respect to the rejections of claims 1, 8, and 15 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Murphy, He, Taylor, and Teh.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jason Pinga, whose telephone number is (571) 272-2620. The examiner can normally be reached on M-F 8:30am-6pm ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Arpan Savla, can be reached on (571) 272-1077. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.M.P./
Examiner, Art Unit 2137

/Arpan P. Savla/
Supervisory Patent Examiner, Art Unit 2137

Prosecution Timeline

Jul 18, 2024
Application Filed
Jul 21, 2025
Non-Final Rejection — §103
Oct 23, 2025
Response Filed
Nov 18, 2025
Final Rejection — §103
Jan 21, 2026
Request for Continued Examination
Jan 27, 2026
Response after Non-Final Action
Feb 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585391
MANAGING ALLOCATION OF HALF GOOD BLOCKS
2y 5m to grant • Granted Mar 24, 2026
Patent 12572276
DATA COMPRESSION METHODS FOR BLOCK-BASED STORAGE SYSTEMS
2y 5m to grant • Granted Mar 10, 2026
Patent 12511072
STORAGE DEVICE AND AN OPERATING METHOD OF A STORAGE CONTROLLER
2y 5m to grant • Granted Dec 30, 2025
Based on this examiner's 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 100%
Grant Probability with Interview: 99% (+0.0%)
Median Time to Grant: 1y 11m
PTA Risk: High
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
