Prosecution Insights
Last updated: April 19, 2026
Application No. 18/107,980

METHOD AND APPARATUS FOR OFFLOADING MEMORY/STORAGE SHARDING FROM CPU RESOURCES

Non-Final OA: §102, §112
Filed
Feb 09, 2023
Examiner
BLUST, JASON W
Art Unit
2132
Tech Center
2100 — Computer Architecture & Software
Assignee
Intel Corporation
OA Round
1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 79% (220 granted / 277 resolved; +24.4% vs TC avg), above average
Interview Lift: strong, +16.2% among resolved cases with interview
Typical Timeline: 2y 3m avg prosecution; 24 applications currently pending
Career History: 301 total applications across all art units

Statute-Specific Performance

§101: 6.6% (-33.4% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 23.8% (-16.2% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 277 resolved cases
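The "vs TC avg" deltas above are internally consistent: subtracting each delta from its statute's rate recovers the same baseline for every statute, which is presumably the black-line Tech Center average estimate. A quick check, using only the figures shown in this panel:

```python
# Recover the implied Tech Center average from each statute's displayed
# rate and its "vs TC avg" delta. The derivation is simply rate - delta;
# figures are taken from the panel above.
rates = {"101": 6.6, "103": 46.2, "102": 23.8, "112": 13.4}
deltas = {"101": -33.4, "103": +6.2, "102": -16.2, "112": -26.6}

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
# Every statute implies the same ~40% baseline estimate.
```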

Office Action

§102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 2-7 recite the limitation "the logic circuitry". There is insufficient antecedent basis for this limitation in the claim. For purposes of applying prior art, it is assumed that "the logic circuitry" is referring to "circuitry" in claim 1.

Claim 8 recites "logic associated with the processing core and software, ASIC block, and/or FPGA and firmware is to perform"; it is unclear to what the logic refers and whether it is internal or external to the processing core, ASIC block, and/or FPGA. The examiner recommends using the words "coupled" or "connected" instead of "associated".

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bachmutsky (US 2022/0222010, as listed on the IDS filed 6/13/2024).

In regards to claim 1, Bachmutsky teaches an apparatus, comprising: an ingress path to receive a memory and/or storage access request generated by a central processing unit (CPU); an egress path to direct a response to the access request to the CPU (fig. 7, switch 700 with ingress and egress buffers to receive/send messages/data; ¶26, an application (i.e., from a CPU of a server, see fig. 2) sends (i.e., via an ingress path) memory requests to the switch, and the switch responds to the request by supplying the requested data to the application (i.e., to a CPU of a server via an egress path)); circuitry coupled to the ingress path and the egress path (fig. 2, 7, switch), the circuitry to divide the access request into multiple access requests and direct the multiple access requests toward a network, the circuitry to receive respective multiple responses to the multiple access requests and construct the response (¶26, the switch can split a request into N separate requests, each going to a different memory pool of the network; the separate responses are then aggregated into a single response back to the application).

In regards to claim 2, Bachmutsky further teaches wherein the logic circuitry is to refer to information that defines which memory and/or storage addresses are to have their memory and/or storage access requests sharded (¶50, fig. 4: the switch maintains an interleaving configuration table 438 that includes an address range field 442, a list of pools 444, and an interleave ID field 440 identifying a corresponding range in global memory space associated with certain interleaving configurations, which define which memory pools are employed for different memory regions and the associated type of memory allocation interleaving applied; fig. 6, step 604 also uses the system address decoder to identify what interleaving range a request belongs to).

In regards to claim 3, Bachmutsky further teaches wherein the information is to be stored in memory that is coupled to the logic circuitry (fig. 7, ¶78: the request table 428, configuration table 438, and system address decoder 418 are stored in memory 716 of the switch).

In regards to claims 4-5, Bachmutsky further teaches wherein the logic circuitry is to construct an in flight record for the multiple access requests, and wherein the logic circuitry is to delete the record as a consequence of the respective multiple responses having been received (¶49, fig. 4: pending queue logic 408 maintains data in a request table 428 to track pending responses, i.e., an in flight record of the multiple access requests, which is then used to gather the data from the multiple responses and combine them into a single response returned to the originator; once the response is sent to the originator, the multiple requests would no longer be considered pending and are therefore removed from the request table 428 used by the pending queue logic 408).

In regards to claim 6, Bachmutsky further teaches wherein, if the memory and/or storage access request is a write request, the logic circuitry is to manipulate the address of the write request to generate a different, unique address for each of the multiple access requests (fig. 6, ¶61-69: for a received write request (step 602), the decoder can identify the interleaving ranges (i.e., addresses of the different pools to send the multiple requests to), and then can issue either a multicast to all the memory pools to write the data to their indicated memory addresses, or multiple unicast requests with addresses (i.e., different, unique addresses) can be sent to the applicable memory nodes/pools).

In regards to claim 7, Bachmutsky further teaches wherein, if the memory and/or storage access request is a read request, the logic circuitry is to receive portions of read data with the respective multiple responses and combine the portions of data into complete read data (¶26, ¶67, fig. 6: for a read request (steps 616, 618), the read responses are gathered from the memory nodes, and the data portions received from each memory pool are aggregated (i.e., combined into complete read data) and returned to the requestor).

In regards to claim 8, Bachmutsky teaches an infrastructure processing unit, comprising: a) a processing core; b) an ASIC block and/or a field programmable gate array (FPGA); c) at least one machine readable medium having software to execute on the processing core and/or firmware to program the FPGA; wherein logic associated with the processing core and software, ASIC block, and/or FPGA and firmware is to perform i) through vi) below (¶76-78, fig. 7: CPU/IPU 714, with programmable logic FPGA 720 and/or preprogrammed logic, ASICs, and firmware 718 storage (machine readable medium) that contains instructions/modules executed by the CPU/IPU to implement the FPGA 720 and other associated functions): i) receive a memory and/or storage access request generated by a central processing unit (CPU); ii) divide the access request into multiple access requests; iii) direct the multiple access requests to a network; iv) receive respective multiple responses to the multiple access requests that were sent to the IPU from the network; v) construct a response to the access request from the respective multiple responses; and vi) send the response to the CPU (fig. 7, switch 700 with ingress and egress buffers to receive/send messages/data; ¶26, an application (i.e., from a CPU of a server, see fig. 2) sends (i.e., via an ingress path) memory requests to the switch, and the switch responds to the request by supplying the requested data to the application (i.e., to a CPU of a server via an egress path); the switch can split a request into N separate requests, each going to a different memory pool of the network, and the separate responses are then aggregated into a single response back to the application (i.e., a response sent back to the requesting CPU)).

In regards to claims 9-14, the claims are substantially similar to respective claims 2-7 and rejected for the same reasons as detailed above.

In regards to claim 15, Bachmutsky teaches a computing system, comprising (fig. 8B, system 820): a) a network (fig. 8B, ¶80-81: interconnection links 826 and switches 818 that are interconnected to the memory and storage pools and compute drawers of the racks (i.e., a network)); b) a memory pool coupled to the network (fig. 8B, ¶81: memory pools in pooled memory drawers 812 and pooled memory sled 816); c) a storage pool coupled to the network (fig. 8B, ¶81: storage resources in pooled storage drawers 808); d) a plurality of central processing units (CPUs) coupled to the network (fig. 8B, ¶81: compute resources in pooled compute drawers 806 (a plurality of CPUs)); e) circuitry to perform i) through vi) below (fig. 8B, ¶80, switch 818): i) receive a memory or storage access request from one of the CPUs; ii) divide the access request into multiple access requests; iii) cause the multiple access requests to be sent to the memory pool or storage pool over the network; iv) receive respective multiple responses to the multiple access requests that were sent to the circuitry by the memory pool or storage pool over the network; v) construct a response to the access request from the respective multiple responses; and vi) send the response to the CPU (fig. 7, switch 700 with ingress and egress buffers to receive/send messages/data; ¶26, an application (i.e., from a CPU of a server, see fig. 2) sends (i.e., via an ingress path) memory requests to the switch, and the switch responds to the request by supplying the requested data to the application (i.e., to a CPU of a server via an egress path); the switch can split a request into N separate requests, each going to a different memory pool of the network, and the separate responses are then aggregated into a single response back to the application (i.e., a response sent back to the requesting CPU)).

In regards to claim 16, Bachmutsky further teaches wherein the circuitry is within the network (fig. 8B, ¶80-81: switches 818 (circuitry) are part of the "network").

In regards to claim 17, Bachmutsky further teaches wherein the circuitry is between the CPU and the network (fig. 8B, ¶80-81: switches 818 (circuitry) are between the CPUs 806 and the other part of the "network", links 826 connecting the memory/storage resources).

In regards to claim 18, the claim is substantially similar to respective claim 2 and rejected for the same reasons as detailed above.

In regards to claims 19-20, the claims are substantially similar to respective claims 4-5 and rejected for the same reasons as detailed above.

EXAMINER'S NOTE

Examiner has cited particular paragraphs, figures, and/or columns and line numbers in the references applied to the claims above for the convenience of the Applicants. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested from the Applicants, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Sorenson (US 2020/0012684) discloses splitting requests to multiple memory pools and aggregating the responses (see fig. 2).

Riahi (US 2019/0129640) discloses deconstructing data into chunklets to create warplets and store the warplets, which can later be retrieved and assembled into the original data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON W BLUST, whose telephone number is (571) 272-6302. The examiner can normally be reached 12-8:30 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hosain Alam, can be reached at (571) 272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JASON W BLUST/
Primary Examiner, Art Unit 2132
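The mechanism the §102 rejection reads onto the claims (an interleaving table that decides which pools serve an address, division of one CPU request into per-pool sub-requests, an in-flight record that is deleted once all responses arrive, and aggregation of the partial responses) can be sketched as a minimal model. This is purely illustrative: all names (`ShardingSwitch`, `interleave_table`, etc.) are hypothetical and mirror neither the application's claims nor Bachmutsky's actual implementation.

```python
# Illustrative model of the sharding flow discussed in the rejection:
# one read request is divided across memory pools per an interleaving
# table, tracked in an in-flight record, and the per-pool data portions
# are recombined into complete read data. All names are hypothetical.

class ShardingSwitch:
    def __init__(self, interleave_table):
        # interleave_table: list of ((lo, hi), pool_ids) entries, loosely
        # analogous to the configuration table 438 cited above.
        self.interleave_table = interleave_table
        self.in_flight = {}  # request_id -> list of pending sub-requests

    def pools_for(self, addr):
        # Decode which pools serve this address (cf. fig. 6, step 604).
        for (lo, hi), pools in self.interleave_table:
            if lo <= addr < hi:
                return pools
        return []

    def shard_read(self, request_id, addr, length):
        # Divide one read into per-pool sub-requests; each sub-request
        # carries a distinct address (claim 1 / claim 6 analog).
        pools = self.pools_for(addr)
        chunk = length // len(pools)
        subs = [(pool, addr + i * chunk, chunk) for i, pool in enumerate(pools)]
        self.in_flight[request_id] = subs  # in-flight record (claims 4-5 analog)
        return subs

    def complete_read(self, request_id, portions):
        # Combine the per-pool portions into complete read data (claim 7
        # analog), then delete the in-flight record once all arrived.
        assert request_id in self.in_flight
        del self.in_flight[request_id]
        return b"".join(portions)

# Usage: a 4 KiB read over a region interleaved across two pools.
switch = ShardingSwitch([((0x0000, 0x10000), ["poolA", "poolB"])])
subs = switch.shard_read("req1", addr=0x1000, length=4096)
data = switch.complete_read("req1", [b"\x00" * 2048, b"\x01" * 2048])
```

One sub-request per pool with a distinct offset corresponds to the "multiple unicast requests with addresses" path the examiner cites for claim 6; a real device would also handle remainders, multicast writes, and out-of-order responses.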

Prosecution Timeline

Feb 09, 2023: Application Filed
Mar 28, 2023: Response after Non-Final Action
Mar 20, 2026: Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596485
HOST DEVICE GENERATING BLOCK MAP INFORMATION, METHOD OF OPERATING THE SAME, AND METHOD OF OPERATING ELECTRONIC DEVICE INCLUDING THE SAME
2y 5m to grant Granted Apr 07, 2026
Patent 12554417
DISTRIBUTED DATA STORAGE CONTROL METHOD, READABLE MEDIUM, AND ELECTRONIC DEVICE
2y 5m to grant Granted Feb 17, 2026
Patent 12535954
STORAGE DEVICE AND OPERATING METHOD THEREOF
2y 5m to grant Granted Jan 27, 2026
Patent 12530120
Maximizing Data Migration Bandwidth
2y 5m to grant Granted Jan 20, 2026
Patent 12530118
DATA PROCESSING METHOD AND RELATED DEVICE
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 96% (+16.2%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 277 resolved cases by this examiner. Grant probability derived from career allow rate.
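The headline projections follow directly from the career counts cited in the footnote. A quick sanity check, assuming (as stated) that grant probability is the raw career allow rate and that the with-interview figure simply adds the +16.2 percentage-point interview lift:

```python
# Reproduce the projection arithmetic from the examiner's career counts.
# Assumes grant probability = raw career allow rate, and the
# with-interview figure = that rate plus the +16.2-point interview lift.
granted, resolved = 220, 277
allow_rate = 100 * granted / resolved   # ~79.4%, displayed as 79%
with_interview = allow_rate + 16.2      # ~95.6%, displayed as 96%
```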
