Prosecution Insights
Last updated: April 19, 2026
Application No. 18/601,259

MULTI-DIE DOT-PRODUCT ENGINE TO PROVISION LARGE SCALE MACHINE LEARNING INFERENCE APPLICATIONS

Non-Final OA: §102, §112
Filed: Mar 11, 2024
Examiner: DINH, PAUL
Art Unit: 2851
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Hewlett Packard Enterprise Development LP
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 89% (936 granted / 1047 resolved), +21.4% vs TC avg (above average)
Interview Lift: +3.9% among resolved cases with interview (minimal, roughly a +4% lift)
Typical Timeline: 2y 7m average prosecution; 19 applications currently pending
Career History: 1066 total applications across all art units
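The headline figures above are internally consistent with the raw counts the dashboard reports. A minimal sketch of how they follow from those counts (the function names and the 100% cap are assumptions for illustration, not the product's actual formula):

```python
# Reconstructing the dashboard's headline figures from its stated counts
# (936 granted of 1047 resolved; +3.9% interview lift). Illustrative only.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Apply the interview lift, capped at 100%."""
    return min(base_rate + lift, 100.0)

base = allow_rate(936, 1047)         # ~89.4%, displayed as 89%
boosted = with_interview(base, 3.9)  # ~93.3%, displayed as 93%
```

Rounding each result to the nearest whole percent reproduces the displayed 89% and 93%.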

Statute-Specific Performance

§101: 16.6% (-23.4% vs TC avg)
§103: 8.6% (-31.4% vs TC avg)
§102: 39.4% (-0.6% vs TC avg)
§112: 23.4% (-16.6% vs TC avg)
Tech Center average is an estimate • Based on career data from 1047 resolved cases
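The per-statute deltas all appear to be measured against a single flat baseline: each stated rate minus its signed delta comes back to 40.0%, which fits the footnote that the Tech Center average is an estimate. A quick check (values copied from the figures above; the dictionary layout is illustrative):

```python
# Each statute's rate minus its signed "vs TC avg" delta recovers the same
# baseline, suggesting the chart compares against a flat 40% TC estimate.
rates = {
    "101": (16.6, -23.4),
    "103": (8.6, -31.4),
    "102": (39.4, -0.6),
    "112": (23.4, -16.6),
}

baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
# every entry works out to 40.0
```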

Office Action

§102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

OFFICE ACTION

This is a response to the application filed on 3/11/2024. Claims 11-20 are pending.

Objection

In claim 11, the first occurrence of the acronym "DPE" must be spelled out.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 11-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 11 is rejected because the limitations "a first chip", "a second chip", and "a third chip" are not clearly supported by the disclosure. Claim 11 is further rejected because the word "waiting" in the limitation "a third chip waiting during a successive interval" is not clearly supported by the disclosure. Claims 12-20 are rejected because they depend directly or indirectly from claim 11.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

INSOFAR AS THE LIMITATIONS ARE UNDERSTOOD AND GIVEN THE BROADEST REASONABLE INTERPRETATION, claims 11-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by the prior art of record, Appu (US 2021/0103550).
Regarding claim 11, the prior art discloses: a method of pipelining data (see data pipeline, data processing, data transferring/throughput in one or more of fig 2-8, 12, 16-17, 22-28, 31-34, 38) to multiple tiles (fig 13, 16-17, 24-26, 28-29, 34, 38) of a multi-chip interface (see chipset interface, chiplet interface, multi-chip interface, chip array interface in one or more of fig 2, 4, 6-8, 13-16, 24-28, 34, 38, 41), comprising:

initiating an inference operation (see inferencing system/performing/operation/platform/functionality/processor/algorithms/computation in one or more of par 18, 101, 103, 170, 175, 184, 204, 213-221, 383, 398);

initiating a pipeline (see data pipeline, data processing, data transferring in one or more of fig 2-8, 12, 16-17, 22-28, 31-34, 38) associated with the inference operation spanned across a plurality of DPE (see dot product instructions/logic/operations/processing/elements/calculations in one or more of par 43-44, 51, 102, 184, 293, 303, 358, 362-375, 402-430, 462-468) chips of the multi-chip interface, wherein the pipeline comprises a plurality of consecutive intervals (consecutive/successive intervals in terms of one or more of the following: sparse data, bit streams, command sequence (abstract, fig 22, 31-34, 37); transferred data can be stored to on-chip memory (e.g., parallel processor memory 222) during processing, then written back to system memory (fig 2); pipeline manager 232 can facilitate the distribution of processed data by specifying destinations for processed data to be distributed via the data crossbar (fig 2); processing can be performed over consecutive clock cycles (fig 2); throughput communication protocol (fig 4); data are updated/compressed and transferred between nodes (fig 12); pipeline implements media operations via requests (par 245); execution is multi-issue per clock to pipelines (par 278); pipelines send thread initiation requests to thread execution logic 1800 via thread spawning and dispatch logic (par 283); synchronization instructions (e.g., wait, send) (par 303); in response to a pipeline flush, the command parser for the graphics processor will pause command processing (par 319); command sequences differ based on the active pipeline for operations (par 323); pipeline state commands may also be able to selectively disable or bypass certain pipeline elements (par 324, 327, 404, 414); command execution may be triggered using a pipeline synchronization command to flush the command sequence (par 326); pipeline commands are queued/triggered (par 329); number of operations varies in throughput (par 373); data-aware sparsity with compression (par 376); instructions and operands to perform successive dot product operations (par 402); for a set of consecutive dependent dot product instructions, the instructions are executed in-order, such that a first dot product instruction is completed before a subsequent dependent instruction enters the execution pipeline (par 428));

each of the multiple tiles requesting data (see one or more of par 68, 72, 74, 80, 137, 140, 149, 158, 161, 245, 251, 261, 277-278, 283, 294, 383, 457) during an interval (see the above-mentioned intervals); and

as the pipeline advances, a first tile of the multiple tiles on a first chip (a first one of the chipset/chiplets/multi-chip package/multichip modules in one or more of fig 1, 13, 15, 24-26) performing a computation for an inference operation on requested data (see the inferencing and data requesting mentioned above) while the other tiles of the multiple tiles on the first chip, the other tiles of the multiple tiles on a second chip (a second one of the chipset/chiplets/multi-chip package/multichip modules in one or more of fig 1, 13, 15, 24-26, different from the first chip), and the other tiles of the multiple tiles on a third chip (a third one of the chipset/chiplets/multi-chip package/multichip modules in one or more of fig 1, 13, 15, 24-26, different from the first and second chips) wait during a successive interval (see waiting in one or more of par 278, 303, and the consecutive/successive intervals in the examples above).

(Claim 12) wherein the multiple tiles are spanned across multiple chips of the multi-chip interface, comprising: the first chip of the multi-chip interface having corresponding tiles from the multiple tiles thereon; the second chip of the multi-chip interface having corresponding tiles from the multiple tiles thereon; and the third chip of the multi-chip interface having corresponding tiles from the multiple tiles thereon (see tiles and chipset/chiplets/multi-chip package/multichip modules in one or more of fig 1-3, 13, 15-16, 24-28).

(Claim 13) as the pipeline further advances, the first tile of the multiple tiles on the first chip completing a computation for an inference operation on requested data, a second tile of the multiple tiles on the first chip initiating another computation for an inference operation on the requested data, and other tiles of the multiple tiles on the second chip and other tiles of the multiple tiles on the third chip waiting during a successive interval thereon (see tiles and chipset/chiplets/multi-chip package/multichip modules in one or more of fig 1-3, 13, 15-16, 24-28; for waiting, see par 278, 303).
(Claim 14) wherein the first tile halts (execution resources may be enabled or disabled/waited/started/stopped/bypassed as needed (par 272, 276, 278, 294, 303, 309-311, 401-404)), allowing an output from the inference operation to be sent to a host interface (see one or more of par 61-62, 131-132, 155-162, 180, 183, 185, 247, 265, 267) of the multi-chip interface.

(Claim 15) as the pipeline further advances, the second tile of the multiple tiles in the first chip completing the computation for an inference operation on the requested data (see the inferencing and requesting mentioned above), and a first tile of the multiple tiles on the second chip initiating a computation for an inference operation on the requested data during the successive interval, and the other tiles of the multiple tiles on the second chip and other tiles of the multiple tiles on the third chip waiting during a successive interval (see tiles and chipset/chiplets/multi-chip package/multichip modules in one or more of fig 1-3, 13, 15-16, 24-28, 34).

(Claim 16) as the pipeline further advances, the first tile of the multiple tiles on the first chip completing the computation for an inference operation on the requested data (see the inferencing and requesting mentioned above), and a second tile of the multiple tiles on the second chip initiating a computation for an inference operation on the requested data during the successive interval, and the other tiles of the multiple tiles on the third chip waiting during a successive interval (see tiles and chipset/chiplets/multi-chip package/multichip modules in one or more of fig 1-3, 13, 15-16, 24-28, 34).

(Claim 17) wherein an output tile of the multi-chip interface executes a send instruction to send the output from the inference operation (see inferencing system/performing/operation/platform/functionality/processor/algorithms/computation in one or more of par 18, 101, 103, 170, 175, 184, 204, 213-221, 383, 398) to the host interface (see one or more of par 61-62, 131-132, 155-162, 180, 183, 185, 247, 265, 267).

(Claim 18) wherein the output tile of the multi-chip interface, in response to the send instruction, further executes a barrier instruction to stall (i.e., stall execution (par 421); execution resources may be enabled or disabled as needed (par 272); waiting for data (par 278); logic to start/stop/disable (par 294, 309, 324, 384, 404); wait synchronization (par 303); pause command processing (par 319)) the output tile during sending the output from the inference operation to the host interface.

(Claim 19) wherein the send instruction and the barrier instruction are in accordance with a fabric protocol (see one or more of: communication/link/interface protocol, interconnect fabric/protocol, fabric/protocol layers, inter-thread communication, fabric link/communication, channel expansion, channel convolution (fig 1, 3-4, 7-9, 16-18, 24, 38, 41)).

(Claim 20) wherein the first chip, the second chip, and the third chip of the multi-chip interface comprise an ASIC (see one or more of par 60, 233, 361-362, 455).

Correspondence Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL DINH, whose telephone number is 571-272-1890. If attempts to reach the examiner by telephone are unsuccessful, the examiner's Supervisor, Jack Chiang, can be reached at 571-272-7483. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAUL DINH/
Primary Examiner, Art Unit 2851
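The interval-by-interval behavior recited in claims 11-16 (one tile computing per interval while the remaining tiles across all three chips wait, followed by the send/barrier sequence of claims 17-18) can be modeled as a simple schedule. This is a sketch of the claim language only; the function names are invented, and nothing here comes from the application's or the Appu reference's actual implementation.

```python
# Illustrative model of the claimed pipelining: tiles spread across three DPE
# chips advance one tile per interval, chip by chip, while all other tiles
# wait; the output tile then sends to the host interface and stalls on a
# barrier until the send completes.

from typing import List, Tuple

def schedule(tiles_per_chip: int, chips: int = 3) -> List[Tuple[int, int]]:
    """Return (chip, tile) pairs in the order they compute, one per interval."""
    order = []
    for chip in range(chips):
        for tile in range(tiles_per_chip):
            order.append((chip, tile))
    return order

def run_pipeline(tiles_per_chip: int, chips: int = 3) -> List[str]:
    """Log one line per interval: which tile computes while the others wait."""
    log = []
    for interval, (chip, tile) in enumerate(schedule(tiles_per_chip, chips)):
        log.append(f"interval {interval}: chip {chip} tile {tile} computes; others wait")
    # Claims 17-18: the output tile sends the result to the host interface,
    # then executes a barrier to stall until the send completes.
    log.append("output tile: send -> host interface; barrier until send completes")
    return log
```

With two tiles per chip, `run_pipeline(2)` yields six compute intervals (one per tile across the three chips) plus the final send/barrier step, mirroring how the claims stagger computation across successive intervals.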

Prosecution Timeline

Mar 11, 2024: Application Filed
Mar 21, 2024: Response after Non-Final Action
Jan 15, 2026: Non-Final Rejection — §102, §112
Apr 10, 2026: Interview Requested
Apr 14, 2026: Applicant Interview (Telephonic)
Apr 15, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596300: SYSTEM AND METHOD FOR PERFORMING LOCAL CDU MODELING AND CONTROL IN A VIRTUAL FABRICATION ENVIRONMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12581745: INTEGRATED CIRCUIT AND SYSTEM FOR FABRICATING THE SAME (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572835: QUANTUM DEVICE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566911: MACHINE LEARNING TOOL FOR LAYOUT DESIGN OF PRINTED CIRCUIT BOARD (granted Mar 03, 2026; 2y 5m to grant)
Patent 12562603: ELECTROSTATIC SHIELD FOR WIRELESS SYSTEMS (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 93% (+3.9%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 1047 resolved cases by this examiner. Grant probability derived from career allow rate.
