Prosecution Insights
Last updated: April 19, 2026
Application No. 18/484,443

HARDWARE DISTRIBUTED ARCHITECTURE IN A DATA TRANSFORM ACCELERATOR

Status: Non-Final OA (§103)
Filed: Oct 10, 2023
Examiner: HUYNH, KIM T
Art Unit: 2184
Tech Center: 2100 (Computer Architecture & Software)
Assignee: MaxLinear, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 82% (above average; 580 granted / 703 resolved; +27.5% vs Tech Center avg)
Interview Lift: +8.2% (moderate), based on resolved cases with interview
Avg Prosecution: 2y 10m; 24 applications currently pending
Career History: 727 total applications across all art units

Statute-Specific Performance

§101: 3.1% (-36.9% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 37.1% (-2.9% vs TC avg)
§112: 4.0% (-36.0% vs TC avg)
Tech Center averages are estimates; based on career data from 703 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/2/2026 has been entered.

Claim Rejections - 35 USC § 103

2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Matthews et al. (US Patent No. US20277518) in view of Jones et al. (US Pub. No.
US20120269196).

As per claim 1, Matthews discloses a method comprising: obtaining data to process using at least one data transform operation (col. 21, lines 9-11: packets arrive at the processing components, which are configured to perform a packet processing operation (e.g., a first data transform operation) on the packet), the at least one data transform operation relating to at least one of: data compression, decompression, encryption, decryption, authentication tag (MAC) generation, authentication, data deduplication hash generation, and non-volatile memory express (NVMe) protection information (PI) generation, NVMe PI (col. 19, line 55-col. 20, line 2: packet processing logic may similarly be configured to send some or all of the visibility packets to an outgoing interface such as an Ethernet port, external CPU, sideband interface, and so forth) verification (col. 18, lines 25-32: a visibility component 160 may automatically produce logs or notifications (e.g., verification) based on the visibility packets 105); determining a processing path for the data to traverse (col. 19, lines 40-50: visibility queues are provided to store packets containing visibility tags; the original packet follows its normal path, as well as traversing the visibility path (e.g., for non-terminal events such as non-critical delay monitoring)) at least a first data transform engine (fig. 1, packet processing component 150A) and a second data transform engine (fig. 1, packet processing component 150B); directing the data to the first data transform engine, the first data transform engine to perform a first data transform operation on the data (col. 21, lines 9-11: one or more packet processing components individually connected to a system communication channel, the one or more packet processing components individually configured to perform a packet processing operation on the packet); and directing the data to the second data transform engine, the second data transform engine to perform a second data transform operation (col. 21, lines 9-11: packets arrive at the processing components, which are configured to perform a packet processing operation (e.g., a second data transform operation) on the packet) on the data (col. 22, lines 57-67: wherein the one or more packet processing components are individually configured to direct the packet to a next component using the processing path).

Matthews discloses all of the above limitations but does not explicitly disclose "a method comprising: obtaining data to process using at least one algorithmic data transform operation, wherein each of the first data transform engine and the second data transform engine performs a respective one of said algorithmic data transform operations." However, Jones discloses this (paragraph 61: the data processor of the device operates on data contents from the first or second party such that a customized protocol may be used for the data transmission; specific data intended for a particular client may be attached to data that every client receives; if the specific data is sensitive, it may be encrypted so that only the specific client knows the corresponding decryption key). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jones with those of Matthews so as to provide a method and device that can manage traffic efficiently for the provision of multimedia services, yielding the predictable result of efficient control and enhanced system performance.
As per claim 2, Matthews discloses the at least one data transform operation further relating to real-time verification (col. 18, lines 25-32: the visibility component 160 may receive such visibility packets 105 in real time, and may automatically produce logs or notifications (e.g., verification) based on the visibility packets 105).

As per claim 3, Matthews discloses the first data transform engine being implemented using hardwired logic, the second data transform engine being implemented as a programmable data transform engine (col. 39, lines 18-22: computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices, or any other device that incorporates hard-wired and/or program logic to implement the techniques).

As per claim 4, Matthews discloses the first data transform engine and the second data transform engine being implemented on a single die (col. 18, lines 8-12: the visibility component 160 may be a sidecar component (either inside the chip or attached to the chip) that may not have the ability to directly modify packets in-flight, but may collect state and/or generate instructions (such as healing actions) based on observed state).

As per claim 5, Matthews discloses the first data transform engine being implemented on a first chiplet, and the second data transform engine being implemented on a second chiplet (col. 18, lines 8-12, cited above).

As per claim 6, Matthews discloses the data being directed to the first data transform engine or to the second data transform engine based on a user input (col. 41, lines 33-63: another type of user input device 814 is cursor control 816, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 804 and for controlling cursor movement on display 812).

As per claim 7, Matthews discloses wherein directing the data to the first data transform engine comprises directing a first subset of the data to the first data transform engine, the first data transform engine to perform the first data transform operation on the first subset of the data, and wherein directing the data to the second data transform engine comprises directing a second subset of the data to the second data transform engine, the second data transform engine to perform the second data transform operation on the second subset of the data (col. 16, lines 1-13: the visibility component 160 is an inline component inside device 100 that operates on information parameters, indicating when duplicate packets should be made or updating statistics before transmitting the packet; it may not have the ability to directly modify packets in-flight, but may collect state and/or generate instructions (such as healing actions) based on observed state).

As per claim 8, Matthews discloses wherein the data is directed to the first data transform engine based on a particular transform command received from a host controller (fig. 1, visibility component 160), the first data transform engine being configured to execute the particular transform command (col. 18, lines 1-6: the visibility component 160 is an inline component inside device 100 that operates on information provided by the traffic manager to determine the next visibility action, such as, without limitation, annotating packets, reconfiguring device parameters, indicating when duplicate packets should be made, or updating statistics).

As per claim 9, Matthews discloses the particular transform command including at least one of control information or metadata that is used to direct the data to the first data transform engine (col. 6, lines 34-37: addressing information, flags, labels, and other metadata used for determining how to handle a data unit are typically embedded within a portion of the data unit known as the header).

As per claim 10, Matthews discloses wherein the control information includes instructions from the host controller related to processing the data (col. 3, lines 52-55: the visibility component may analyze the packet before sending the packet out, and pass configuration instructions or other information back to a packet processor based on the packet).

As per claim 11, Matthews discloses wherein the metadata includes information on one or more types of transforms or transform commands to perform on the data (col. 6, lines 34-46: protocols allow for arbitrary numbers of fields, with some or all of the fields being preceded by type information that explains to a node the meaning of the field).

As per claim 12, Matthews discloses wherein determining the processing path for the data to traverse at least the first data transform engine and the second data transform engine includes determining a plurality of data transform commands to be performed based on the control information or the metadata (col. 19, lines 40-50: visibility queues are provided to store packets containing visibility tags; visibility packets may be linked to the visibility queue only (i.e., a single path) when generated on account of certain terminal events (e.g., dropping), or may be duplicated to the visibility queue (i.e., copied or mirrored) such that the original packet follows its normal path as well as traversing the visibility path (e.g., for non-terminal events such as non-critical delay monitoring)).

As per claim 13, Matthews discloses wherein the data is received from a host controller, the method to be performed by an accelerator (col. 25, line 60-col. 26, line 5: the special state of the queue may have various implications, such as accelerating the dequeuing of the queue, activating a background process that reads the packet links and frees up the associated buffers to accelerate a draining process, blocking additional enqueues to the queue, disabling traffic flow control or shaping, and so forth).

As per claim 14, Matthews discloses the method further comprising causing the data to be provided to the host controller in response to a performance of the first data transform operation and the second data transform operation on the data (col. 25, line 60-col. 26, line 5, cited above; in another embodiment, activation of a special queue state may instead occur at other times, such as during performance of a background threshold monitoring task).

As per claim 15, Matthews discloses wherein directing the data to the first data transform engine includes identifying the first data transform engine from a pool of data transform engines (col. 18, lines 47-50: the visibility component 160 may dynamically change configuration settings 115 of device 100 directly in response to observing visibility packets having certain characteristics).
As per claim 16, Matthews discloses wherein directing the data to the first data transform engine and directing the data to the second data transform engine comprises determining that no structural contention exists in an allocation of the first data transform engine and the second data transform engine (col. 18, lines 1-13: the visibility component 160 operates on information provided by the traffic manager to determine next visibility actions such as, without limitation, annotating packets, reconfiguring device parameters, indicating when duplicate packets should be made, or updating statistics, before transmitting the packet to ports).

As per claim 17, Matthews discloses wherein directing the data to the first data transform engine includes: determining a criteria for a transform command associated with the first data transform engine (col. 18, lines 1-13, cited above); and selecting the first data transform engine based on the determined criteria for the transform command exceeding a threshold priority (col. 21, lines 14-16: certain types of packets are deemed visibility ineligible; for example, the user may only want to have visibility on certain high-priority flows).

As per claim 18, Matthews discloses the determined criteria indicating that the transform command has a higher priority than a second transform command, the second transform command being in a hold state (fig. 3 and col. 21, lines 12-19: block 315 comprises determining that a packet is eligible for visibility processing; for various reasons, certain types of packets are deemed visibility ineligible, and a user may only want to have visibility on certain high-priority flows; as another example, the incoming packet may be a visibility packet from an upstream device, on which the receiving device may thus elect not to perform additional visibility processing).

As per claim 19, Matthews discloses the determined criteria being related to a power profile, or to a realization of a service level agreement for commands belonging to different classes of commands (fig. 3 and col. 21, lines 12-22: detecting an expiration tag in the dequeued packet, receiving a signal from queue management logic, reading a status of the queue, comparing a queue delay to an expiration delay threshold, and so forth).

As per claim 20, Matthews discloses a data accelerator (col. 25, line 60-col. 26, line 5: the special state of the queue may have various implications, such as accelerating the dequeuing of the queue, activating a background process that reads the packet links and frees up the associated buffers to accelerate a draining process, blocking additional enqueues to the queue, disabling traffic flow control or shaping, and so forth)
comprising: an interface connection (e.g., ports) to obtain data to be processed from a host controller (fig. 1, visibility component 160) (col. 19, line 65-col. 20, line 2: packet processing logic may similarly be configured to send some or all of the visibility packets to an outgoing interface, such as an Ethernet port, external CPU, sideband interface, and so forth); one or more data transform engines individually configured to perform a specific at least one data transform operation on the data (fig. 3 and col. 20, lines 59-67: each of the processes described in connection with the functional blocks described below may be implemented using one or more computer programs, other software elements, and/or digital logic in either a general-purpose or a special-purpose computer, while performing data retrieval, transformation, and storage operations that involve interacting with and transforming the physical state of memory of the computer), the at least one data transform operation relating to at least one of: data compression, decompression, encryption, decryption, authentication tag (MAC) generation, authentication, data deduplication hash generation, and non-volatile memory express (NVMe) protection information (PI) generation, NVMe PI (col. 19, line 55-col. 20, line 2: packet processing logic may similarly be configured to send some or all of the visibility packets to an outgoing interface such as an Ethernet port, external CPU, sideband interface, and so forth) verification (col. 18, lines 25-32: a visibility component 160 may automatically produce logs or notifications (e.g., verification) based on the visibility packets 105); and a queueing system to determine a processing path of the data from the interface connection and through the one or more data transform engines, wherein the one or more data transform engines are individually configured to direct the data to a next engine using the processing path (fig. 3, col. 22, lines 57-67: the packet may be processed in association with the queue to which it was assigned; the processing may involve, for instance, determining where to send the packet next, manipulating the packet, dropping the packet, adding information to the packet, or any other suitable processing steps; as a result of the processing, the packet will typically be assigned to yet another queue for further processing, thereby returning to block 320, or forwarded out of the device to a next destination).

Matthews discloses all of the above limitations but does not explicitly disclose "a method comprising: obtaining data to process using at least one algorithmic data transform operation, wherein each of the first data transform engine and the second data transform engine performs a respective one of said algorithmic data transform operations." However, Jones discloses this (paragraph 61, cited above).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jones with those of Matthews so as to provide a method and device that can manage traffic efficiently for the provision of multimedia services, yielding the predictable result of efficient control and enhanced system performance.

Response to Amendment

4. Applicant's amendment filed on 1/2/2026 has been fully considered but is moot in view of the new ground(s) of rejection.

5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kutch et al. (Pub. No. US 2021/0117360) discloses that a computing architecture with targeted acceleration capabilities may be needed to achieve the target performance for complex networking workloads.

Conclusion

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIM T HUYNH, whose telephone number is (571) 272-3635, or via e-mail addressed to kim.huynh3@uspto.gov. The examiner can normally be reached M-F, 7:00 AM to 4:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henry Tsai, can be reached at (571) 272-4176 or via e-mail addressed to Henry.Tsai@USPTO.GOV. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300 for both regular and After Final communications. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist, whose telephone number is (571) 272-2100.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/K. T. H./ Examiner, Art Unit 2184
/HENRY TSAI/ Supervisory Patent Examiner, Art Unit 2184
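The flow recited in claim 1 (obtain data, determine a processing path across at least a first and a second data transform engine, then direct the data through each engine in turn) can be illustrated with a minimal software sketch. Everything below is an assumption for illustration only: the engine choices (zlib compression followed by a SHA-256 authentication tag), the function names, and the command strings are invented, and nothing here comes from the application or the cited references.

```python
# Illustrative sketch of the claim-1 flow: obtain data, map transform
# commands (metadata) to an ordered processing path of engines, then
# direct the data through each engine in sequence.
import hashlib
import zlib


def compress_engine(data: bytes) -> bytes:
    """First (hypothetical) data transform engine: lossless compression."""
    return zlib.compress(data)


def mac_engine(data: bytes) -> bytes:
    """Second (hypothetical) engine: append a SHA-256 digest as a MAC-like tag."""
    return data + hashlib.sha256(data).digest()


def determine_processing_path(commands):
    """Resolve command metadata into an ordered list of engine callables."""
    engines = {"compress": compress_engine, "mac": mac_engine}
    return [engines[c] for c in commands]


def run_accelerator(data: bytes, commands) -> bytes:
    """Direct the data through each engine along the processing path."""
    for engine in determine_processing_path(commands):
        data = engine(data)  # output of one engine feeds the next
    return data


out = run_accelerator(b"example payload" * 4, ["compress", "mac"])
```

In hardware, of course, the engines would be hardwired or programmable logic blocks and the path would be realized by a queueing system rather than a Python loop; the sketch only makes the data-flow ordering concrete.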

Prosecution Timeline

Oct 10, 2023: Application Filed
Mar 11, 2025: Non-Final Rejection (§103)
Jun 18, 2025: Response Filed
Sep 25, 2025: Final Rejection (§103)
Jan 02, 2026: Request for Continued Examination
Jan 06, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602342: SMALL FORM FACTOR PC WITH BMC AND EXTENDED FUNCTIONALITY (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591490: SEMICONDUCTOR DEVICE AND LINK CONFIGURING METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585608: ARCHITECTURE TO ACHIEVE HIGHER THROUGHPUT IN SYMBOL TO WIRE STATE CONVERSION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585607: BUS MODULE AND SERVER (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579087: IN-BAND INTERRUPT SIGNAL FOR A COMMUNICATION INTERFACE (granted Mar 17, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82% (91% with interview, +8.2%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 703 resolved cases by this examiner. Grant probability derived from career allow rate.
