Prosecution Insights
Last updated: April 19, 2026
Application No. 18/385,476

MAXIMIZING BANDWIDTH UTILIZATION BY SELECTING APPROPRIATE MODE OF OPERATION FOR PCIe CARD

Status: Final Rejection (§102)
Filed: Oct 31, 2023
Examiner: LEWIS-TAYLOR, DAYTON A.
Art Unit: 2181
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 4 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 7m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 81% (568 granted / 701 resolved; +26.0% vs TC avg; above average)
Interview Lift: +3.4% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline; 24 currently pending)
Total Applications: 725 (career history, across all art units)

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 22.1% (-17.9% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 701 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1-20 are pending.

3. This office action is in response to the Applicant's communication filed 12/31/2025 in response to the PTO Office Action mailed 10/01/2025. The Applicant's remarks and amendments to the claims and/or the specification were considered with the results that follow.

Response to Arguments

4. Applicant's arguments with respect to the amended independent claims have been fully considered but are not persuasive. Applicant's arguments are summarized as:

1) "Applicant respectfully asserts that Patil does not disclose 'switching a mode of operation of said PCIe card based on said predicted bandwidth utilization of said PCIe link' as recited in claim 1 and similarly in claims 8 and 15."

2) "Applicant further asserts that Patil does not disclose 'predicating a bandwidth utilization of said PCIe link based on said measured bandwidth utilization of said PCIe link using a machine learning model trained to predict bandwidth utilizations of PCIe links' as recited in claim 1 and similarly in claims 8 and 15."

As per argument 1, in response to applicant's argument that Patil does not disclose "switching a mode of operation of said PCIe card based on said predicted bandwidth utilization of said PCIe link", the Examiner respectfully disagrees. Patil discloses in par. [0035] and Fig. 1 that a first component interface 110 can receive a PCIe 4.0 card for establishing an asymmetric connection between the host, such as processor 102, and a GPU hardware accelerator, such as device 106. By using the PCIe 4.0 card, a first connection is defined between the host 102 (CPU 0) and the accelerator 106 with the use of a set of bus lanes of the data bus (Patil, par. [0061]).

"Switching a mode of operation of the PCIe card based on said predicted bandwidth utilization of said PCIe link" is further established in that each bus lane of the first component interface 110 is dynamically configurable, which implies switching/changing, to be a data ingress lane for receiving ingress data provided to host 102 or a data egress lane for transmitting egress data provided by host 102. At least for component interface 110, the system host can use the software control loop 130 and the software agent to analyze the data transfer patterns and generate predictions indicating that the bandwidth requirements of a host-accelerator interface 110 are entirely or substantially asymmetric (Patil, par. [0058, 0061]).

As per argument 2, in response to applicant's argument that Patil does not disclose "predicating a bandwidth utilization of said PCIe link based on said measured bandwidth utilization of said PCIe link using a machine learning model trained to predict bandwidth utilizations of PCIe links", the Examiner respectfully disagrees. Patil discloses in par. [0035] and Fig. 1 that a first component interface 110 can receive a PCIe 4.0 card for establishing an asymmetric connection between the host, such as processor 102, and a GPU hardware accelerator, such as device 106, which implies that the first component interface 110 is of a PCIe standard. Furthermore, Patil (par. [0047-0048, 0050-0051]) discloses that a connection between devices of system 100 can have a particular asymmetric bandwidth requirement that is based on predictions or inferences that are generated from analyzing data processing operations at system 100. The system host provides information describing observed data traffic at system 100 to a software agent managed by the software control loop 130.
In some implementations, the software agent is represented by a data processing module that includes a trained machine learning model, such as a machine learning engine or statistical analysis engine. The data processing module is configured to analyze information describing data traffic at system 100. The software control loop 130 causes the trained software agent (or machine learning model) to monitor and analyze data traffic at one or more component interfaces of system 100. As a result, the teachings of Patil could be used to implement the present claim limitation.

Claim Rejections - 35 USC § 102

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

7. Claims 1, 4-8, 11-15 and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Patil et al. (US Pub. No. 2020/0371984 A1, hereinafter "Patil" – IDS Submission).

Referring to claim 1, Patil discloses a computer-implemented method for maximizing bandwidth utilization of Peripheral Component Interconnect Express (PCIe) links, the method comprising:

measuring a bandwidth utilization (Patil – par. [0019] discloses implementing asymmetric configurations of bus lanes in data bus connections between devices of a system. The asymmetric configuration is based on asymmetric bandwidth requirements that are generated by a system host using inferences or predictions learned from analyzing data traffic patterns of the system. Predictive analysis of the traffic patterns can yield asymmetric bandwidth requirements that accurately reflect traffic flow at component interfaces of the system.) of a PCIe link involving a PCIe card (Patil – par. [0035] discloses a first component interface 110 can receive a PCIe 4.0 card for establishing an asymmetric connection between the host, such as processor 102, and a GPU hardware accelerator, such as device 106.);

predicating a bandwidth utilization of said PCIe link based on said measured bandwidth utilization of said PCIe link using a machine learning model trained to predict bandwidth utilizations of PCIe links (Patil – par. [0047-0048, 0050-0051] disclose a connection between devices of system 100 can have a particular asymmetric bandwidth requirement that is based on predictions or inferences that are generated from analyzing data processing operations at system 100. The system host provides information describing observed data traffic at system 100 to a software agent managed by the software control loop 130. In some implementations, the software agent is represented by a data processing module that includes a trained machine learning model, such as a machine learning engine or statistical analysis engine. The data processing module is configured to analyze information describing data traffic at system 100.); and

switching a mode of operation of said PCIe card based on said predicted bandwidth utilization of said PCIe link (Patil – Fig. 2 & par. [0061] disclose a system 100 is operable to configure a first set of bus lanes of a first data bus based on the asymmetric bandwidth requirement of the first connection (208)…. The first set of bus lanes of the first data bus are configured to allocate a different number of the bus lanes in the first set of bus lanes to data egress from the host 102 than to data ingress to the host 102. For example, relative to the host 102, each bus lane can be dynamically configurable as a data ingress lane for receiving ingress data provided to host 102 or a data egress lane for transmitting egress data provided by host 102.).

Referring to claim 4, Patil discloses the method as recited in claim 1 further comprising:

determining if a mode of operation of said PCIe card at a time said bandwidth utilization of said PCIe link involving said PCIe card is predicted to exceed a threshold value is a first mode of operation in response to said predicted bandwidth utilization of said PCIe link exceeding said threshold value (Patil – par. [0054] discloses a system 100 uses the software control loop 130 to obtain predictions about data traffic patterns, including data transfer rates, ingress bandwidth requirements, egress bandwidth requirements, and relative sizes of data being routed via certain interfaces for a given workload. Specifically, the control loop uses the software agent to compute asymmetric bandwidth requirements of connections at component interface 110. Based on the computed bandwidth requirements, the software agent is operable to output a predicted ratio of ingress to egress bus lanes that can most efficiently handle the predicted data traffic patterns. For example, the asymmetric bandwidth requirement of component interface 110 can include a 3:1 ratio of ingress bus lanes relative to egress bus lanes. This ratio enables the ingress signaling bandwidth to be dynamically adjusted or increased to meet the example requirements for certain image recognition workloads that may range from 250 GB to 300 GB.); and

determining if there is currently traffic on said PCIe link involving said PCIe card in response to said mode of operation of said PCIe card being in a second mode of operation (Patil – par. [0055] discloses the maximum signaling bandwidth in any one direction (e.g., ingress or egress) is limited to the symmetric signaling bandwidth or the static data link configuration at the system's component interfaces. For example, referencing the signaling bandwidths mentioned above, the ingress bandwidth in these other systems will be limited to 200 GB based on their symmetric data links even though the actual data flow via the ingress path greatly exceeds 200 GB.).

Referring to claim 5, Patil discloses the method as recited in claim 4 further comprising:

routing traffic to a different PCIe link in response to currently having traffic on said PCIe link involving said PCIe card (Patil – par. [0054] discloses a system 100 uses the software control loop 130 to obtain predictions about data traffic patterns, including data transfer rates, ingress bandwidth requirements, egress bandwidth requirements, and relative sizes of data being routed via certain interfaces for a given workload. Specifically, the control loop uses the software agent to compute asymmetric bandwidth requirements of connections at component interface 110. Based on the computed bandwidth requirements, the software agent is operable to output a predicted ratio of ingress to egress bus lanes that can most efficiently handle the predicted data traffic patterns. For example, the asymmetric bandwidth requirement of component interface 110 can include a 3:1 ratio of ingress bus lanes relative to egress bus lanes. This ratio enables the ingress signaling bandwidth to be dynamically adjusted or increased to meet the example requirements for certain image recognition workloads that may range from 250 GB to 300 GB.); and

selecting a configuration setting of said PCIe card to implement said first mode of operation upon routing traffic to said different PCIe link (Patil – par. [0035] discloses a system 100 can include multiple component interfaces and each component interface is configured to allow data traffic to flow symmetrically or asymmetrically between components at the interface.).

Referring to claim 6, Patil discloses the method as recited in claim 1 further comprising:

determining if a mode of operation of said PCIe card is a second mode of operation in response to said predicted bandwidth utilization of said PCIe link not exceeding a threshold value (Patil – par. [0055] discloses the maximum signaling bandwidth in any one direction (e.g., ingress or egress) is limited to the symmetric signaling bandwidth or the static data link configuration at the system's component interfaces. For example, referencing the signaling bandwidths mentioned above, the ingress bandwidth in these other systems will be limited to 200 GB based on their symmetric data links even though the actual data flow via the ingress path greatly exceeds 200 GB.); and

determining if there is currently traffic on said PCIe link involving said PCIe card in response to said mode of operation of said PCIe card being in a first mode of operation (Patil – par. [0054] discloses a system 100 uses the software control loop 130 to obtain predictions about data traffic patterns, including data transfer rates, ingress bandwidth requirements, egress bandwidth requirements, and relative sizes of data being routed via certain interfaces for a given workload. Specifically, the control loop uses the software agent to compute asymmetric bandwidth requirements of connections at component interface 110. Based on the computed bandwidth requirements, the software agent is operable to output a predicted ratio of ingress to egress bus lanes that can most efficiently handle the predicted data traffic patterns. For example, the asymmetric bandwidth requirement of component interface 110 can include a 3:1 ratio of ingress bus lanes relative to egress bus lanes. This ratio enables the ingress signaling bandwidth to be dynamically adjusted or increased to meet the example requirements for certain image recognition workloads that may range from 250 GB to 300 GB.).

Referring to claim 7, Patil discloses the method as recited in claim 6 further comprising:

routing traffic to a different PCIe link in response to currently having traffic on said PCIe link involving said PCIe card (Patil – par. [0055] discloses the maximum signaling bandwidth in any one direction (e.g., ingress or egress) is limited to the symmetric signaling bandwidth or the static data link configuration at the system's component interfaces. For example, referencing the signaling bandwidths mentioned above, the ingress bandwidth in these other systems will be limited to 200 GB based on their symmetric data links even though the actual data flow via the ingress path greatly exceeds 200 GB.); and

selecting a configuration setting of said PCIe card to implement said second mode of operation upon routing traffic to said different PCIe link (Patil – par. [0035] discloses a system 100 can include multiple component interfaces and each component interface is configured to allow data traffic to flow symmetrically or asymmetrically between components at the interface.).

Referring to claims 11 and 18, note the rejections of claim 4 above. The instant claims recite substantially the same limitations as the claims rejected above and are therefore rejected under the same prior-art teachings.

Referring to claims 12 and 19, note the rejections of claim 5 above. The instant claims recite substantially the same limitations as the claims rejected above and are therefore rejected under the same prior-art teachings.

Referring to claims 13 and 20, note the rejections of claim 6 above. The instant claims recite substantially the same limitations as the claims rejected above and are therefore rejected under the same prior-art teachings.

Referring to claim 14, note the rejections of claim 7 above. The instant claim recites substantially the same limitations as the claim rejected above and is therefore rejected under the same prior-art teachings.

Allowable Subject Matter

8. Claims 2, 3, 9, 10, 16 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The examiner finds that the prior art of record, taken alone or in combination, fails to teach and/or fairly suggest "selecting a configuration setting of said PCIe card to implement a first mode of operation in response to said predicted bandwidth utilization of said PCIe link exceeding a threshold value, wherein said first mode of operation is a later version of PCIe than a second mode of operation", in combination with the other recited limitations in dependent claims 2, 9 and 16.

The examiner finds that the prior art of record, taken alone or in combination, fails to teach and/or fairly suggest "selecting a configuration setting of said PCIe card to implement a second mode of operation in response to said predicted bandwidth utilization of said PCIe link not exceeding a threshold value, wherein said second mode of operation is an earlier version of PCIe than a first mode of operation", in combination with the other recited limitations in dependent claims 3, 10 and 17.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAYTON LEWIS-TAYLOR whose telephone number is (571) 270-7754. The examiner can normally be reached Monday through Thursday, 8 AM to 4 PM, Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Idriss Alrobaye, can be reached at 571-270-1023. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DAYTON LEWIS-TAYLOR/
Examiner, Art Unit 2181

/Farley Abad/
Primary Examiner, Art Unit 2181
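The claim-1 method at issue (measure link utilization, predict with a trained model, switch the card's mode), combined with the threshold and traffic-rerouting logic of the dependent claims the examiner indicated as allowable, can be sketched as follows. This is a minimal illustration only: the mode names, the rerouting helper, and the trend model are hypothetical stand-ins, not the applicant's or Patil's implementation.

```python
from dataclasses import dataclass
from typing import Protocol


class BandwidthModel(Protocol):
    """Any model trained to predict PCIe link bandwidth utilization."""
    def predict(self, measured_utilization: float) -> float: ...


@dataclass
class PCIeCard:
    mode: str  # hypothetical modes: "gen3" (earlier version) or "gen4" (later)
    has_traffic: bool


def route_traffic_to_other_link(card: PCIeCard) -> None:
    """Hypothetical helper: move live traffic to a different PCIe link."""
    card.has_traffic = False


def select_mode(card: PCIeCard, measured: float, model: BandwidthModel,
                threshold: float = 0.8) -> str:
    """Measure -> predict via trained model -> switch mode if needed.

    Per the dependent-claim logic summarized in the action, traffic on
    the link is rerouted before the card's configuration is changed.
    """
    predicted = model.predict(measured)
    target = "gen4" if predicted > threshold else "gen3"
    if card.mode != target:
        if card.has_traffic:
            route_traffic_to_other_link(card)
        card.mode = target  # select configuration setting for the new mode
    return card.mode


class TrendModel:
    """Stand-in for the trained ML model: scales the last measurement."""
    def __init__(self, gain: float) -> None:
        self.gain = gain

    def predict(self, measured_utilization: float) -> float:
        return measured_utilization * self.gain
```

For example, a card in "gen3" mode with a measured utilization of 0.7 and a model predicting 30% growth (0.91 > 0.8) would have its traffic rerouted and be switched to "gen4".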

Prosecution Timeline

Oct 31, 2023
Application Filed
Mar 06, 2025
Non-Final Rejection — §102
Jun 12, 2025
Response Filed
Jun 14, 2025
Final Rejection — §102
Jul 23, 2025
Response after Non-Final Action
Sep 18, 2025
Request for Continued Examination
Sep 21, 2025
Response after Non-Final Action
Sep 28, 2025
Non-Final Rejection — §102
Dec 31, 2025
Response Filed
Feb 21, 2026
Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585491: PROCESSING OF INTERRUPTS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585610: COMPUTING SYSTEM, PCI DEVICE MANAGER AND INITIALIZATION METHOD THEREOF (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578901: CLOCK DOMAIN CROSSING (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572496: HOST FABRIC ADAPTER WITH FABRIC SWITCH (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572497: DETECTION OF A STUCK DATA LINE OF A SERIAL DATA BUS (granted Mar 10, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 81%
With Interview: 84% (+3.4%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 701 resolved cases by this examiner. Grant probability derived from career allow rate.
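The headline probabilities above are simple ratios of the examiner's career counts reported on this page; a quick sanity check (the rounding convention is an assumption):

```python
granted, resolved = 568, 701  # career counts reported above

allow_rate = granted / resolved * 100  # career allow rate in percent
with_interview = allow_rate + 3.4      # reported interview lift, in points

print(f"Grant probability:  {allow_rate:.0f}%")       # 81%
print(f"With interview:     {with_interview:.0f}%")   # 84%
```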
