Prosecution Insights
Last updated: April 19, 2026
Application No. 18/327,813

DYNAMIC REALLOCATION OF DISPLAY MEMORY BANDWIDTH BASED ON SYSTEM STATE

Non-Final OA — §102, §103
Filed: Jun 01, 2023
Examiner: FOSTER, THOMAS JOHN
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: ATI Technologies ULC
OA Round: 3 (Non-Final)
Grant Probability: 95% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 95% (19 granted / 20 resolved; +33.0% vs TC avg; above average)
Interview Lift: +7.1% across resolved cases with interview (moderate)
Typical Timeline: 2y 5m average prosecution; 17 applications currently pending
Career History: 37 total applications across all art units

Statute-Specific Performance

§101: 0.8% (-39.2% vs TC avg)
§102: 22.7% (-17.3% vs TC avg)
§103: 72.7% (+32.7% vs TC avg)
§112: 2.3% (-37.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 20 resolved cases.
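The headline figures above reduce to simple ratios. A short sketch recomputing them from the underlying counts (illustrative only; the tool's exact methodology is not published):

```python
# Recompute the examiner statistics shown above from the underlying counts.
granted, resolved = 19, 20

career_allow_rate = granted / resolved       # 19/20 = 0.95 -> "95% Career Allow Rate"
tc_avg_estimate = career_allow_rate - 0.330  # implied by "+33.0% vs TC avg"

print(f"Career allow rate: {career_allow_rate:.0%}")  # Career allow rate: 95%
print(f"Implied TC average: {tc_avg_estimate:.0%}")   # Implied TC average: 62%
```

Note that the "99% With Interview" figure is not a simple sum of 95% and the +7.1% lift, so the tool presumably applies the lift to a different baseline or caps the result.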

Office Action

§102, §103

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Allowable Subject Matter

Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. As per claims 13 and 20, these claims have limitations that substantially mirror those of claim 6, and thus are allowable for similar reasons.

Response to Arguments

Applicant’s arguments, see remarks pg. 7, filed 02/26/2026, with respect to the rejection(s) of claim(s) 1-5, 7-12, and 14-19 under §102 and §103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Wasserman.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 4-5, and 18-19 are rejected under 35 U.S.C. 102(a)(1) or 102(a)(2), or both, as being anticipated by Wasserman (Pub. No. US 20030043155 A1).
As per claim 1, Wasserman anticipates the claimed: An apparatus comprising: a control circuit, wherein responsive to a condition being satisfied for changing one or more operating parameters of a memory subsystem, the control circuit is configured to: (Wasserman teaches a circuit to control the process. Wasserman [0063]: “In some embodiments, media processor 14 and hardware accelerator 18 may be comprised within the same integrated circuit. In other embodiments, portions of media processor 14 and/or hardware accelerator 18 may be comprised within separate integrated circuits.” Wasserman [0102]: “The requesters 709 for each channel may be configured to assert a request when certain conditions occur. For example, a channel's requester 709 may begin asserting a request after a vertical blanking period has finished and continue asserting requests until the beginning of the next vertical blanking interval. However, in many embodiments (e.g., embodiments in which the display information buffer 701 is configured to output bursts of graphics data), it may be preferable to have each channel structure its requests so that it can prefetch data. By prefetching data, each channel may be able to ensure that its data needs are met by taking into account the latency of the request process and the delay that may result from having to wait for another channel's request(s) to be served. Thus, in these embodiments, the requesters 709 may be configured to begin asserting requests at some time before the end of a vertical blanking interval and to cease asserting requests at some time before the beginning of the next vertical blanking interval.” The assertion of a request after a vertical blanking period is the condition for changing the operating parameters.)
send a first indication distinct from memory access transactions to a communication fabric that causes an increase in memory bandwidth, of the memory subsystem, allocated to a display controller; and (The applicant shows a communication fabric as being an interconnected group of memory and processors. Wasserman fig. 4 shows a connection of memory, controllers, MPUs, and interfaces. This is the communication fabric. Wasserman [0136]: “In order to minimize the bandwidth lost when switching channels, the arbiter 805 may be configured to arbitrate between the request streams so that the requests the arbiter sends to the frame buffer 22 alternate between even and odd requests more consistently. If the individual request streams each alternate between even and odd requests, one way to increase the bandwidth is to forward a certain number of consecutive requests from one channel before forwarding to any other channel's requests. For example, in one embodiment, the arbiter 805 may have a "lockstep" mode where the arbiter forwards at least two consecutive requests (even followed by odd or odd followed by even) from one channel before forwarding another channel's requests. For example, if the arbiter 805 is configured to determine which channel is neediest based on the number of valid blocks in the channels' block queues, the next "neediness" comparison may not be performed until after two consecutive requests have been forwarded from the current neediest channel.” The consecutive requests are the first indication to increase the bandwidth. The request is distinct from a memory access transaction and is done by an arbiter, addressing the arguments in the middle of applicant’s remarks pg. 7. Wasserman concerns a display controller to implement this, as indicated by the title “Multi-channel, Demand-driven Display Controller”.) send a second indication to the display controller which causes the display controller to prefetch display data from the memory subsystem.
(Wasserman teaches a request to prefetch data from a display information buffer. Wasserman [0102]: “The requesters 709 for each channel may be configured to assert a request when certain conditions occur. For example, a channel's requester 709 may begin asserting a request after a vertical blanking period has finished and continue asserting requests until the beginning of the next vertical blanking interval. However, in many embodiments (e.g., embodiments in which the display information buffer 701 is configured to output bursts of graphics data), it may be preferable to have each channel structure its requests so that it can prefetch data. By prefetching data, each channel may be able to ensure that its data needs are met by taking into account the latency of the request process and the delay that may result from having to wait for another channel's request(s) to be served. Thus, in these embodiments, the requesters 709 may be configured to begin asserting requests at some time before the end of a vertical blanking interval and to cease asserting requests at some time before the beginning of the next vertical blanking interval.” The request that leads to prefetching data is the second indication.)

As per claim 4, Wasserman anticipates the claimed: The apparatus as recited in claim 1, wherein the control circuit is further configured to send a third indication that causes a decrease in memory bandwidth allocated to the display controller. (Wasserman [0135]: “This example may be generalized to the situation where the arbiter forwards one request from channel A for every N requests from channel B (e.g., because channel B is N times faster than channel A). Since bandwidth reduction may occur once every N+1 requests, less bandwidth may be lost when switching between the channels' requests as N increases. Conversely, as N decreases, the bandwidth loss may become more significant. For example, if the two channels are requesting data at approximately the same rate, the resulting request stream forwarded by the need-based arbiter may be: EeOoEeOoEeOo. In this situation, bandwidth reduction may occur as often as every two requests.” The third indication is the request that causes the reduced bandwidth.)

As per claim 5, Wasserman anticipates the claimed: The apparatus as recited in claim 4, wherein the control circuit is configured to convey the third indication responsive to the display controller completing the prefetch of the display data from the memory subsystem. (Wasserman [0137]-[0138]: “[0137]: By using a lockstep mode, the arbiter may prevent the extreme bandwidth loss that may occur for small values of N. For example, if N=1, a lockstep arbiter may forward the request stream EOeoEOeoEOeo (instead of the request stream EeOoEeOoEeOo, which would be forwarded by a non-lockstep arbiter). Thus, by rearranging the forwarded request stream to alternate between even and odd requests, a lockstep arbiter may decrease the loss bandwidth for the two request streams. As a result, lockstep mode may reduce the inefficiencies caused by sharing the frame buffer between multiple display channels. [0138] Since the channels are prefetching, using a lockstep mode may not cause any channel to `starve` for data as long as the channels' requesters take into account the additional delay that may result from the lockstep mechanism. Thus, each request may be configured to prefetch data far enough in advance to account for the delay that occurs when a request in the wholesale loop has to wait for two consecutive requests from another channel to be serviced.” The requests described above that decrease the bandwidth are the third indication. The requests, including this one, can prefetch the data. Thus, the prefetch is concluded.)
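The cited lockstep behavior is easy to reproduce. A minimal sketch of the two request orderings quoted from Wasserman [0135]/[0137] (the `arbitrate` function and channel streams are invented for illustration; this is not code from the reference):

```python
from itertools import cycle, islice

def arbitrate(streams, burst):
    """Forward `burst` consecutive requests from each channel in round-robin
    order. burst=1 models the plain need-based arbiter; burst=2 models the
    cited "lockstep" mode. Sketch only: assumes equal-rate channels."""
    iters = [iter(s) for s in streams]
    forwarded = []
    for chan in cycle(iters):
        chunk = list(islice(chan, burst))
        if not chunk:  # stop once a channel runs out of requests
            break
        forwarded.extend(chunk)
    return "".join(forwarded)

chan_a = "EOEOEO"  # channel A's stream alternates even (E) and odd (O) requests
chan_b = "eoeoeo"  # channel B's stream does the same (lowercase)

print(arbitrate([chan_a, chan_b], burst=1))  # EeOoEeOoEeOo (even follows even)
print(arbitrate([chan_a, chan_b], burst=2))  # EOeoEOeoEOeo (strict alternation)
```

The two printed streams match the examples quoted in the Office Action: with burst=1 an even request can follow an even request (bank-conflict bandwidth loss), while burst=2 restores strict even/odd alternation.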
As per claims 18-19, these claims are similar in scope to limitations recited in claims 4-5, respectively, and thus are rejected under the same rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3, 8-9, 11-12, 15-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Wasserman in view of Ray (Pub. No. US 20230014565 A1).

As per claim 2, Wasserman fails to disclose the claimed limitations. However, Wasserman in combination with Ray teaches the apparatus as recited in claim 1, wherein, responsive to the second indication, the control circuit causes a change to one or more arbitration attributes associated with memory requests generated by one or more computing clients. (Ray [0403]: “The memory access circuitry can then submit a memory access request to the L1 cache with the arbitrated cache attributes (3109). The arbitrated cache attributes are the L1 and L3 cache attributes that are selected from multiple sources based in source priority”. The attributes are used to make memory requests, e.g. in [0404]: “FIG. 32 illustrates a method 3200 of determining L3 cache attributes for memory requests, according to an embodiment. In one embodiment, the method 3200 is performed by cache control logic associated with an L3 cache of a graphics processor or compute accelerator.” Ray teaches that it can be configured for multiple clients, e.g. in [0147]: “In one embodiment, each graphics processing engine 431-432, N may be presented to the hypervisor 496 as a distinct graphics processor device. QoS settings can be configured for clients of a specific graphics processing engine 431-432, N and data isolation”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the change of arbitration attributes associated with memory requests from different clients, as taught by Ray, with the system of Wasserman in order to organize and characterize the uses of the memory taught by Wasserman.

As per claim 3, Wasserman fails to disclose the claimed limitations. However, Wasserman in combination with Ray teaches the apparatus as recited in claim 2, wherein the arbitration attributes include a priority level (Ray [0383]: “The priority associated with prefetch operations indicates a pre-configured likelihood that prefetched data will be selected for eviction by the L1 cache replacement algorithm based on the rate or frequency in which the prefetched data is accessed.
In one embodiment, the cacheability attribute for prefetched data determines the cache replacement policy that is used when determining whether to evict that data, with multiple cache replacement policies being active for different sets of cache lines.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the priority levels of the attributes of memory requests as taught by Ray with the system of Wasserman in order to use the characteristics of the request to give different requests more importance than others and perform the requested tasks in an optimized manner.

Claim 8 has similar limitations to claims 1 and 2; thus, it is rejected under the same rationale as claims 1 and 2. As per claims 9 and 16, these claims are similar in scope to limitations recited in claim 2 and thus are rejected under the same rationale. As per claims 11-12, these claims are similar in scope to limitations recited in claims 4-5, respectively, and thus are rejected under the same rationale. Claim 15 has similar limitations to claims 1-2; thus, it is rejected under the same rationale as claims 1-2.

Claims 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wasserman in view of Ray and further in view of Fukami (US Patent No. 10423558 B1).

As per claim 10, Wasserman modified by Ray fails to disclose the claimed: The method as recited in claim 9, wherein the arbitration attributes include a size of the data. Fukami teaches the method as recited in claim 9, wherein the arbitration attributes include a size of the data (Fukami col. 6 lines 7-16: “an application identifier or type, such as a real-time application, an indication of traffic type, such as real-time traffic or low latency traffic or bulk traffic, a bandwidth requirement or a latency tolerance requirement, and an indication of a data size associated with the request and so forth. Similarly, data selection logic 196 in control and data arbiters 138 selects the write data of the write request among other data transactions based on one or more of a priority level, age, data size of the write data, and so forth.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the size of the data as taught by Fukami with the system of Wasserman modified by Ray in order to measure the size of the data in the system, prioritize the data to be processed, and allocate the proper resources to it.

Regarding claim 17, Wasserman fails to teach "the computing system as recited in claim 16, wherein the arbitration attributes include a source identifier of the data." However, Fukami in combination with Wasserman and Ray teaches the claimed computing system as recited in claim 16, wherein the arbitration attributes include a source identifier of the data. (Fukami col. 6 lines 5-15: “Control selection logic 192 in control and data arbiters 138 selects the write command among other commands and messages based on attributes that include one or more of an age, a priority level, a quality-of-service parameter, a source identifier, an application identifier or type, such as a real-time application, an indication of traffic type, such as real-time traffic or low latency traffic or bulk traffic, a bandwidth requirement or a latency tolerance requirement, and an indication of a data size associated with the request and so forth. Similarly, data selection logic 196 in control and data arbiters 138 selects the write data of the write request among other data transactions based on one or more of a priority level, age, data size of the write data, and so forth”. The control selection logic uses different attributes to arbitrate write commands to memory. Along with age, priority level, and application identifier, a source identifier is one of the attributes used.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a source identifier for memory requests as taught by Fukami with the system of Wasserman in order to change the structure of the memory request based on its source, or the client the request came from.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Wasserman in view of Khoury (US Patent No. 8810589 B1).

As per claim 7, Wasserman alone does not explicitly teach the remaining claim limitations. However, Wasserman in combination with Khoury teaches the claimed: The apparatus as recited in claim 1, wherein the one or more operating parameters include a power state change of the memory subsystem (please see Khoury col. 5 lines 1-10: "The control module 101 suitably uses a relatively low power memory, such as a L2 cache, that is configured to selectively enable, in a power saving mode, a pre-fetch-and-forward style of operation to pre-fetch pixel data in a burst form from the main memory 102 into the low power memory, and forward the pixel data in a stream form from the low power memory to the display module 103 for producing images on the screen. Thus, the main memory 102 enters the power-saving mode between bursts of data pre-fetching, when no memory access activity is performed".) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the change in power state related to a memory as taught by Khoury with the system of Wasserman in order to change the power being provided to the memory system to better accommodate the memory transaction.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Wasserman in view of Tinnakornsrisuphap (Pub. No. US 20170187595 A1).

As per claim 14, Wasserman alone does not explicitly teach the claimed limitations. However, Wasserman in combination with Tinnakornsrisuphap teaches the claimed:
The method as recited in claim 8, wherein the condition comprises a change in a number of memory accesses generated by one or more clients exceeds a threshold. (Tinnakornsrisuphap concerns wireless access for client devices, and increasing bandwidth based on several circumstances relating to the configuration of access points. Tinnakornsrisuphap [0003]: “To increase available bandwidth, access points are configured with an increasing number of radios capable of transmitting and receiving signals in a variety of frequency bands. For example, a three-radio access point may be configured to support simultaneous operation on three channels. Three-radio operation may be enabled by communication protocols such as Institute of Electrical and Electronic Engineers (IEEE) 802.11ac. IEEE 802.11ac provides for multi-user multiple-input multiple-output (MU-MIMO) operation, which supports simultaneous communication from the access point to multiple clients. MU-MIMO operation may thus substantially improve communication with the access point.” Tinnakornsrisuphap teaches selecting access points based on the number of clients related to a threshold. Tinnakornsrisuphap [0081]: “Based on the determined capabilities of one or more client devices associated with the access point, the processor may select an access point radio to be activated in block 416. In some embodiments, the processor may select the radio of the access point based on the determination that a number of client devices associated with the access point is above the client threshold. In some embodiments, the processor may select the radio of the access point based on a determination that a number of client devices associated with a particular radio of the access point is above the client threshold. In some embodiments, the processor may select the radio of the access point based on the determination that the access point has received an association request from a priority client.” The change of access point can result in an increase in bandwidth. Tinnakornsrisuphap [0085]: “For example, the access point 200a, 200b and range extender 200c, 200d may use one or more of a 2.4 GHz radio and a 5 GHz radio for communication. A 2.4 GHz radio may provide a longer communication range and/or additional communication bandwidth between the access point 200a, 200b and the range extender 200c, 200d. Thus, in some embodiments, the access point 200a, 200b and/or the range extender 200c, 200d may evaluate a communication link between the access points (such as for example a wireless backhaul communication link), and one or both of the access point and the ranges may determine to migrate a communication link to a radio that may provide a longer range and/or additional bandwidth (e.g., from a 5 GHz radio to a 2.4 GHz radio).” While it is used for radio links, it would have been obvious to apply the indications for the change of bandwidth to a communication system for display data.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the change of bandwidth for memory access related to the number of client devices as taught by Tinnakornsrisuphap with the system of Wasserman in order to adjust the requests for data transmission and the bandwidth based on the number of clients requesting access to the data.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS JOHN FOSTER whose telephone number is (571) 272-5053. The examiner can normally be reached Mon and Fri 8:30-6, Tues-Thurs 7:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/THOMAS JOHN FOSTER/
Examiner, Art Unit 2616

/HAI TAO SUN/
Primary Examiner, Art Unit 2616
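For claim-mapping purposes, the indication sequence recited in claims 1, 4, and 5 can be summarized as a small sketch (all names invented for illustration; this is not code from the application or the cited references):

```python
# Hypothetical model of the claimed control circuit's behavior: once the
# trigger condition holds, it emits three ordered indications.
def indication_sequence(condition_met):
    """Return the indications sent when the condition for changing
    memory-subsystem operating parameters is satisfied."""
    if not condition_met:
        return []
    return [
        ("fabric", "increase display bandwidth"),   # first indication (claim 1)
        ("display", "prefetch display data"),       # second indication (claim 1)
        ("fabric", "decrease display bandwidth"),   # third indication, sent after
                                                    # the prefetch completes (claims 4-5)
    ]
```

This ordering is the crux of the dispute: the examiner reads Wasserman's arbiter requests as the first and third indications and the requesters' prefetch requests as the second, whereas the claims tie the third indication to completion of the prefetch.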

Prosecution Timeline

Jun 01, 2023
Application Filed
Apr 12, 2025
Non-Final Rejection — §102, §103
Aug 07, 2025
Response Filed
Sep 24, 2025
Final Rejection — §102, §103
Jan 16, 2026
Applicant Interview (Telephonic)
Jan 16, 2026
Examiner Interview Summary
Feb 26, 2026
Request for Continued Examination
Feb 27, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597097
INFORMATION PROCESSING DEVICE, MEASUREMENT SYSTEM, IMAGE PROCESSING METHOD, AND NON-TRANSITORY STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12592031
IMAGE PROCESSING METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12586272
Methods and Systems for Transferring Hair Characteristics from a Reference Image to a Digital Image
2y 5m to grant Granted Mar 24, 2026
Patent 12586158
IMAGE SIGNAL PROCESSOR FOR A COMPOSITE CHROMINANCE IMAGE AND A COMPOSITE WHITE IMAGE
2y 5m to grant Granted Mar 24, 2026
Patent 12586143
METHOD, DEVICE, AND PRODUCT FOR GPU CLUSTER
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 95%
With Interview: 99% (+7.1%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
