Prosecution Insights
Last updated: April 19, 2026
Application No. 19/172,788

TIMING CONTROLLER AND METHOD OF DRIVING TIMING CONTROLLER

Status: Non-Final OA (§103)
Filed: Apr 08, 2025
Examiner: AU, SCOTT D
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: LX SEMICON CO., LTD.
OA Round: 1 (Non-Final)
Prediction: Favorable
Grant Probability: 77% (88% with examiner interview)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 0m

Examiner Intelligence

Career Allow Rate: 77%, above average (397 granted / 518 resolved; +14.6% vs TC avg)
Interview Lift: +11.4% for resolved cases with an interview (moderate)
Typical Timeline: 3y 0m average prosecution
Currently Pending: 18
Total Applications: 536 (across all art units)

Statute-Specific Performance

§101: 2.3% (-37.7% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§103: 66.0% (+26.0% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)

Based on career data from 518 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/08/2025 has been placed in the record and considered by the examiner.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9, 11-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Holland et al. (US 2014/0232731, hereinafter Holland) in view of Hong (US 2005/0140619, hereinafter Hong), and Byun et al. (US 2020/0334166, hereinafter Byun).

Referring to claim 1, Holland discloses a method of driving a controller ([0016], Fig. 1; controller unit of system 100), comprising: requesting, by a processor (Fig. 1; processor unit 108, display processing unit 110), first line data from a controller (Fig. 1; fabric 102) ([0025]; In various embodiments, memory 106 may be controlled by memory controller 104. Accordingly, memory controller 104 may facilitate the performance of read and write operations responsive to data requests received via fabric 102 from units 108 and 110.); extracting, by the controller (Fig. 1; fabric 102) ([0020]; Accordingly, in some embodiments, fabric 102 may include one or more buses, controllers, interconnects, and/or bridges. In some embodiments, fabric 102 may implement a single communication protocol and elements coupled to fabric 102 may convert from the single communication protocol to other communication protocols internally.), the first line data from a storage memory and loading the extracted first line data into a shared memory (Fig. 1; memory 106) ([0026]; In various embodiments, storage device 112 may store program instructions (e.g., applications) executable by processor unit 108. In certain embodiments, storage device 112 may store a plurality of image data that may be transferred to memory 106 (i.e., so that future requests for that data can be served faster) or transferred to display processing unit 110 directly.); transferring, by the controller (Fig. 1; fabric 102), the first line data loaded into the shared memory (Fig. 1; memory 106) to the processor (Fig. 1; processor unit 108, display processing unit 110) ([0026]; In various embodiments, storage device 112 may store program instructions (e.g., applications) executable by processor unit 108. In certain embodiments, storage device 112 may store a plurality of image data that may be transferred to memory 106 (i.e., so that future requests for that data can be served faster) or transferred to display processing unit 110 directly.).
However, Holland does not explicitly disclose a method of driving a timing controller, comprising: reading, by the controller, update data for updating an operation of the storage memory from the storage memory in response to the transfer of the first line data.

In an analogous art, Hong discloses a method of driving a timing controller (Hong-[0042]; FIG. 5 is a detailed block diagram showing the timing controller of the driving apparatus shown in FIG. 4. As shown in FIG. 5, the timing controller 38 includes a gate control signal generator 50, a data control signal generator 52 and the encoding block 40. The gate control signal generator 50 generates the gate control signal GCS using the vertical/horizontal synchronizing signals V and H, the clock signal DCLK and the data enable signal DE. In particular, the gate control signal GCS may include a gate start pulse GSP, a gate shift clock GSC and a gate output enable signal GOE.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Hong to the system of Holland in order to minimize the bit transition amount of the pixel data before applying it to the data driver.

However, Holland in view of Hong does not explicitly disclose reading, by the controller, update data for updating an operation of the storage memory from the storage memory in response to the transfer of the first line data. In an analogous art, Byun discloses reading, by the controller, update data for updating an operation of the storage memory from the storage memory in response to the transfer of the first line data (Byun-[0141], Fig. 2-5; The controller 130 in the memory system 110 can control (e.g., create, delete, update, etc.) the first mapping information or the second mapping information, and store either the first mapping information or the second mapping information in the memory device 150. Because the host memory 106 in the host 102 is a volatile memory, the metadata 166 stored in the host memory 106 may disappear when an event such as interruption of power supply to the host 102 and the memory system 110 occurs. Accordingly, the controller 130 in the memory system 110 keep the latest state of the metadata 166 stored in the host memory 106 of the host 102, and also store the first mapping information or the second mapping information in the memory device 150. The first mapping information or the second mapping information stored in the memory device 150 can be the most recent one.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Byun to the system of Holland in view of Hong in order to improve the data input and output performance of the memory system.

Referring to claim 2, Holland as modified by Byun discloses further comprising loading the update data read by the controller into the shared memory (Byun-[0312]; In this case, the controller 130 may not request one map data L2P, but may request a map segment as a plurality of map data L2P for a preset range of logical addresses including the corresponding logical address. Further, the memory device 150 may transmit the corresponding map segment to the controller 130. When specific map data L2P is updated, the controller 130 may update the map data L2P in the map segment loaded to the memory 144, and then program the updated map segment to the memory device 150 at a specific time point.).

Referring to claim 3, Holland as modified by Byun discloses wherein the loading includes loading the update data read by the controller into a reserved area of the shared memory (Byun-[0136]; As an amount of data which can be stored in the memory system 110 increases, an amount of metadata corresponding to the data stored in the memory system 110 also increases. When storage capability used to load the metadata in the memory 144 of the controller 130 is limited or restricted, the increase in an amount of loaded metadata may cause an operational burden on operations of the controller 130. For example, because of limitation of space or region allocated for metadata in the memory 144 of the controller 130, only a part of the metadata may be loaded.).

Referring to claim 4, Holland as modified by Byun discloses wherein the reserved area is a different area from an area in which the line data is loaded among storage areas of the shared memory (Byun-[0109]; the memory 144 may be implemented with a volatile memory. For example, the memory 144 may be implemented with a static random access memory (SRAM), a dynamic random access memory (DRAM), or both. Thus, SRAM has reserved memory space…. and Fig. 20; memory 144 has a map cache 1441 and a map management data 1442, which reads on “different storage areas”.).

Referring to claim 5, Holland as modified by Byun discloses wherein the shared memory (Fig. 20; memory 144) is included in the controller (Fig. 20; controller 130), and the storage memory (Fig. 20; memory 144) stores a plurality of pieces of region data including a plurality of pieces of line data (Byun-Fig. 20; memory 144 comprises a map cache 1441 that stores segment data, which reads on “pieces of region data”, and a map management data 1442 that stores MM/MISS tables, which reads on “pieces of line data”).
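As an editorial aid, the claimed flow mapped in claims 1-5 above (processor requests line data; controller extracts it from storage memory, loads it into a shared memory, transfers it, then reads update data into a reserved area in response to the transfer) can be summarized with a minimal sketch. All class, field, and data names below are hypothetical illustrations and are not drawn from Holland, Hong, or Byun.

```python
# Hypothetical model of the claimed timing-controller data flow (claims 1-5).
# "storage" holds line data and update data; "shared" has a line-data area
# and a separate reserved area, mirroring the claim 3-4 distinction.

class TimingControllerSketch:
    def __init__(self, storage):
        self.storage = storage
        self.shared = {"line": None, "reserved": None}  # shared memory areas

    def load_line(self, address):
        # Claim 1 "extracting"/"loading": pull the addressed line data from
        # storage memory into the shared memory's line-data area.
        self.shared["line"] = self.storage["lines"][address]
        return "completion"  # claim 7: completion signal to the processor

    def transfer_line(self):
        # Claim 1 "transferring": hand the loaded line data to the processor,
        # then read update data into the reserved area in response to the
        # transfer (claims 1, 3-4).
        data = self.shared["line"]
        self.shared["reserved"] = self.storage["update"]
        return data

storage = {"lines": {0x10: "line-0 pixels"}, "update": "update-blob"}
tcon = TimingControllerSketch(storage)
assert tcon.load_line(0x10) == "completion"
assert tcon.transfer_line() == "line-0 pixels"
assert tcon.shared["reserved"] == "update-blob"
```

The sketch only illustrates the ordering the claims recite; it does not assert how any cited reference actually implements these steps.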
Referring to claim 6, Holland as modified by Byun discloses wherein the requesting includes: transmitting, by the processor, a first request signal requesting the loading of the first line data, and a first address signal for the first line data to be loaded to the controller (Byun-[0053]; In an embodiment, a memory system can include a memory device configured to store a piece of data in a location which is distinguished by a physical address; and a controller configured to generate a piece of map data associating a logical address, inputted along with a request from an external device, with the physical address and to determine a timing of transferring the piece of map data into the external device to avoid decreasing an input/output throughput of the memory system…, [0054]; By the way of example but not limitation, the controller can be configured to determine a map miss count based on whether the physical address associated with the logical address, which is inputted from the external device, is loaded in a cache memory, to calculate a map miss ratio based on a total read count and the map miss count and to determine the timing of transferring the piece of map data based on the map miss ratio…, and [0055]; The controller can be configured to select the piece of map data, which would be transferred, based on a read count corresponding to a map segment including the piece of map data.); and receiving, by the controller, the first request signal and the first address signal (Byun-[0053]; In an embodiment, a memory system can include a memory device configured to store a piece of data in a location which is distinguished by a physical address; and a controller configured to generate a piece of map data associating a logical address, inputted along with a request from an external device, with the physical address and to determine a timing of transferring the piece of map data into the external device to avoid decreasing an input/output throughput of the memory system.).

Referring to claim 7, Holland as modified by Byun discloses wherein the loading includes: extracting, by the controller, the first line data corresponding to the first address signal from the storage memory in response to the first request signal (Byun-[0054]; By the way of example but not limitation, the controller can be configured to determine a map miss count based on whether the physical address associated with the logical address, which is inputted from the external device, is loaded in a cache memory); loading, by the controller, the extracted first line data into the shared memory (Byun-[0054]; By the way of example but not limitation, the controller can be configured to determine a map miss count based on whether the physical address associated with the logical address, which is inputted from the external device, is loaded in a cache memory); and transmitting, by the controller, a first completion signal to the processor upon completing the loading of the extracted first line data (Byun-[0055]; The controller can be configured to select the piece of map data, which would be transferred, based on a read count corresponding to a map segment including the piece of map data).
Referring to claim 8, Holland as modified by Byun discloses wherein the reading includes reading, by the controller, the update data from the storage memory, and loading the read update data into the shared memory (Byun- [0053]; In an embodiment, a memory system can include a memory device configured to store a piece of data in a location which is distinguished by a physical address; and a controller configured to generate a piece of map data associating a logical address, inputted along with a request from an external device, with the physical address and to determine a timing of transferring the piece of map data into the external device to avoid decreasing an input/output throughput of the memory system…, [0054]; By the way of example but not limitation, the controller can be configured to determine a map miss count based on whether the physical address associated with the logical address, which is inputted from the external device, is loaded in a cache memory, to calculate a map miss ratio based on a total read count and the map miss count and to determine the timing of transferring the piece of map data based on the map miss ratio…, and [0108]; The memory 144 may be a sort of working memory in the memory system 110 or the controller 130, while storing temporary or transactional data occurred or delivered for operations in the memory system 110 and the controller 130. For example, the memory 144 may temporarily store a piece of read data outputted from the memory device 150 in response to a request from the host 102, before the piece of read data is outputted to the host 102.). 
Referring to claim 9, Holland as modified by Byun discloses further comprising: after loading all the update data into the shared memory, receiving, by the controller, a second request signal requesting loading of second line data and a second address signal for the second line data to be loaded from the processor; extracting, by the controller, the second line data corresponding to the second address signal from the storage memory in response to the second request signal; loading, by the controller, the extracted second line data into the shared memory; and upon completing the loading of the extracted second line data, transmitting, by the controller, a second completion signal to the processor (Byun-[0092]; The host 102 may transmit a plurality of commands corresponding to the user's requests into the memory system 110, thereby performing operations corresponding to commands within the memory system 110…, [0108]; The memory 144 may be a sort of working memory in the memory system 110 or the controller 130, while storing temporary or transactional data occurred or delivered for operations in the memory system 110 and the controller 130. For example, the memory 144 may temporarily store a piece of read data outputted from the memory device 150 in response to a request from the host 102, before the piece of read data is outputted to the host 102…, and [0114]; When the memory device 150 includes a plurality of dies (or a plurality of chips) including non-volatile memory cells, the controller 130 may be configured to perform a parallel processing regarding plural requests or commands inputted from the host 102 in to improve performance of the memory system 110. For example, the transmitted requests or commands may be divided into and processed simultaneously in a plurality of dies or a plurality of chips in the memory device 150. Thus, the requests read on “receiving, by the controller, a second request signal requesting loading of second line data and a second address signal for the second line data to be loaded from the processor”, which performs the repeating steps of the first request.).

Referring to claim 11, Holland discloses a controller ([0016], Fig. 1; controller unit of system 100) comprising: a storage memory (Fig. 1; storage 112) that stores a plurality of pieces of region data including a plurality of pieces of line data ([0026]; In various embodiments, storage device 112 may store program instructions (e.g., applications) executable by processor unit 108. In certain embodiments, storage device 112 may store a plurality of image data that may be transferred to memory 106 (i.e., so that future requests for that data can be served faster) or transferred to display processing unit 110 directly.); a processor (Fig. 1; processor unit 108, display processing unit 110) that requests loading of the plurality of pieces of line data ([0025]; In various embodiments, memory 106 may be controlled by memory controller 104. Accordingly, memory controller 104 may facilitate the performance of read and write operations responsive to data requests received via fabric 102 from units 108 and 110. Memory controller 104 may perform various memory physical interface (PHY) functions such as memory refreshing, memory row-address and column-address strobe operations, etc. As discussed below, memory controller 104 may also be used to power-manage memory 106. The image data may be accessed via fabric 102 and transferred to display processing unit 110 as discussed further below.), receives and processes the loaded line data, and transmits the processed line data ([0026]; In various embodiments, storage device 112 may store program instructions (e.g., applications) executable by processor unit 108. In certain embodiments, storage device 112 may store a plurality of image data that may be transferred to memory 106 (i.e., so that future requests for that data can be served faster) or transferred to display processing unit 110 directly.); and a controller (Fig. 1; fabric 102) that includes a shared memory (Fig. 1; memory 106), in which the line data is loaded and transmitted, and loads line data extracted from the storage memory (Fig. 1; storage 112) into the shared memory (Fig. 1; memory 106) in response to the request signal ([0026]; In various embodiments, storage device 112 may store program instructions (e.g., applications) executable by processor unit 108. In certain embodiments, storage device 112 may store a plurality of image data that may be transferred to memory 106 (i.e., so that future requests for that data can be served faster) or transferred to display processing unit 110 directly.), wherein the controller (Fig. 1; fabric 102) is configured to: transfer the line data loaded into the shared memory (Fig. 1; memory 106) to the processor (Fig. 1; processor unit 108, display processing unit 110) ([0026]; In various embodiments, storage device 112 may store program instructions (e.g., applications) executable by processor unit 108. In certain embodiments, storage device 112 may store a plurality of image data that may be transferred to memory 106 (i.e., so that future requests for that data can be served faster) or transferred to display processing unit 110 directly.).

However, Holland does not explicitly disclose a timing controller comprising: a processor that requests loading of the plurality of pieces of line data included in the region data using a request signal and an address signal; and read update data for updating an operation of the storage memory from the storage memory in response to the transfer of the line data.

In an analogous art, Hong discloses a timing controller (Hong-[0042]; FIG. 5 is a detailed block diagram showing the timing controller of the driving apparatus shown in FIG. 4. As shown in FIG. 5, the timing controller 38 includes a gate control signal generator 50, a data control signal generator 52 and the encoding block 40. The gate control signal generator 50 generates the gate control signal GCS using the vertical/horizontal synchronizing signals V and H, the clock signal DCLK and the data enable signal DE. In particular, the gate control signal GCS may include a gate start pulse GSP, a gate shift clock GSC and a gate output enable signal GOE.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Hong to the system of Holland in order to minimize the bit transition amount of the pixel data before applying it to the data driver.

However, Holland does not explicitly disclose a processor that requests loading of the plurality of pieces of line data included in the region data using a request signal and an address signal; and read update data for updating an operation of the storage memory from the storage memory in response to the transfer of the line data. In an analogous art, Byun discloses a processor that requests loading of the plurality of pieces of line data included in the region data using a request signal and an address signal (Byun-[0044]; When the external device may include the map information, the external device can transfer a request along with the logical address which the external device uses for indicating a piece of data and the physical address which the memory system independently uses but the external device does not use); and read update data for updating an operation of the storage memory from the storage memory in response to the transfer of the line data (Byun-[0141], Fig. 2-5; The controller 130 in the memory system 110 can control (e.g., create, delete, update, etc.) the first mapping information or the second mapping information, and store either the first mapping information or the second mapping information in the memory device 150. Because the host memory 106 in the host 102 is a volatile memory, the metadata 166 stored in the host memory 106 may disappear when an event such as interruption of power supply to the host 102 and the memory system 110 occurs. Accordingly, the controller 130 in the memory system 110 keep the latest state of the metadata 166 stored in the host memory 106 of the host 102, and also store the first mapping information or the second mapping information in the memory device 150. The first mapping information or the second mapping information stored in the memory device 150 can be the most recent one.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Byun to the system of Holland in view of Hong in order to improve the data input and output performance of the memory system.

Referring to claim 12, Holland as modified by Byun discloses wherein the controller reads update data from the storage memory and loads the read update data into the shared memory while the processor is processing the line data (Byun-[0312]; In this case, the controller 130 may not request one map data L2P, but may request a map segment as a plurality of map data L2P for a preset range of logical addresses including the corresponding logical address. Further, the memory device 150 may transmit the corresponding map segment to the controller 130. When specific map data L2P is updated, the controller 130 may update the map data L2P in the map segment loaded to the memory 144, and then program the updated map segment to the memory device 150 at a specific time point.).
Referring to claim 13, Holland as modified by Byun discloses wherein the controller loads the read update data into a reserved area of the shared memory (Byun-[0136]; As an amount of data which can be stored in the memory system 110 increases, an amount of metadata corresponding to the data stored in the memory system 110 also increases. When storage capability used to load the metadata in the memory 144 of the controller 130 is limited or restricted, the increase in an amount of loaded metadata may cause an operational burden on operations of the controller 130. For example, because of limitation of space or region allocated for metadata in the memory 144 of the controller 130, only a part of the metadata may be loaded.).

Referring to claim 14, Holland as modified by Byun discloses wherein the reserved area is a different area from an area in which the line data is loaded among storage areas of the shared memory (Byun-[0109]; the memory 144 may be implemented with a volatile memory. For example, the memory 144 may be implemented with a static random access memory (SRAM), a dynamic random access memory (DRAM), or both. Thus, SRAM has reserved memory space…. and Fig. 20; memory 144 has a map cache 1441 and a map management data 1442, which reads on “different storage areas”.).
Referring to claim 15, Holland as modified by Byun discloses wherein the controller, upon receiving the request signal and the address signal from the processor, extracts the line data corresponding to the address signal from the storage memory in response to the request signal, and loads the extracted line data into the shared memory (Byun-[0053]; In an embodiment, a memory system can include a memory device configured to store a piece of data in a location which is distinguished by a physical address; and a controller configured to generate a piece of map data associating a logical address, inputted along with a request from an external device, with the physical address and to determine a timing of transferring the piece of map data into the external device to avoid decreasing an input/output throughput of the memory system…, and [0054]; By the way of example but not limitation, the controller can be configured to determine a map miss count based on whether the physical address associated with the logical address, which is inputted from the external device, is loaded in a cache memory, to calculate a map miss ratio based on a total read count and the map miss count and to determine the timing of transferring the piece of map data based on the map miss ratio.).

Referring to claim 16, Holland as modified by Byun discloses wherein the controller reads the update data from the storage memory in response to the transfer of the line data, and loads the read update data into the shared memory (Byun-[0054]; By the way of example but not limitation, the controller can be configured to determine a map miss count based on whether the physical address associated with the logical address, which is inputted from the external device, is loaded in a cache memory).
Referring to claim 17, Holland as modified by Byun discloses wherein the controller, when a request for next line data is received from the processor after loading all the update data into the shared memory, extracts the next line data from the storage memory, and stores the extracted next line data in the shared memory (Byun- [0053]; In an embodiment, a memory system can include a memory device configured to store a piece of data in a location which is distinguished by a physical address; and a controller configured to generate a piece of map data associating a logical address, inputted along with a request from an external device, with the physical address and to determine a timing of transferring the piece of map data into the external device to avoid decreasing an input/output throughput of the memory system…, [0054]; By the way of example but not limitation, the controller can be configured to determine a map miss count based on whether the physical address associated with the logical address, which is inputted from the external device, is loaded in a cache memory, to calculate a map miss ratio based on a total read count and the map miss count and to determine the timing of transferring the piece of map data based on the map miss ratio…, and [0108]; The memory 144 may be a sort of working memory in the memory system 110 or the controller 130, while storing temporary or transactional data occurred or delivered for operations in the memory system 110 and the controller 130. For example, the memory 144 may temporarily store a piece of read data outputted from the memory device 150 in response to a request from the host 102, before the piece of read data is outputted to the host 102.). Referring to claim 20, Holland as modified by Byun discloses wherein the update data has the same size as a size of one sector of the storage memory ([0210]; User data to be stored in the memory device 150 may be divided by the unit of a segment having a preset size. 
The preset size may be the same as a minimum data size required for the memory system 110 to interoperate with the host 102. According to an embodiment, a size of a data segment as the unit of user data may be determined according to a configuration and a control method in the memory device 150.). Claims 10 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Holland et al. (US 2014/0232731 hereinafter Holland) in view of Hong ( US 2005/0140619 hereinafter Hong), Byun et al. (US 2020/0334166 hereinafter Byun), and Schober (US 6,976,142 hereinafter Schober). Referring to claim 10, Holland as modified by Byun discloses wherein the reading further includes: receiving, by the controller, a second request signal requesting loading of second line data and a second address signal for the second line data to be loaded from the processor while loading the update data into the shared memory; upon loading all the update data into the shared memory, extracting, by the controller, the second line data corresponding to the second address signal from the storage memory in response to the second request signal; loading, by the controller, the extracted second line data into the shared memory; and upon completing the loading of the extracted second line data, transmitting, by the controller, a second completion signal to the processor (Byun- [0092]; The host 102 may transmit a plurality of commands corresponding to the user's requests into the memory system 110, thereby performing operations corresponding to commands within the memory system 110…, [0108]; The memory 144 may be a sort of working memory in the memory system 110 or the controller 130, while storing temporary or transactional data occurred or delivered for operations in the memory system 110 and the controller 130. 
For example, the memory 144 may temporarily store a piece of read data outputted from the memory device 150 in response to a request from the host 102, before the piece of read data is outputted to the host 102…. and [0114]; When the memory device 150 includes a plurality of dies (or a plurality of chips) including non-volatile memory cells, the controller 130 may be configured to perform a parallel processing regarding plural requests or commands inputted from the host 102 in to improve performance of the memory system 110. For example, the transmitted requests or commands may be divided into and processed simultaneously in a plurality of dies or a plurality of chips in the memory device 150. Thus, the requests read on “receiving, by the controller, a second request signal requesting loading of second line data and a second address signal for the second line data to be loaded from the processor while loading the update data into the shared memory” which performs the repeating steps of the first request.). However, Holland in view of Hong, and Byun as applied above does not explicitly disclose holding, by the controller, the second request signal and the second address signal. In an analogous art, Schober discloses holding, by the controller, the second request signal and the second address signal (Schober- Col. 7 lines 36-49; For example, in situations where access to the same memory location is attempted within two clock cycles of a preceding read, the data may be stale. This results because while the second request is accessing the data, the previous request has already retrieved the data and may be modifying the data. Thus, the second request is not working with an accurate copy of the data. The typical method of preventing the retrieval of stale data (e.g., inaccurate data) is to hold the second request back until the first request has modified (e.g., via computations, etc.) and written the data back to memory. 
However, in the exemplary embodiment, as will be explained in FIG. 9 below, bypasses may be utilized to allow the access of data in consecutive clock cycles while at the same time assuring the accuracy of the data.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Schober to the system of Holland in view of Hong and Byun in order to prevent the retrieval of stale data upon the second request.

Referring to claim 18, Holland in view of Hong and Byun as applied above does not explicitly disclose wherein the controller, when a request for next line data is received from the processor while loading the update data into the shared memory, holds the request for the next line data. In an analogous art, Schober discloses wherein the controller, when a request for next line data is received from the processor while loading the update data into the shared memory, holds the request for the next line data (Schober - Col. 7, lines 36-49; For example, in situations where access to the same memory location is attempted within two clock cycles of a preceding read, the data may be stale. This results because while the second request is accessing the data, the previous request has already retrieved the data and may be modifying the data. Thus, the second request is not working with an accurate copy of the data. The typical method of preventing the retrieval of stale data (e.g., inaccurate data) is to hold the second request back until the first request has modified (e.g., via computations, etc.) and written the data back to memory. However, in the exemplary embodiment, as will be explained in FIG. 9 below, bypasses may be utilized to allow the access of data in consecutive clock cycles while at the same time assuring the accuracy of the data.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Schober to the system of Holland in view of Hong and Byun in order to prevent the retrieval of stale data upon the second request.

Referring to claim 19, Holland in view of Hong and Byun as applied above does not explicitly disclose wherein the controller holds the request for the next line data during a time taken to load all the update data into the shared memory. In an analogous art, Schober discloses wherein the controller holds the request for the next line data during a time taken to load all the update data into the shared memory (Schober - Col. 7, lines 36-49; For example, in situations where access to the same memory location is attempted within two clock cycles of a preceding read, the data may be stale. This results because while the second request is accessing the data, the previous request has already retrieved the data and may be modifying the data. Thus, the second request is not working with an accurate copy of the data. The typical method of preventing the retrieval of stale data (e.g., inaccurate data) is to hold the second request back until the first request has modified (e.g., via computations, etc.) and written the data back to memory. However, in the exemplary embodiment, as will be explained in FIG. 9 below, bypasses may be utilized to allow the access of data in consecutive clock cycles while at the same time assuring the accuracy of the data.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Schober to the system of Holland in view of Hong and Byun in order to prevent the retrieval of stale data upon the second request.
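The hold-then-serve sequence that claims 10 and 18-19 recite (a second line-data request arriving while update data is still loading into the shared memory is held, then served and acknowledged with a completion signal once the load finishes) can be sketched as follows. This is a minimal Python illustration only; every name in it is hypothetical and none of the code comes from Holland, Hong, Byun, or Schober.

```python
from collections import deque

class SharedMemoryController:
    """Illustrative sketch of the hold-then-serve flow in the claim language.

    A request arriving while update data is still loading is held, and is
    served only after the load completes, which avoids the stale-data hazard
    Schober describes. All names here are hypothetical.
    """

    def __init__(self, storage_memory):
        self.storage = storage_memory   # backing store: address -> line data
        self.shared = {}                # shared memory visible to the processor
        self.loading_update = False     # True while update data is being loaded
        self.held = deque()             # requests held during an update load

    def begin_update_load(self, update_data):
        self.loading_update = True
        self.shared.update(update_data)  # load update data into shared memory

    def finish_update_load(self):
        # Upon loading all the update data, serve every held request in order.
        self.loading_update = False
        completions = []
        while self.held:
            request_id, address = self.held.popleft()
            completions.append(self._serve(request_id, address))
        return completions

    def request_line(self, request_id, address):
        # A request arriving mid-load is held, not serviced immediately.
        if self.loading_update:
            self.held.append((request_id, address))
            return None                  # no completion signal yet
        return self._serve(request_id, address)

    def _serve(self, request_id, address):
        line = self.storage[address]        # extract line data from storage memory
        self.shared[address] = line         # load it into shared memory
        return ("completion", request_id)   # completion signal to the processor
```

For example, a request issued between `begin_update_load` and `finish_update_load` returns no completion signal at first; the signal is emitted by `finish_update_load` once the update data is fully loaded.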
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT D AU, whose telephone number is (571) 272-5948. The examiner can normally be reached M-F, generally 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SCOTT D AU/
Examiner, Art Unit 2624

Prosecution Timeline

Apr 08, 2025
Application Filed
Jan 30, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603023
LIGHT MODULATION FOR FOVEATED DISPLAY
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602109
METHOD FOR AUTOMOTIVE DEVICE TO PROJECT IMAGE ONTO WINDSHIELD FOR VIEWING BY PRIMARY VIEWER
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12586523
DISPLAY DEVICE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573350
DISPLAY DEVICE
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573347
DATA DRIVING CIRCUIT AND A DISPLAY DEVICE INCLUDING THE SAME
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
88%
With Interview (+11.4%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 518 resolved cases by this examiner. Grant probability derived from career allow rate.
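As a sanity check on the figures above, the displayed percentages follow from the examiner's career counts, assuming (this is an assumption about the tool, not something it states) that the grant probability is simply the rounded career allow rate and that the interview lift is added to it:

```python
# Counts shown on this page ("397 granted / 518 resolved", "+11.4% Interview Lift").
granted, resolved = 397, 518
interview_lift_pct = 11.4

allow_pct = granted / resolved * 100   # 76.64... percent career allow rate

print(round(allow_pct))                       # 77 -> "Grant Probability"
print(round(allow_pct + interview_lift_pct))  # 88 -> "With Interview"
```

Both rounded values match the 77% and 88% figures displayed, which is consistent with the page's note that grant probability is derived from the career allow rate.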
