Prosecution Insights
Last updated: April 19, 2026
Application No. 17/721,727

DATA PROCESSING SYSTEM, OPERATING METHOD THEREOF, AND COMPUTING SYSTEM USING THE SAME

Final Rejection (§102)

Filed: Apr 15, 2022
Examiner: CHIUSANO, ANDREW TSUTOMU
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: SK Hynix Inc.
OA Round: 2 (Final)
Grant Probability: 55% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 83%

Examiner Intelligence

Grants 55% of resolved cases, with a strong +28% interview lift.

Career Allow Rate: 55% (217 granted / 392 resolved), at TC average
Interview Lift: +28.0% among resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 22 currently pending
Career History: 414 total applications across all art units
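The interview-adjusted figure above appears to follow from simple arithmetic on the career numbers. A minimal sketch, assuming the +28.0% lift is additive in percentage points rather than a multiplicative ratio (the page does not say which):

```python
# Reconstructing the dashboard's headline numbers from the career data.
# Assumption: the +28.0% interview lift is additive percentage points.
granted, resolved = 217, 392
allow_rate = round(granted / resolved * 100)   # career allow rate, as shown
with_interview = allow_rate + 28.0             # interview-adjusted probability
print(allow_rate)       # 55
print(with_interview)   # 83.0
```

The 55% and 83% shown on the page are consistent with this additive reading.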

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§103: 57.4% (+17.4% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 392 resolved cases.

Office Action

DETAILED ACTION

This Office Action is sent in response to Applicant’s Communication received 11/10/2025 for application number 17/721,727. Claims 1-27 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “controller configured to,” “scheduler configured to,” and “control unit configured to,” in claims 1, 5, 9, 13, 27.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-27 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Luo et al., FullReuse: A Novel ReRAM-based CNN Accelerator Reusing Data in Multiple Levels, see attached NPL.

In reference to claim 1, Luo discloses a data processing system (see architecture in fig. 4, page 179) comprising: a controller configured to receive a neural network operation processing request from a host device (filter weights are separately trained offline, i.e. on a host device, and then received, page 179, and then the architecture can receive an IFM from global buffer, pages 179-80, 182); and an in-memory computing device including a plurality of processing elements (plurality of PEs, page 179) and configured to: receive an input feature map and a weight filter from the controller, and perform a neural network operation in the plurality of processing elements based on the weight filter (CNN operation performed by PEs using weights, page 179-80) and a plurality of division maps generated from the input feature map (IFM is divided into portions matching a window, matching the filter size, that slides over IFM, page 178; operation is then performed with filter weights and the IFM portion within the window, pages 179-80), wherein the in-memory computing device performs the neural network operation by storing a reused element, which is operated at least twice among elements constituting the plurality of division maps during the neural network operation, only in the processing unit that first operated the reused element among the plurality of processing units (the first row of the IFM is reused for a plurality of convolution operations as the window slides across the IFM, page 178, subheading B, and is only loaded onto PE 1.1, fig. 6, page 180).

In reference to claim 2, Luo discloses the data processing system of claim 1, wherein the reused element is input to one of the plurality of processing elements only once (the IFM portion is only input once, see page 179, subheading C).
In reference to claim 3, Luo discloses the data processing system of claim 1, wherein the in-memory computing device performs the neural network operation by performing a plurality of cycles of the neural network operation, each of the cycles being performed by applying the weight filter to a corresponding division map of the plurality of division maps, and wherein the reused element is an element used in at least two of the cycles (plurality of cycles are performed, with the IFM portion being used twice, see page 179, subheading C; for example, rows 4-5 of IFM data used in step 1 are reused in step 2).

In reference to claim 4, Luo discloses the data processing system of claim 1, wherein the in-memory computing device is further configured to generate the plurality of division maps by dividing the input feature map based on a size of the weight filter and a stride as a moving interval of the weight filter (the IFM is divided into portions based on a window that is the filter size, and the stride that slides over IFM, page 178).
In reference to claim 5, Luo discloses the data processing system of claim 1, wherein the in-memory computing device includes: a global buffer in which the input feature map and the weight filter are stored (global buffer stores IFM and weights, page 179); a computing memory including the plurality of processing elements and configured to perform the neural network operation by receiving the plurality of division maps and the weight filter (RRAM subarrays are memory in the PEs that perform the CNN operations, page 179); and a scheduler configured to: store all elements of the weight filter in the processing elements, and distribute and provide the elements of the respective division maps to the processing elements, and wherein the scheduler distributes and provides the elements by: transferring a new element to be initially used in the neural network operation among the elements of the division maps from the global buffer to a corresponding processing element among the plurality of processing elements, and allowing the reused element to be retained in a corresponding processing element, to which the reused element is initially provided among the plurality of processing elements (see pages 178-80: the architecture is configured to distribute and store all the weights in the PEs, and then provide portions of the IFM, from the global buffer, to corresponding PEs, such that the portions of the IFM are reused in subsequent steps).
In reference to claim 6, Luo discloses the data processing system of claim 1, wherein each of the plurality of processing elements includes a plurality of sub arrays (PEs comprise plurality of RRAM subarrays, page 179), and wherein the in-memory computing device performs the neural network operation further by: selecting processing elements corresponding to a number of elements of the weight filter among the plurality of processing elements, distributing and storing the elements of the weight filter in the plurality of sub arrays included in the selected processing elements, and distributing and inputting the elements of the respective division maps to the selected processing elements (see pages 178-80: the architecture is configured to distribute and store all the weights in subarrays in the PEs, and then provide portions of the IFM, from the global buffer, to corresponding PEs, such that the portions of the IFM are reused in subsequent steps).

In reference to claim 7, Luo discloses the data processing system of claim 6, wherein each sub array is configured of an array of memory cells including memristor devices (the RRAM subarrays comprise memristors, page 177).

In reference to claim 8, Luo discloses the data processing system of claim 1, wherein the in-memory computing device performs the neural network operation on each of the plurality of division maps generated by moving, within the input feature map, a convolution window to a row or column direction at a fixed interval, and wherein the reused element is an element overlapping between division maps generated by moving the input feature map in the row direction and/or column direction according to the convolution window (see pages 178-79 and fig. 3: convolution window moves over IFM at a fixed stride, or interval; the portions of the IFM that are reused at the PE are the overlapping portions).
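The sliding-window pattern the examiner maps in claims 4 and 8 can be made concrete. A minimal sketch, not taken from the application or from Luo, that divides an IFM into division maps from a filter size and stride, and flags the overlapping elements that the claims call "reused" (all names here are illustrative):

```python
# Hypothetical illustration of the "division map" pattern in claims 1, 4, 8:
# slide a k x k convolution window over the IFM at a given stride, then
# find the elements covered by two or more windows (the reused elements).

def division_maps(ifm, k, stride):
    """Return, per window position, the element coordinates it covers."""
    h, w = len(ifm), len(ifm[0])
    maps = []
    for r in range(0, h - k + 1, stride):
        for c in range(0, w - k + 1, stride):
            maps.append([(r + i, c + j) for i in range(k) for j in range(k)])
    return maps

def reused_elements(maps):
    """Coordinates appearing in at least two division maps; a scheduler
    would keep these in the PE that first operated on them."""
    seen, reused = set(), set()
    for m in maps:
        for coord in m:
            (reused if coord in seen else seen).add(coord)
    return reused

# 4x4 IFM, 3x3 filter, stride 1 -> four overlapping windows.
ifm = [[r * 4 + c for c in range(4)] for r in range(4)]
maps = division_maps(ifm, k=3, stride=1)
print(len(maps))                        # 4 division maps
print((1, 1) in reused_elements(maps))  # True: center elements overlap
print((0, 0) in reused_elements(maps))  # False: corner seen by one window
```

With stride equal to the filter size the windows tile the IFM without overlap and `reused_elements` comes back empty, which matches the claims' framing of reuse as a consequence of overlapping windows.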
In reference to claim 9, Luo discloses a data processing system comprising: a global buffer in which an input feature map and a weight filter are stored (global buffer stores IFM and filter weights, pages 179-80, 182); a computing memory including a plurality of processing elements (plurality of PEs, page 179) and configured to perform a plurality of cycles of a neural network operation by receiving the weight filter (CNN operation performed by PEs using weights, page 179-80) and a plurality of division maps generated from the input feature map (IFM is divided into portions matching a window, matching the filter size, that slides over IFM, page 178; operation is then performed with filter weights and the IFM portion within the window, pages 179-80); and a scheduler configured to: select processing elements corresponding to a number of elements of the weight filter among the plurality of processing elements, store all elements of the weight filter in the selected processing elements (for a number of PEs required for processing the weights, the weights are all written to the subarrays in each PE, page 180, subheading C), and distribute and store elements of the respective division maps in the selected processing elements, wherein the scheduler distributes and stores the elements of the respective division maps (IFM is divided into portions matching a window, matching the filter size, that slides over IFM, page 178; operation is then performed with filter weights and the IFM portion within the window, pages 179-80) by allowing a reused element, which is operated at least twice among the elements of the division maps during the neural network operation, to be retained in a corresponding single processing element, to which the reused element is initially provided among the plurality of processing elements and the scheduler stores the reused element only in the processing unit that first operated the reused element (the first row of the IFM is reused for a plurality of convolution operations as the window slides across the IFM, page 178, subheading B, and is only loaded onto PE 1.1, fig. 6, page 180).

In reference to claim 10, Luo discloses the data processing system of claim 9, wherein the scheduler distributes and stores the elements of the respective division maps further by: transferring a new element to be initially used in the neural network operation among the elements of the division maps from the global buffer to a corresponding processing element among the plurality of processing elements, and allowing the reused element not to be moved between the plurality of processing elements (a portion of the IFM data that is newly provided after the first step may be reused by a single PE for multiple operations, see page 179, subheading C).

In reference to claim 11, Luo discloses the data processing system of claim 9, wherein each of the plurality of processing elements includes a plurality of sub arrays, and wherein the scheduler stores all elements of the weight filter in the selected processing elements by distributing and storing the elements of the weight filter in the plurality of sub arrays included in the selected processing elements (PEs comprise RRAM subarrays, and global buffer stores IFM portions and weights in the subarrays, page 179).

In reference to claim 12, Luo discloses the data processing system of claim 11, wherein each sub array is configured of an array of memory cells including memristor devices (the RRAM subarrays comprise memristors, page 177).

In reference to claim 13, Luo discloses the data processing system of claim 9, wherein the scheduler is further configured to generate the plurality of division maps generated by moving, within the input feature map, a convolution window to a row or column direction at a fixed interval, and wherein the reused element is reused in any direction of the row and column directions (see pages 178-79 and fig. 3: convolution window moves over IFM at a fixed stride, or interval; the portions of the IFM that are reused at the PE are the overlapping portions).

In reference to claim 14, this claim is directed to a method associated with the system claimed in claim 1 and is therefore rejected under a similar rationale.

In reference to claim 15, this claim is directed to a method associated with the system claimed in claim 2 and is therefore rejected under a similar rationale.

In reference to claim 16, this claim is directed to a method associated with the system claimed in claim 3 and is therefore rejected under a similar rationale.

In reference to claim 17, this claim is directed to a method associated with the system claimed in claim 4 and is therefore rejected under a similar rationale.

In reference to claim 18, this claim is directed to a method associated with the system claimed in claim 5 and is therefore rejected under a similar rationale.

In reference to claim 19, this claim is directed to a method associated with the system claimed in claim 6 and is therefore rejected under a similar rationale.

In reference to claim 20, this claim is directed to a method associated with the system claimed in claim 8 and is therefore rejected under a similar rationale.

In reference to claim 21, Luo discloses a computing system (see architecture in fig. 4, page 179) comprising: a host device (filter weights are separately trained offline, i.e. on a host device, page 179); and a data processing system configured to: generate a plurality of division maps from an input feature map in response to a neural network operation processing request from the host device (IFM is divided into portions matching a window, matching the filter size, that slides over IFM, page 178; operation is then performed with filter weights and the IFM portion within the window, pages 179-80), and perform a neural network operation in a plurality of processing elements based on a weight filter and the plurality of division maps (CNN operation performed by PEs using weights and IFM portions, page 179-80), wherein the data processing system performs the neural network operation by storing a reused element, which is operated at least twice among elements constituting the plurality of division maps during the neural network operation, only in the processing unit that first operated the reused element among the plurality of processing units (the first row of the IFM is reused for a plurality of convolution operations as the window slides across the IFM, page 178, subheading B, and is only loaded onto PE 1.1, fig. 6, page 180).

In reference to claim 22, Luo discloses the computing system of claim 21, wherein the reused element is input to one of the plurality of processing elements only once (the IFM portion is only input once, see page 179, subheading C).
In reference to claim 23, Luo discloses the computing system of claim 21, wherein the data processing system performs the neural network operation by: transferring a new element to be initially used in the neural network operation from a buffer to a corresponding processing element among the plurality of processing elements, and allowing the reused element to be retained in a corresponding processing element, to which the reused element is initially provided, among the plurality of processing elements (a portion of the IFM data that is newly provided after the first step may be reused by a single PE for multiple operations, see page 179, subheading C).

In reference to claim 24, Luo discloses the computing system of claim 21, wherein each of the plurality of processing elements includes a plurality of sub arrays (PEs comprise plurality of RRAM subarrays, page 179), and wherein the data processing system performs the neural network operation further by: selecting processing elements corresponding to a number of elements of the weight filter among the plurality of processing elements, distributing and storing the elements of the weight filter in the plurality of sub arrays included in the selected processing elements, and distributing and inputting the elements of the division maps to the selected processing elements (see pages 178-80: the architecture is configured to distribute and store all the weights in subarrays in the PEs, and then provide portions of the IFM, from the global buffer, to corresponding PEs, such that the portions of the IFM are reused in subsequent steps).

In reference to claim 25, Luo discloses the computing system of claim 24, wherein each of the plurality of sub arrays is configured of an array of memory cells including memristor devices (the RRAM subarrays comprise memristors, page 177).

In reference to claim 26, Luo discloses the computing system of claim 21, wherein the data processing system generates the plurality of division maps by moving, within the input feature map, a convolution window to a row or column direction at a fixed interval, and wherein the reused element is reused in any direction of the row and column directions (see pages 178-79 and fig. 3: convolution window moves over IFM at a fixed stride, or interval; the portions of the IFM that are reused at the PE are the overlapping portions).

In reference to claim 27, Luo discloses an in-memory computing device comprising: processing elements (PEs) (plurality of PEs, page 179) configured to perform a convolution operation on a filter and a division map at each cycle, each PE being configured to perform the convolution operation on an assigned filter element and an assigned map element (CNN operation performed by PEs using weights and portions of the IFM, page 179-80); and a control unit configured to: assign filter elements from the filter to the respective PEs, divide an input map into division maps such that partial map elements are shared by two of the division maps, and assign, at each cycle, map elements from a selected division map to the respective PEs (weights and IFM portions are assigned to PEs, page 179-81), wherein the control unit is further configured to control a selected PE to perform the convolution operation on a re-cycled map element without assigning again the re-cycled map element to the selected PE, at a current cycle, and wherein the control unit assigns the re-cycled map element to the selected PE at a previous cycle and the control unit stores the reused element only in the processing unit that first operated the reused element (the first row of the IFM is reused for a plurality of convolution operations as the window slides across the IFM, page 178, subheading B, and is only loaded onto PE 1.1, fig. 6, page 180).
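The transfer-vs-retain scheduling recited in claims 5, 10, and 23 (transfer only new elements from the global buffer; retain reused elements where they first landed) amounts to tracking what has already been delivered. A minimal sketch under that reading, with illustrative names not drawn from the specification or from Luo:

```python
# Hypothetical model of the claims 5/10/23 scheduling rule: per cycle,
# only elements never delivered before are transferred from the global
# buffer; anything delivered in an earlier cycle stays in place.

def schedule_loads(cycle_maps):
    """For each cycle's division map, return only the elements that must
    be transferred; previously delivered elements are retained in the PE
    that first received them."""
    delivered = set()
    loads = []
    for dmap in cycle_maps:
        new = [e for e in dmap if e not in delivered]
        delivered.update(new)
        loads.append(new)
    return loads

# Two overlapping division maps (window of 3 sliding by stride 1 over a row):
cycles = [[0, 1, 2], [1, 2, 3]]
print(schedule_loads(cycles))  # [[0, 1, 2], [3]] -> elements 1 and 2 retained
```

This is the pattern the examiner reads onto Luo's row reuse: the second cycle triggers only one buffer transfer because the overlapping elements are already resident.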
Response to Arguments

Applicant's arguments filed 11/10/2025 have been fully considered but they are not persuasive.

With respect to the interpretation of limitations under 112(f), Applicant’s argument that the controller and scheduler are shown by the specification to be hardware is not convincing. Applicant’s specification states that, “The controller 320 may be implemented with hardware, software (or firmware), or a combination of hardware and software,” para. 0056 (as filed), and the claim does not recite any structure. For the scheduler, the specification is silent as to whether it constitutes hardware, software, or a combination of both, and the claim does not recite any structural limitations for the scheduler. Therefore, both terms are still being interpreted under 112(f).

With respect to the 102 rejection, Applicant argues that Luo loads rows of the IFM into different PEs at later cycles, and does not store them “only in the processing unit that first operated the reused element among the plurality of processing units.” This is true of Luo for some rows of the IFM, but not all; for example, the first row of the IFM is only loaded into one PE where it is reused. It would be improper to interpret the claim as requiring all reused elements to only be stored in the first PE that operated on that element; first, the claim language only requires “a reused element,” and also, some of the disclosed embodiments also show loading a same element into different PEs, as in fig. 9. Therefore, Luo still anticipates the claim.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew T. Chiusano whose telephone number is (571)272-5231. The examiner can normally be reached M-F, 10am-6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW T CHIUSANO/
Primary Examiner, Art Unit 2144

Prosecution Timeline

Apr 15, 2022: Application Filed
Aug 06, 2025: Non-Final Rejection (§102)
Nov 10, 2025: Response Filed
Feb 14, 2026: Final Rejection (§102), current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596767: ACTIVE LEARNING DRIFT ANALYSIS AND TRAINING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591771: DYNAMIC QUANTIZATION FOR ENERGY EFFICIENT DEEP LEARNING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12561045: CONTENT-BASED MENUS FOR TABBED USER INTERFACE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12547927: DETECTING ASSOCIATED EVENTS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12541686: METHOD AND APPARATUS WITH NEURAL ARCHITECTURE SEARCH BASED ON HARDWARE PERFORMANCE (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 55%
With Interview: 83% (+28.0%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate

Based on 392 resolved cases by this examiner. Grant probability derived from career allow rate.
