Prosecution Insights
Last updated: April 19, 2026
Application No. 17/673,689

COMPUTATIONAL OBJECT STORAGE FOR OFFLOAD OF DATA AUGMENTATION AND PREPROCESSING

Final Rejection (§103)
Filed: Feb 16, 2022
Examiner: BLUST, JASON W
Art Unit: 2132
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 3m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 79% (above average; 220 granted / 277 resolved; +24.4% vs TC avg)
Interview Lift: +16.2% for resolved cases with interview
Typical Timeline: 2y 3m average prosecution; 24 applications currently pending
Career History: 301 total applications across all art units

Statute-Specific Performance

§101: 6.6% (-33.4% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 23.8% (-16.2% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 277 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 9/11/2025 have been fully considered but they are not persuasive. The applicant alleges on pages 7-8 of the remarks that the prior art of Narayanaswamy does not disclose or suggest a system configuration where the preprocessing happens at the external storage, and that the prior art of Bazarsky fails to cure these deficiencies. There is no doubt that Bazarsky cures this deficiency. ¶34 directly addresses the issues raised by the applicant, stating that "machine learning may be implemented using processing components that are integrated with the memory components where the data to be processed is stored, i.e. using 'near memory' computing, so as to reduce the need to transfer large quantities of data from one component to another." Fig. 1 and ¶35 show a storage device capable of performing data augmentation (preprocessing). Narayanaswamy also clearly suggests offloading processing and reducing data transfer in Fig. 6 and C15:28-55, as the remote storage nodes 640 can perform remote requested operations 632 and provide results back to a compute node, and "nodes 640 may filter, aggregate, or otherwise reduce or modify data… in order to lessen the data transferred."

Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant's arguments also do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which the applicant believes the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-22 are rejected under 35 U.S.C. 103 as being unpatentable over Narayanaswamy (US 11,636,124) in view of Bazarsky (US 2020/0401344).

In regards to claims 1, 12, and 21, taking claim 1 as exemplary, Narayanaswamy teaches a system for execution of a distributed application, comprising: (fig. 1A) a processor node to generate a request for data from an external storage and perform iterative processing on the data to train a machine learning model (fig. 1A, C5:11-35, C6:24-54 teach a machine learning model creation system 110 and leader node 105 (processor node) of database system 100, which can request prepared training data that is then used to train the model (i.e. iterative processing of the data); see also fig. 6, C15:1-55, where a compute (processor) node sends a request for remote operations 632 to a remote (external) storage node); the external storage having multiple storage nodes (C14:55-67 and fig. 5 teach that the storage nodes (external storage) can comprise multiple disks (i.e. multiple storage nodes)); wherein the processor node is to generate and send the request to the external storage, the request including a command that identifies a specific request for a type of preprocessing on the data (fig. 6, C15:1-55 teaches that the request 632 is to perform a remote operation on the data stored at the storage node 640); the request to trigger a selected storage node to read the data, preprocess the data with the processor device of the storage node to perform data transformation on the data, and provide preprocessed data to the processor node for the iterative processing (C6:24-54 teaches that nodes 640 may filter, aggregate, or otherwise reduce or modify data (i.e. read and preprocess data) and send the results 632 back to the compute node and further to the leader node, either of which can perform further processing on the data).

Narayanaswamy may not explicitly teach that the multiple disks (i.e. multiple storage nodes) each have a processor device, may not explicitly teach the claimed system in a single embodiment, or that a single processor node both generates the request and performs the iterative processing to train the model. However, Narayanaswamy does teach in C5:11-35 that the model creation system 110 can be part of the database system and executed on a computing resource. C25:12-24 also teaches that the systems and methods can be implemented by hardware/software, that the order of any of the methods may be changed, and that various elements may be added, reordered, combined, omitted, or modified. Bazarsky teaches in fig. 1 that a storage device SSD (i.e. storage node) contains a controller 108 and machine learning controller 116 (i.e. a processor device) that are capable of controlling data augmentation on stored data objects.

As such, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have taken the teachings and suggestions of Narayanaswamy and the teachings of Bazarsky to replace the storage devices of Narayanaswamy with those described by Bazarsky, and to successfully modify the system to have the same components and work as the claimed system. One would have been motivated to modify this system because a more specialized version of the system can be created to more efficiently perform the desired operations. The rationale to support a conclusion that the claim would have been obvious is that "a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and whether there would have been a reasonable expectation of success in doing so." DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006); see also MPEP 2143 part I, section G.

In regards to claims 2 and 18, Narayanaswamy further teaches wherein the processor node includes a software stack with application programming interface (API) extensions for preprocessing (C5:11-35 teaches the use of an API, and C10:58-C11:25 teaches the use of a software protocol stack to establish connections (i.e. generate commands and receive data) between the various components that are part of the network). Narayanaswamy may not explicitly teach generating commands to trigger preprocessing operations by the external storage and to trigger hints or metadata to store data for subsequent preprocessing. Bazarsky teaches generating commands to trigger preprocessing operations by the external storage and to trigger hints or metadata to store data for subsequent preprocessing (¶51 teaches that augmented data can be stored for later use, and ¶95 teaches that augmentation components can respond to commands to obtain, process, and send/store data) (i.e. as data can be commanded to be stored for later use, commands must include hints/metadata to indicate such). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the teachings of Bazarsky to improve the system of Narayanaswamy by incorporating aspects for storage devices to receive commands that trigger preprocessing operations and indicate whether or not to store such preprocessed data for later use. The motivation for making this modification is that data can be modified and stored prior to being required by the ML model, and so can be ready faster than having to perform such modifications "on-the-fly."

In regards to claims 3 and 19, Narayanaswamy further teaches wherein the software stack includes a training application to specify preprocessing functions to offload to the external storage and data storage hints or metadata to external storage (C5:11-35 teaches the use of an API, and C10:58-C11:25 teaches the use of a software protocol stack to establish connections (i.e. generate commands and receive data) between the various components that are part of the network; C6:24-54 teaches that the requests can contain the preprocessing operations (i.e. metadata/hints) to be performed on the data).

In regards to claim 4, Bazarsky further teaches wherein the commands further trigger hints to store data in preparation for the subsequent preprocessing (¶53 teaches that an initial set of data can be received and stored for subsequent augmentation (preprocessing)).

In regards to claims 5 and 13, Bazarsky further teaches a computational object storage system as the external storage (fig. 1 shows storage, external to the host, that contains a controller 108 and machine learning controller 116 that are capable of controlling data augmentation on stored data objects (i.e. it is a computational object storage system)).

In regards to claim 6, Bazarsky further teaches wherein the external storage is to store the data on a single storage node for subsequent preprocessing (fig. 4 teaches that an initial set of images (data) can be stored on a single NAND array (single storage node)).

In regards to claims 7 and 14, Bazarsky further teaches wherein the storage node is to perform on-demand preprocessing in response to the request for the data (¶51 teaches that the augmented data may be transient and is discarded after being used (i.e. it is on-demand, short-term data)).

In regards to claims 8 and 15, Bazarsky further teaches wherein different storage nodes of the multiple storage nodes are to apply different on-demand preprocessing operations based on the data requested (¶33 teaches that there are a plurality of different augmentation (preprocessing) operations that can be performed; as the modified system would contain many of these devices, they could each be commanded to perform different preprocessing operations in parallel).

In regards to claim 9, Narayanaswamy further teaches wherein the request for the data comprises a data request command to indicate preprocessing operations to perform on the data prior to providing the preprocessed data to the processor node (C6:24-54 teaches that the request can indicate the preprocessing operations to be performed).

In regards to claim 10, Bazarsky further teaches wherein the external storage is to store data in the storage nodes in response to a data store command, where the data store command is to indicate to the external storage how to store data to support application of preprocessing (¶50 teaches that the storage system can take ECC and BER information into account when deciding where to store the data in order to support preprocessing that includes adding noise to images).

In regards to claims 11 and 20, Narayanaswamy may not explicitly teach wherein to preprocess the data includes one or more of: image decoding; image resizing; execution of a random crop; execution of a random horizontal flip; normalization of an image to preconfigured image parameters; or image transposition. Bazarsky further teaches in fig. 3, step 306, that altered versions (i.e. preprocessing) of images can comprise rotating, translating, skewing, cropping, flipping, and adding noise. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated the teachings of Bazarsky to improve the system of Narayanaswamy to include specific preprocessing of images. This improves the system as it allows for enhanced training of machine learning algorithms that require images for training data.

In regards to claim 16, Narayanaswamy further teaches wherein the storage controller comprises a controller external to the storage nodes, or a controller distributed on the storage nodes (C14:55-67 teaches the storage nodes may be implemented as a RAID (i.e. a RAID controller external to the storage disks (storage nodes)); C25:12-24 teaches that the systems and methods can be implemented by hardware/software, that the order of any of the methods may be changed, and that various elements may be added, reordered, combined, omitted, or modified. As such, the storage controller can have a controller external to the storage nodes, or a controller distributed on the storage nodes).

In regards to claim 17, Narayanaswamy may not explicitly teach wherein the storage controller is to store data in the selected storage node in response to a data store command, where the data store command is to indicate how to store data to support application of preprocessing. Bazarsky in ¶50 teaches that the storage system can take ECC and BER information into account when deciding where to store the data in order to support preprocessing that includes adding noise to images. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated the teachings of Bazarsky to modify the system of Narayanaswamy such that, when the indicated preprocessing is adding noise to data (images), the data is stored in areas with higher BER and/or ECC is removed from those areas. The motivation is that the data, when read from the device, will already have been transformed into the desired "noisy" variant by virtue of where it was stored on the drive, saving other processing from having to perform this operation.

In regards to claim 22, Narayanaswamy further teaches wherein generating the request comprises identifying a preprocessing operation to perform on the data by the storage node (C4:32-60 teaches that the machine learning model may have the preprocessing operations that are required to be performed on the data embedded as part of the machine learning model creation system).
EXAMINER'S NOTE

Examiner has cited particular paragraphs, figures, and/or columns and line numbers in the references applied to the claims above for the convenience of the Applicants. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. Applicants are respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON W BLUST, whose telephone number is (571) 272-6302. The examiner can normally be reached 12-8:30 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hosain Alam, can be reached at (571) 272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JASON W BLUST/
Primary Examiner, Art Unit 2132
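For readers less familiar with the claimed architecture, the dispute above centers on a command-driven offload pattern: a processor node sends a data request naming preprocessing operations, and the storage node's local processor applies them (e.g. random crop, horizontal flip, normalization, per claims 11 and 20) before returning the data for training. The following is a minimal, hypothetical sketch of that pattern; all names (StorageNode, PREPROCESS_OPS, handle_request) are invented for illustration and do not come from the application or the cited references.

```python
# Hypothetical sketch of command-driven preprocessing offload to a storage node.
# Images are modeled as 2D lists of pixel values; all names are illustrative only.
import random

def random_crop(image, size):
    """Crop a size x size window at a random offset from a 2D image."""
    h, w = len(image), len(image[0])
    top = random.randint(0, h - size)
    left = random.randint(0, w - size)
    return [row[left:left + size] for row in image[top:top + size]]

def horizontal_flip(image):
    """Mirror each row of the image (random horizontal flip, applied unconditionally here)."""
    return [row[::-1] for row in image]

def normalize(image, scale=255.0):
    """Map pixel values into [0, 1] (normalization to preconfigured parameters)."""
    return [[px / scale for px in row] for row in image]

# Dispatch table: operation name in the request -> transformation to run locally.
PREPROCESS_OPS = {
    "random_crop": lambda img, params: random_crop(img, params["size"]),
    "horizontal_flip": lambda img, params: horizontal_flip(img),
    "normalize": lambda img, params: normalize(img),
}

class StorageNode:
    """Storage node whose local processor can transform data before returning it."""
    def __init__(self, objects):
        self.objects = objects  # object key -> raw image data

    def handle_request(self, key, ops):
        """Read the object, apply each requested preprocessing op, return the result."""
        data = self.objects[key]
        for op in ops:
            data = PREPROCESS_OPS[op["name"]](data, op.get("params", {}))
        return data

# Processor-node side: one request carries both the data identifier and the
# preprocessing commands, so only the reduced, transformed data crosses the wire.
node = StorageNode({"img0": [[0, 64, 128, 255]] * 4})
batch = node.handle_request("img0", [
    {"name": "random_crop", "params": {"size": 2}},
    {"name": "horizontal_flip"},
    {"name": "normalize"},
])
print(batch)  # a 2x2 image with values in [0, 1]
```

The design point the claims (and Narayanaswamy C6:24-54) emphasize is visible in `handle_request`: the transformation runs where the data lives, so the processor node receives a smaller, already-prepared batch rather than raw objects.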

Prosecution Timeline

Feb 16, 2022
Application Filed
Apr 12, 2022
Response after Non-Final Action
Jun 09, 2025
Non-Final Rejection — §103
Sep 11, 2025
Response Filed
Nov 21, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596485
HOST DEVICE GENERATING BLOCK MAP INFORMATION, METHOD OF OPERATING THE SAME, AND METHOD OF OPERATING ELECTRONIC DEVICE INCLUDING THE SAME
2y 5m to grant Granted Apr 07, 2026
Patent 12554417
DISTRIBUTED DATA STORAGE CONTROL METHOD, READABLE MEDIUM, AND ELECTRONIC DEVICE
2y 5m to grant Granted Feb 17, 2026
Patent 12535954
STORAGE DEVICE AND OPERATING METHOD THEREOF
2y 5m to grant Granted Jan 27, 2026
Patent 12530120
Maximizing Data Migration Bandwidth
2y 5m to grant Granted Jan 20, 2026
Patent 12530118
DATA PROCESSING METHOD AND RELATED DEVICE
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 96% (+16.2%)
Median Time to Grant: 2y 3m
PTA Risk: Moderate
Based on 277 resolved cases by this examiner. Grant probability derived from career allow rate.
