Prosecution Insights
Last updated: April 19, 2026
Application No. 18/545,579

SYSTEM AND METHOD FOR CUSTOMIZABLE LARGE DATA LOADING

Non-Final OA (§103, §112)
Filed: Dec 19, 2023
Examiner: VU, TUAN A
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: Wells Fargo Bank, N.A.
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 73% (718 granted / 980 resolved; +18.3% vs TC avg), above average
Interview Lift: +21.4% on resolved cases with an interview (a strong lift)
Typical Timeline: 3y 5m average prosecution; 31 applications currently pending
Career History: 1,011 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 980 resolved cases.

Office Action

Grounds: §103, §112
DETAILED ACTION

This action is responsive to the Application filed 12/19/2023. Accordingly, claims 1-20 are submitted for prosecution on the merits.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-6, and 15-20 are rejected under 35 U.S.C. § 103 as being unpatentable over Yazicioglu et al., USPubN 2018/0210935 (herein Yazicioglu), in view of Aya et al., USPubN 2024/0427755 (herein Aya), Underwood, Jr. et al., USPN 12/321,323 (herein Underwood), and Boggs, USPubN 2006/0209691 (herein Boggs).

As per claim 1, Yazicioglu discloses a system comprising: one or more processors; and a loading system (see data importer below; data importer – para 0101, 0129, 0131) executable on the one or more processors and configured to: receive as input a large data file (data importation … received electronic data file … file may be transformed into an XML format by a data analysis – para 0022); partition the large data file into a plurality of smaller partitioned data files (XML, JSON files – para 0036; data streams … U.S. bank wire transfer transaction – para 0043; transform the data into one or more formats such as … XML, JSON – para 0054; transformed electronic files – para 0055; may transform an .xlsx file to one or more files in .xml format … import the .xml file(s) into the data analysis system – para 0058. Note 1: a transformer stage that converts a received file into one or more JSON or XML files (.xml file(s) – para 0058), passed from the importer/transformer to the analysis stage (transform data … into an XML format as required by a data analysis system – para 0022), reads on partitioning a larger file into smaller partitioned files that are passed to the analysis system for aggregation and report, such that all the records together constitute the desired information – see para 0026); generate, for each partitioned data file, a data schema (JSON files – para 0036; para 0054) based on an automated analysis (imported into an analysis system – para 0022) of each partitioned data file (XML formats – para 0022; JSON files – para 0036; XML files, JSON files – para 0043; transformer may transform the data into one or more formats … XML, JSON – para 0054); load each partitioned data file into a data store (data included in the transformation file is capable of being imported into one or more of … databases – para 0044) external (source file 212 → data detector 210 → transformer 218 → database 220 – Fig. 2C; data importer 130, uploaded to database 212 – para 0039; database or file system external to data importer 130 – para 0056) to the loading system (Note 2: an importer/transformer unit from which transformed data, in XML/JSON format, is uploaded or loaded to a recipient database reads on the recipient database being external to the importer or transformer system that loads the various information/data received from sources – see para 0043) based on the data schema; and wherein the loading system is provided as a user-configurable loading component (transformation template 640 – Fig. 6; user input … transformation template, data field mappings – para 0023; feedback to users on how the transformed file will look, histogram view, UI cues to show errors – para 0024; I/O devices … provide users with … capability to input data and instructions to data importer – para 0038; input received from a user … associated properties that the user may input with regard to the uploaded file, importer GUI that allows a user to select … data files … to be uploaded, prompt the user to complete one or more form fields associated with … files to be uploaded – para 0039) used for developing a computer program (users … are provided with the capability to write software … by building out … transformations for the data stored – para 0024; computer program, method 500, data importer 130 – para 0101; Fig. 2B; method 900, computer programs, data importer 130 – para 0129; Fig. 2C; method 1000, computer program, data importer 130 – para 0131; Fig. 2D).

A) Yazicioglu does not explicitly disclose the loading system as a cloud loading system. Aya discloses a cloud-based system for executing commands per permission-based requests associated with loading or unloading of data (cloud storage provider system 104-1 – Fig. 3) to a storage location pertinent to the cloud provider system (para 0032-0034, 0036). Underwood also discloses migration of data associated with containers of respective tenants in a cloud system (col. 24 li. 13-29), by which migration of data between containers can be initiated via an established connection between cloud tenants so that validation of the data can be performed once the data is migrated, including embodiments such as legacy systems and collaboration tools of a cloud-based platform (col. 24 li. 34-41), in which data may be migrated via cloud environments (col. 25 li. 51-53) from cloud storage or NAS (col. 28 li. 10-18), the cloud storage (col. 33 li. 11-25) being used for data migration including a data warehouse or database (col. 34 li. 30-42) that receives transformed/converted data to make it compliant with the target system.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the transformation and conversion of business data in Yazicioglu so that transformed data, or XML-converted schema versions thereof, can be subjected to cloud loading or storage migration as set forth in Aya and Underwood, because cloud-based systems support a large variety of applications, multi-tiered infrastructures, businesses, and networked services across a large scope and type of tenants, which in turn can use cloud-based tools, subsystems, microservices, and network devices to establish communication and connectivity among subscribers or tenants, using validation, permission checks, and format-compliant transform mechanisms to allow information passing as well as data migration between the respective tenants' environments or data stores, as evidenced in Aya and Underwood above.
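To make the claimed flow concrete, the partition and schema-generation steps can be sketched in code. This is a hypothetical illustration only, not an implementation from any cited reference: the newline-delimited JSON input format, the function names, and the `part_*.txt` naming convention are all assumptions made for the example.

```python
import json
import os


def partition_file(path, records_per_part, out_dir):
    """Split a large newline-delimited data file into smaller partition files.

    Hypothetical sketch of the claimed partitioning step; one record per
    input line is an assumed convention.
    """
    os.makedirs(out_dir, exist_ok=True)
    parts, buf, idx = [], [], 0
    with open(path) as src:
        for line in src:
            buf.append(line)
            if len(buf) == records_per_part:
                parts.append(_write_part(out_dir, idx, buf))
                buf, idx = [], idx + 1
        if buf:  # flush the final, possibly short, partition
            parts.append(_write_part(out_dir, idx, buf))
    return parts


def _write_part(out_dir, idx, lines):
    part_path = os.path.join(out_dir, f"part_{idx:05d}.txt")
    with open(part_path, "w") as dst:
        dst.writelines(lines)
    return part_path


def infer_schema(part_path):
    """Derive a minimal JSON-style schema by sampling the first record.

    A stand-in for the "automated analysis" of each partitioned file; a real
    system would sample many records, not just one.
    """
    with open(part_path) as f:
        first = json.loads(f.readline())
    return {
        "type": "record",
        "fields": [{"name": k, "type": type(v).__name__}
                   for k, v in first.items()],
    }
```

For a 10-record input split 4 records per part, this yields three partition files, each with an inferable per-partition schema.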
B) Yazicioglu does not explicitly disclose processors in the loading and partitioning system being configured to: generate a control file for each partitioned data file containing a record count; and validate that the external data store has received all records in each of the partitioned data files based on the control file.

Yazicioglu discloses generating metadata in the data transformer stage once the data files have been transformed, so as to store information (time and date of transformation) regarding the one or more transformed data files (para 0055), as well as metadata generated via a data updater associated with replacing a source file in the database with one of the transformed files from the importer stage, where the metadata associated with the update includes a redacted version and case file identifier (para 0066-0067). Hence, generating metadata indicative of the time/date of transforming a file (destined for loading) and indicative of a file ID and version (destined for a DB replacement) entails generating meta-information specifying the time of a transform process, and the version and ID, for each file undergoing an upload/replacement process.

Use of a structure to record metadata associated with loading and migration of files into a repository or storage is shown in Boggs' warehousing or DB loading, in that extracted data from various sources are loaded to warehouses (or databases) using routing and aggregation techniques (para 0047) associated with OLAP analysis (para 0055), where the metadata, conditions for transformation, and rules defined for each content destined for extraction-loading are laid out in a control structure (e.g., Figs. 13, 15), which is used by the loading process to ensure that the extracted content is complete for further processing (para 0013-0015). The control structure includes metadata elements (Fig. 10), a number of class instances comprising business rules (para 0093), a unique ID, handling rules and/or cross-references of source content to be transformed and loaded (para 0098), and locations in storage where the data is to be housed, to ensure for each content that a verification process is applied so that each extraction and loading is consistent with the conditions and rules in the control structure (para 0101-0102), in regard to observing time/bandwidth usage and time intervals for measurements (Fig. 13) set within the control structure in accordance with the section field, time period field, status field, and last-incoming field set therein (Fig. 15), to ensure that the warehouse receives consistent and reliable data feeds (para 0123) over a predetermined schedule, thereby allowing network data to be completely received before further processing thereof is started (para 0010). Hence, provision of a control structure/file to preregister identification and time requirements and arrival status for each content, ensuring that extraction and loading of each content is fully complete before the content becomes eligible for further processing by the warehouse or database, is recognized.

Underwood discloses legacy/source system dashboard control file(s), in CSV format with naming conventions, ingested into a migration system to enable tracking and traceability of data files destined for migration between legacy/target systems (col. 18 li. 46 to col. 19 li. 2), where data load and movement control is provided as metadata recording metrics of a given data set, the volume thereof to be verified against the expected volume from the control files, the latter mirroring a receipt describing a package destined for migration which is defined by said metadata, e.g., the time of the data load, job name, target system table, and record counts or summable amounts, all part of the control data collected and transmitted during a migration process, then pulled with the loading for reporting (col. 19 li. 43 to col. 20 li. 10), via a control dashboard enabling tracking of control file totals across stages of the migration until the migration culminates at the respective targets (col. 20 li. 11-19). Such a control framework includes transformation to a format compliant with the target system, to prepare data for ingestion thereto, using control feedback to flag deficiencies in the data transfer and to ensure normalization between source and target system in that all transferred data are accounted for without deviation or gap, as part of the reconciliation effect of the dashboard throughout the migration (col. 21 li. 47 to col. 22 li. 29); i.e., ensuring and regulating that all data from source/legacy systems migrate or are properly converted to a target system (col. 22 li. 61-65) under the control metrics and metadata capture that form part of the control file, which is used to map metrics between a legacy system and a target system in association with high-capacity, high-volume migration (col. 23 li. 16-25). Hence, a reconciliation dashboard using control file(s) configured to ensure that migration of data between source and target systems is fully accounted for, via comparing metadata metrics recorded from the source data with the expected metrics in the control file, entails validation under a control file of a migration completion state, so as to detect gaps or discrepancies in the transfer (comparing recorded migration volumes against expected volumes) and to ensure that all individual data expected to migrate are accounted for and transformed compliant with the target system.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the data transformation and format-compliant transfer to a target repository in Yazicioglu so that processors in the loading and business-data transformation/reformatting system are configured to: 1) generate a control file for each partitioned data file containing a record count or data volume, as in Underwood's registering of expected volume in a control file; and 2) validate that the external data store (as in Boggs' warehouse or Underwood's target systems) has received all records (e.g., a complete-migration check via the control structure in Boggs and the dashboard verification in Underwood) in each of the partitioned data files (e.g., the transformed files in XML or JSON format in Yazicioglu), based on the control file attached to and used in the validation of migration volume (e.g., per Underwood's dashboard control file for reconciliation of source and target volumes under migration). This is because data received into an importer/loading system as in Yazicioglu can be subjected to unpredictable and numerous types of alteration and reformatting during transition between various target or recipient environments, and in view of the likely disparity or discrepancies between the nature, state, or volume of data associated with the respective sources and their recipient context, a control in the form of a pre-configured setting (a control structure or file) attached to the course of migration of files or volumes, notably those subjected to transformation (or reformatting for compliance with a recipient target repository as set forth in Yazicioglu), would not only enable the original state, count, and identification of each individual file prior to its migration or loading to be registered/captured as a reference for use with metrics correlating with the corresponding control file (per Underwood or Boggs) attached to each individual file slated for a runtime migration or a loading to target storage, but would also ensure, via runtime application of the control file, that all data, IDs, and sizes after a completed migration are consistent with the expected volume, ID, and state; the integral state of each migrated or loaded data item achieved by a controlled migration would facilitate formation of a reliable ensemble deemed suited for further processing at the recipient environment, or re-assembly and reconstruction of the decomposed, partitioned parts obtained from the importer stage of the migration as set forth in Yazicioglu.

As per claim 2, Yazicioglu discloses the system of claim 1, comprising an integrated development environment (IDE: users … are provided with the capability to write software … by building out … transformations for the data stored – para 0024; Figs. 7-8. Note 3: an environment equipped with a UI enabling users to interact with external sources and to select, input, configure, and write software reads on an IDE) including a graphical user interface (GUI; Figs. 7-8) configured to: display a plurality of user-configurable components (transformation template 640 – Fig. 6; user input … transformation template, data field mappings – para 0023; feedback to users on how the transformed file will look, histogram view, UI cues to show errors – para 0024; I/O devices … provide users with … capability to input data and instructions to data importer – para 0038; input received from a user … associated properties that the user may input with regard to the uploaded file, importer GUI that allows a user to select … data files … to be uploaded, prompt the user to complete one or more form fields associated with … files to be uploaded – para 0039) including the user-configurable cloud loading component (refer to rationale A of claim 1); and receive a user selection placing (input data and instructions to data importer – para 0038; input received from a user … associated properties that the user may input with regard to the uploaded file, importer GUI that allows a user to select … data files … to be uploaded – para 0039) the user-configurable cloud loading component into the computer program (Fig. 2B; method 900, computer programs, data importer 130 – para 0129; Fig. 2C; method 1000, computer program, data importer 130 – para 0131; Fig. 2D), wherein the computer program (users … are provided with the capability to write software … by building out … transformations for the data stored – para 0024) is developed using the IDE.

As per claim 3, Yazicioglu discloses the system of claim 2, wherein the IDE is further configured to: display an input connector (graphical display … hyperlinks for data importation on the interactive GUI – para 0072) on the user-configurable cloud loading component for connecting to one or more among the plurality of user-configurable components (refer to claim 1) to receive the large data file (e.g., spreadsheet file, .xml file(s) – para 0058); and display a dialog box for the cloud loading component to receive user-entered parameters including log file locations (receive source … data files … IP logs from service providers – para 0036, 0043; logs stored in a text file – para 0059), credentials for accessing the external data store (access permissions assigned to each user may be provided to the analysis systems or databases to which imported electronic data are sent … limiting user access … to the imported data to only that to which they have been given access – para 0040), or a combination thereof.

Yazicioglu does not explicitly disclose the user-entered parameters as including block size, control file locations, or a combination thereof. Provision of the size or volume of each content to be loaded or transferred to external storage or a migration target is shown in Underwood's interactive legacy dashboard for users to specify parameters (e.g., container volume) for migration of data between legacy and destination systems, such as volume (size of a particular dataset – col. 34 li. 64 to col. 35 li. 1) as part of metadata (loading a user list into a migration tool – col. 24 li. 16-25) metrics used to correlate with the expected quantities set in a control file (col. 19 li. 43-51), with the location of the data (structured data may include … addresses, stock information, and geolocation – col. 35 li. 54-61) to migrate specified or listed by the user (list into a migration tool – col. 24 li. 16-25), said structured data stored at a location and attached with a control file that travels with such packaged data (col. 26 li. 45-61); hence, specifying the location of a control file along with the geolocation of the data to migrate is recognized.
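The record-count control file at issue in this rejection can be sketched as follows. This is a minimal hypothetical illustration, not code from any cited reference: the JSON sidecar format and the `.ctl.json` naming convention are assumptions made for the example.

```python
import json
import os


def write_control_file(part_path):
    """Emit a control file recording the expected record count for one
    partitioned data file, assuming one record per line.

    Sketch of the claimed control-file generation step; a production system
    would likely also record checksums, timestamps, and job identifiers, as
    in the dashboard control data described for Underwood.
    """
    with open(part_path) as f:
        count = sum(1 for _ in f)
    control = {
        "data_file": os.path.basename(part_path),
        "expected_records": count,
    }
    ctl_path = part_path + ".ctl.json"  # sidecar travels with the partition
    with open(ctl_path, "w") as f:
        json.dump(control, f)
    return ctl_path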
Therefore, based on the role of a user interacting with a legacy dashboard UI for inputting the properties and data necessary for the importer operations, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the dashboard collection of user data so that the user-entered parameters configuring data import and cloud loading also include block size and control file locations, in terms of a) the volume or size of each individual data set to be transformed for compliant transfer/loading to external storage, and b) the location where a corresponding control file can be attached and invoked for a verification process (as in Underwood) that accounts for complete transfer of all needed volumes as intended by the user's specifications via interactions with the cloud loading dashboard UI or the entry fields presented thereby. Criteria entered by the user as metadata, setting up the conditions and the qualitative or metric requirements against which the actual migration state of a given dataset is correlated with the expectations established in a control file pre-configured for that dataset, would not only allow identification and retrieval of the correct control file, whose invocation within the loading runtime of a given dataset yields validation results specific to the intended dataset and the settings entered by the user, but would also consolidate the validation outcome by comparing those expectations with the actual metrics captured during loading. That is, for a given volume or size set as a metric or criterion at the outset, the respective control file associated with the corresponding dataset would be able to attest and confirm that all volumes and data sizes set for the loading have been accounted for during the transfer, and aggregating the cumulative validation results from running each individual dataset loading against its corresponding control file would facilitate determination of the success or failure status of the overall data loading or migration, as intended by the dashboard user input in coordination with the pre-established metrics in the control files.

As per claim 5, Yazicioglu discloses the system of claim 2, wherein the GUI is configured to display the plurality of configurable components (refer to claim 1), including the configurable cloud loading component, in a treeview control (tabular canonical format may transform an XML tree structure into rows and columns … user may … identify which tree elements are to be assigned to rows of the tabular data structure and which sub-elements are to be assigned to columns of the tabular data – para 0081).

As per claim 6, Yazicioglu discloses the system of claim 1, wherein the data schema is a JavaScript Object Notation (JSON)-based data schema (XML files, JSON files – para 0036, 0043; transformer may transform the data into one or more formats … XML, JSON – para 0054).

As per claims 15-16, Yazicioglu discloses the system of claim 1, wherein the data store comprises a cloud-based data store storing data (refer to rationale A of claim 1) accessible by a plurality of entities (refer to claim 16; para 0028, 0046-0047), wherein the plurality of entities comprise financial entities (banks – para 0028), regulatory entities (government, law enforcement, security agencies – para 0028), private entities (insurance companies, non-profits, educational institutions, research groups – para 0028), or a combination thereof.
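The validation step recited in the claims (confirming that the external data store received all records in a partition, based on its control file) can be sketched as a simple reconciliation check. This mirrors the expected-vs-actual volume comparison discussed for Underwood's dashboard, but it is an illustrative sketch only: the control-file JSON layout is a hypothetical format, and the store-side count is passed in as a parameter rather than queried from a real database.

```python
import json


def validate_load(ctl_path, loaded_record_count):
    """Compare the record count reported by the external data store with
    the expected count registered in the control file.

    Returns a small reconciliation report; ``complete`` is True only when
    every expected record is accounted for at the target.
    """
    with open(ctl_path) as f:
        control = json.load(f)
    expected = control["expected_records"]
    return {
        "data_file": control["data_file"],
        "expected": expected,
        "loaded": loaded_record_count,
        "complete": loaded_record_count == expected,
    }
```

Aggregating these per-partition reports gives the overall success/failure status of the load, in the manner the rejection attributes to the combined references.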
As per claim 17, Yazicioglu discloses a non-transitory machine-readable medium for storing instructions that, when executed by a computer system, cause the computer system to perform operations comprising: receiving as input a large data file; partitioning the large data file into a plurality of smaller partitioned data files; generating, for each partitioned data file, a data schema based on an automated analysis of each partitioned data file; generating a control file for each partitioned data file containing a record count; loading, via a cloud loading system, each partitioned data file into a data store external to the cloud loading system based on the data schema; and validating that the data store has received all records in each of the partitioned data files based on the control file, wherein the cloud loading system is provided as a user-configurable cloud loading component used for developing a computer program. (all of which having been addressed in claim 1) As per claim 18, Yazicioglu discloses non-transitory machine-readable medium storing instructions of claim 17, comprising further operations to: displaying, via a graphical user interface (GUI) included in an integrated development environment (IDE), a plurality of user-configurable components including the user- configurable cloud loading component; and receiving, via the GUI, a user selection placing the user-configurable cloud loading component into the computer program, wherein the computer program is developed using the IDE. 
(all of which having been addressed in claim 2) As per claim 19, Yazicioglu discloses a method, comprising: receiving as input a large data file; partitioning the large data file into a plurality of smaller partitioned data files; generating, for each partitioned data file, a data schema based on an automated analysis of each partitioned data file; generating a control file for each partitioned data file containing a record count; loading, via a cloud loading system, each partitioned data file into a data store external to the cloud loading system based on the data schema; and validating that the data store has received all records in each of the partitioned data files based on the control file, wherein the cloud loading system is provided as a user-configurable cloud loading component used for developing a computer program. (all of which having been addressed in claim 1) As per claim 20, refer to rejection of claim 2. Claims 4 is/are rejected under § 35 U.S.C. 103 as being unpatentable over YaziCioglu et al, USPubN: 2018/0210935 (herein Yazicioglu) in view of Aya et al, USPubN: 2024/0427755 (herein Aya), Underwood, Jr et al, USPN: 12/321323(herein Underwood) and Boggs, USPubN: 2006/0209691 (herein Boggs) further in view of KR 101907422 (translation), 10-12-2018, 9 pgs (herein ‘422) As per claim 4, Yazicioglu does not explicitly disclose system of claim 3, wherein the IDE is further configured to compile the computer program into an executable program that is executable on the one or more processors. Use of a development environment equipped with UI and user input as in Yazicioglu to import and transform data, and parameterize its loading and develop programming code therefor is further evidenced in ‘422 integrated development apparatus wherein a storage unit environment, input module, management unit and object code unit operate to respectively classify and store format, manage attributes information (pg. 
3) and generate intermediate representation based on a graphical or description language source (pg. 4) then compile packet frames thereof into object code compliant with HW of the destination storage, the generated executable to be loaded or uploaded onto HW of the storage unit (pg. 4) via a code loading module (pg. 5). Hence, IDE equipped with compiler to convert managed configuration, attributes of transformed data/format into object code compliant with HW of a storage system is recognized. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement the code generating by users in Yazicioglu’s importer environment so that retrieval and retransformation of data, coupled with user parameterization and code writing thereof underly the flow steps of a integration environment equipped with UI to support code generation, the latter as evidenced in the IDE by ‘422 by which parameterization of transformed format (from description language source) is converted via a compiler into object code compliant for use or executed by processor at the destination target; because Use of a compiler associated with a integration environment such as a IDE operating with receipt of data source, converting it into another format, coupled with UI and user input to manage transformation and parameterization and representation of the changes, and translate this representation into object code via use of compiler would not only enable the developing users to deploy respective data configurations or transformations as observed, manipulated and managed using integration capabilities and tools of the IDE into a programmatic representation that is translatable into binary form designated for execution of processing HW at the destination environment; but would also integrate user input, setting with language compliancy and requirements of the programming language in which user code is written in order to instruct a 
pertinent compiler to render object code that would particularly suit with a native context of target environments in which the generated code is to be deployed. Claims 7 is/are rejected under § 35 U.S.C. 103 as being unpatentable over YaziCioglu et al, USPubN: 2018/0210935 (herein Yazicioglu) in view of Aya et al, USPubN: 2024/0427755 (herein Aya), Underwood, Jr et al, USPN: 12/321323(herein Underwood) and Boggs, USPubN: 2006/0209691 (herein Boggs), further in view of Yang et al, CN 113489672 (translation) 05-17-2022, 12 pgs (herein Yang) and Khan, USPubN: 2021/0373860 (herein Khan) As per claim 7, Yazicioglu does not explicitly disclose system of claim 6, wherein the JSON-based data schema includes data types comprising null, Boolean, int, long, float, double, bytes, string, record, enum, array, map, union, fixed, Names, Namespaces, or a combination thereof. Khan discloses a compilation platform in form of RestAPI where parsing YAML document provide Swagger specifications to be imported, loaded into a intent compiler, the YAML package expressed in JSON format where JSON constructs thereof include object classes, string type (para 0115, 0118-0119; para 0070, 0325) with metadata mapped from the JSON dictionary(para 0122) for bridging with the API calls handled by a API Bridge, the Swagger/YAML constructs including constructor to handle null-dereferencing(no-null-object option - para 0124), Boolean (para 0186), int, double (para 0082) float (para 0083), record or enum (para 0083, 0087, 0307, 0325), Names (para 0086), array, byte, map (para 0070, 0083, 0087), constant/fixed (implemented as constants – para 0112; variable or constant – para 0258, pg. 33), Namespaces (para 0104, 0278, 0310). Yang discloses use of a filtering, sniffer module (see Abstract) in combination with extracting JSON Hyperschema as standards proposed for use in filtering, classifying API calls on a sharing platform for permission control and error correction (pg. 
3), where integration and provision of services of call intercepting is implemented via REST API mainstream responsible for data retrieval and filtering, the JSON schema providing standards and fields to validate Requests and Responses classes, in terms of types and definitions of primitives, class objects that include Nulldef, ArrayDef, UnionDef, array type (bottom pg. 8, top pg. 9); i.e. the schema providing Union, Boolean, number, string, null, composite, array type (bottom pg. 5, top pg. 6) Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement the XML and Json files in the importer environment of Yazicioglu so that this particular markup specifications include definition standards and types for use in implementing source code or API calls, the type including Boolean, int, long, float, double, bytes, string, fixed, map, Names, Namespaces (as in Khan) enum, array, map, union, null, (as in Yang), or a combination thereof; because These types when specified, defined from JSON, XML based format as reference standard or rules can extracted and applied into a development context according to which, programmatic formation and parameterization of API, method calls – as set forth in Yang and Khan - can be made compliant with known and accepted standards and implemented with minimized risk of runtime fault and type violation. Claims 8-9 is/are rejected under § 35 U.S.C. 
103 as being unpatentable over YaziCioglu et al., USPubN 2018/0210935 (herein Yazicioglu) in view of Aya et al., USPubN 2024/0427755 (herein Aya), Underwood, Jr. et al., USPN: 12/321323 (herein Underwood) and Boggs, USPubN 2006/0209691 (herein Boggs), further in view of Lund, USPubN 2008/0212845 (herein Lund) and Blagay et al., USPubN 2021/0149684 (herein Blagay).

As per claims 8-9, Yazicioglu does not explicitly disclose the system of claim 1, (i) wherein generating, for each partitioned data file, the data schema based on an automated analysis of each partitioned data file comprises reading one or more records in each partitioned data file to identify an overall file layout and data types of the records, and creating the data schema based on the overall file layout and data types; or (ii) wherein reading one or more records comprises reading a file header for each partitioned data file, the file header comprising metadata describing a layout for each partitioned data file.

Blagay discloses a markup language representing bundle entries for use in mapping tiles and block representations of database records (Abstract; Fig. 3), the bundle (para 0040, 0051) having a schema format with rules and types specified in header fields and metadata tags, including a listing in the header layout (table 1, pg. 9) for indexed entries of encompassed meta-information, where some of the metadata is provided in JSON format to bind rules of the bundle with the SQLite records (SQLite schema, metadata.json – Table 1, bottom pg. 6). Hence, reading a schema format – e.g. metadata.json – having rules and types specified in tags and a header layout to express entries, definitions, and meta-information representing a reference bundle for use in mapping target tile representations of DB records is recognized.
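The header-driven schema creation at issue in claims 8-9 can be illustrated with a minimal sketch. This is a hypothetical example, not the applicant's or any cited reference's actual implementation: it assumes each partitioned file begins with a JSON metadata line describing the file layout and field types, and the function name `read_schema_from_header` and the header field names are invented for illustration.

```python
import json

def read_schema_from_header(path):
    """Derive a data schema for one partitioned file by reading its file
    header: here assumed to be a JSON metadata line describing the overall
    layout and the data type of each record field."""
    with open(path, encoding="utf-8") as f:
        header = json.loads(f.readline())  # first line = header metadata
    # Build the schema from the layout and data types found in the header.
    return {
        "type": "record",
        "name": header.get("name", "partition"),
        "fields": [{"name": fld["name"], "type": fld["type"]}
                   for fld in header["fields"]],
    }
```

Under this assumption, each partitioned file carries enough metadata for a loader to generate its schema from the file header alone, without inspecting every record.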
Lund discloses a form template (para 0014) in terms of XML, provided initially in a blank file (para 0015), serving as a layout indicative of locations or fields at which to enter and populate data or definitions of the data itself (para 0021); e.g. groups and variables, font, values, and names and types of the data entered as defining characteristics for the relevant field/tag in the layout data definition (e.g. para 0032-0034), the XML structure being used to define major blocks of the form such as a header, a main body, and data fields and their layout within the form (para 0053), such as the orientation found in the header fields (para 0054). Hence, a markup template provided as a file in XML form containing fields indicative of locations at which to populate data (and definitions thereof), following a layout or orientation which can be found in header fields, is recognized.

Therefore, as files derived from a received electronic file and/or transformed by the importer context of Yazicioglu (para 0036, 0043) can be in XML or JSON form, it would have been obvious at the time of the invention for one skilled in the art to implement transformation into a plurality of files or XML/JSON formats (see para 0058) so that said electronic file partitions are formatted with fields and tags in accordance with a schema/markup layout or orientation for data or definitions to be populated, each such markup or schema format (XML, JSON) being representative of one or more records of the derived files, each having a header layout – per Blagay – listing header fields to identify an overall file layout and data types of the records – as set forth in Lund – such that creating one such schema file would be based on the overall file layout and data types by reading entries of a corresponding file header so as to obtain metadata describing a layout for each partitioned data file; because information depicting or listing the content of a file expressed in a schema or markup format typically uses a layout and tag hierarchy that
follows a layered direction and orientation underlying this markup methodology; and by organizing one such schema file with a header portion, a body portion, and a tail portion, useful information can be obtained from parsing the information found in each such portion, notably when the header of such a file supports meta-information indicative of a schema listing or its indexed content, in the sense that this listing provides indexed entries indicative of the layout – as set forth in Lund's forming of an XML template – in which the file contents themselves are presented, thereby facilitating localization or search of a desired field topic or tagged element, e.g. via a schema parsing operation.

Claims 10-11 are rejected under 35 U.S.C. § 103 as being unpatentable over YaziCioglu et al., USPubN 2018/0210935 (herein Yazicioglu) in view of Aya et al., USPubN 2024/0427755 (herein Aya), Underwood, Jr. et al., USPN: 12/321323 (herein Underwood) and Boggs, USPubN 2006/0209691 (herein Boggs), further in view of JP 6504190 (translation), 04-24-2019, 22 pgs (herein '190).

As per claims 10-11, Yazicioglu does not explicitly disclose the system of claim 1, wherein the cloud loading system is configured to load each partitioned data file into the data store via serialization; wherein serialization (from claim 8 – refer to the § 112 rejection) comprises converting a data object into a series of bytes that saves a state of the data object.

'190 discloses information processing using a control device with the function of collecting and storing data to a target such as a time-series database (pg. 2), where the series data are observed values collected continuously from an arbitrary process (pg. 3), where the control device is equipped with a database writing program to write data into the time-series DB (pg. 6, bottom) and a serialization communication program for converting the time-series data into a storage byte string for use with the database writing program (pg.
7), where the observed values input from field processing can be state values acquired from sensor and measurement signals (pg. 12); hence, converting process data into a series of bytes via serialization, representing a saved state of the input/observed data, by a controller configured with a database writing program to write the time-series data to the database, is recognized.

Therefore, it would have been obvious at the time of the invention for one skilled in the art to implement the writing of transformed data to a database or repository in Yazicioglu underlying a cloud loading system (per rationale A in claim 1) so that the DB loading system is configured to load each partitioned data file into the data store via serialization, which comprises converting a data object into a series of bytes that saves a state of the data object - as shown in '190 for loading time-series data to a DB; because serializing raw data into byte format, as a technique integrated into the processing of data such as signals received via cloud-based entities or NW processing devices, would facilitate smoother and more continuous processing of the received signals by computer systems, notably when the received signals potentially come in analog or disparate formats that are otherwise incompatible with the digital processing of modern computers.

Claim 12 is rejected under 35 U.S.C. §
103 as being unpatentable over YaziCioglu et al., USPubN 2018/0210935 (herein Yazicioglu) in view of Aya et al., USPubN 2024/0427755 (herein Aya), Underwood, Jr. et al., USPN: 12/321323 (herein Underwood) and Boggs, USPubN 2006/0209691 (herein Boggs), further in view of Ma Ruirui et al., CN 113377815A (translation), 9-10-2021, 5 pgs (herein MaRrui).

As per claim 12, Yazicioglu does not explicitly disclose the system of claim 1, wherein the cloud loading system is configured to automatically restart the load of each partitioned data file into the data store if communications with the data store are interrupted.

Provision of an auto-triggered mechanism to overcome sudden stoppages, interruptions, and accidental glitches disrupting the operational flow of a software runtime is illustrated in the processing of data by MaRrui; that is, a pause or interruption of flow incurred at a breakpoint due to a snapshot send/receive in association with a database write is set to auto-resume without stoppage (see automatic breakpoint resume - Abstract) to ensure completeness of data writing in the course of writing or storing data to a database.

Therefore, as various causes of inadvertent interruption can affect the continuous flow of loading or storing data in a data store, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the writing operation in Yazicioglu's cloud loading system so that an automated self-resume – as shown in MaRrui – is added to the cloud loading process to automatically restart the load of each partitioned data file into the data store (if communications are interrupted with the data store); because a self-restart as set forth above can remedy potential real-time degradation of application performance due to loss of power or a sudden interrupt affecting the flow of operations required to sustain an uninterrupted write/flow to target store addresses (e.g.
database writing); the resuming, made automatic in response to a minor pause as set forth above, not only enables continuity of a writing process, potentially precluding imposition of a true system halt by which affected processes are forced to halt and relinquish their runtime and resources in favor of a system-level recovery routine, but also averts additional adjustment/correction at a low programmatic level to address pointers/dereferencing caused by hung-up I/O operations, thereby assuring smooth performance and operational memory stability of the target data store.

Claims 13-14 are rejected under 35 U.S.C. § 103 as being unpatentable over YaziCioglu et al., USPubN 2018/0210935 (herein Yazicioglu) in view of Aya et al., USPubN 2024/0427755 (herein Aya), Underwood, Jr. et al., USPN: 12/321323 (herein Underwood) and Boggs, USPubN 2006/0209691 (herein Boggs), further in view of Zhang et al., CN 106682082 (translation), 03-26-2021, 12 pgs (herein Zhang) and LV, Zhi-hui et al., CN 110717825 (translation), 04-07-2023, 12 pgs (herein LV-Zhui).

As per claims 13-14, Yazicioglu does not explicitly disclose the system of claim 12, wherein the cloud loading system is configured to read a log file of loading operations and the control file to determine records in each partitioned data file that have not yet been loaded; or wherein the cloud loading system is configured to continue loading the records in each partitioned data file that have not yet been loaded.

Use of a log to record the writing of business, distributed information into blockchain storage is shown in LV-Zhui's cloud computing resource management; that is, transmitting of data accounting for blockchain storing of transaction information between supplier and server is shown as writing transactional data into a log at a front-end processor (pg. 3, bottom, pg. 4), where the log is used to test whether the performance of blockchain writing satisfies service requirements (pg.
6), where tracking the log enables extraction of task IDs and detection of whether the last entry (by a thread group) fulfills the length expected from consulting the log, whether the last logging may be incomplete, or whether the last supplier/demand transaction has not occurred (pg. 8).

Tracking the flow of data writes to a DB is shown in Zhang as using an SQLite control (pg. 5) in the form of a control file to track the state of writes into a database and to judge, based thereon, whether all the files have been written to the database, each write instance being recorded in the file with a mutual exclusion lock (bottom, pg. 6) and a mark position (pg. 7), the control file being read to track endpoint time elapsed for each write and to ensure that unwritten data, if any, are made ready for a continuing write. Hence, logging write instances with a mutual exclusion lock recorded inside a control file reference to track whether a write instance fails to complete, to identify whether any unwritten data should be made ready for a continuing write, and to judge whether all data have been written to a database is recognized.
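The control-file bookkeeping described above can be sketched in a few lines. This is a hedged illustration only: the function `load_with_control_file`, the `write_fn` callback, and the one-partition-per-line control-file format are assumptions made for this sketch, not the claimed system or any reference's implementation.

```python
import os

def load_with_control_file(partitions, control_path, write_fn):
    """Load partitioned data files into a data store, recording each
    completed load in a control file. If the load is interrupted, a
    rerun reads the control file, skips partitions already loaded, and
    continues with only those not yet loaded."""
    done = set()
    if os.path.exists(control_path):
        with open(control_path, encoding="utf-8") as f:
            done = {line.strip() for line in f if line.strip()}
    for part in partitions:
        if part in done:
            continue  # already loaded before the interruption
        write_fn(part)                      # load into the data store
        with open(control_path, "a", encoding="utf-8") as f:
            f.write(part + "\n")            # record the successful load
```

Calling the loader again after a failed `write_fn` resumes from the first partition not recorded in the control file, i.e. the determine-and-continue behavior recited in claims 13-14.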
Thus, as a control file can be used to record (per rationale B in claim 1) whether the database has received all intended records or files, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement proper write operations in Yazicioglu's database loading so that the actual write states for the loading are recorded with a control file or a log, whereby the loading process can be configured to read the log file – as per LV-Zhui – and/or the control file – as per Zhang – to determine which partitioned data files have or have not yet been loaded, the continuity of this loading process being realized by consulting the log or control file, as set forth in LV-Zhui and Zhang respectively, to enable the loading process to identify unwritten records and accordingly proceed with loading each and every partitioned data file that has not yet been loaded; because recording data writes with a log and a control file that enable recordation of every load/write instance in the course of Yazicioglu's database loading can serve as a tracking tool, to be used during an ongoing or granular write pipeline or after each movement of files or records to a database, in that a log file can establish how many write attempts have been recorded, with or without success status, and a control file can provide evidence that a write did not complete within the expected time, enabling identification of failures and/or computation of the amount of unwritten data, which can be made ready for a next write attempt, the tracking to achieve completion of this DB writing process falling under the ambit of an overall business purpose of maintaining and upkeeping databases for use in various types of application endeavors.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 11 recites the limitation "wherein serialization comprises" in line 1. There is insufficient antecedent basis for this limitation in the base claims (claim 8, claim 1). For purposes of examination on the merits, this limitation will be treated as "processors" of claim 8.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tuan A Vu, whose telephone number is (571) 272-3735. The examiner can normally be reached 8AM-4:30PM, Mon-Fri. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chat Do, can be reached at (571) 272-3721. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-3735 (for non-official correspondence - please consult the Examiner before using) or (571) 273-8300 (for official correspondence), or calls may be redirected to customer service at (571) 272-3609. Any inquiry of a general nature or relating to the status of this application should be directed to the TC 2100 Group receptionist: (571) 272-2100. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). /Tuan A Vu/ Primary Examiner, Art Unit 2193 January 07, 2026

Prosecution Timeline

Dec 19, 2023
Application Filed
Jan 07, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596557
SYSTEM AND METHOD FOR GENERATING RECOMMENDATIONS FOR DATA TAGS
2y 5m to grant Granted Apr 07, 2026
Patent 12591718
Application Development Platform, Micro-program Generation Method, and Device and Storage Medium
2y 5m to grant Granted Mar 31, 2026
Patent 12585573
ASSEMBLING LOW-CODE APPLICATIONS WITH OBSERVABILITY POLICY INJECTIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12582796
METHODS, DEVICES, AND SYSTEMS FOR IMPROVED OXYGENATION PATIENT MONITORING, MIXING, AND DELIVERY
2y 5m to grant Granted Mar 24, 2026
Patent 12541384
COMPONENT TESTING FRAMEWORK
2y 5m to grant Granted Feb 03, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
95%
With Interview (+21.4%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 980 resolved cases by this examiner. Grant probability derived from career allow rate.
