Prosecution Insights
Last updated: April 19, 2026
Application No. 18/642,571

METHOD OF IMPORTING DATA TO DATABASE, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Final Rejection (§101, §103)
Filed: Apr 22, 2024
Examiner: ADAMS, CHARLES D
Art Unit: 2152
Tech Center: 2100 (Computer Architecture & Software)
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
OA Round: 2 (Final)
Grant Probability: 44% (Moderate)
OA Rounds: 3-4
To Grant: 5y 1m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 44% (187 granted / 423 resolved; -10.8% vs TC avg)
Interview Lift: +44.2% for resolved cases with interview
Avg Prosecution: 5y 1m (typical timeline)
Currently Pending: 32
Total Applications: 455 (career history, across all art units)

Statute-Specific Performance

§101: 21.4% (-18.6% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
Based on career data from 423 resolved cases; Tech Center averages are estimates.

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-9, 11-17, 19-20, and 22-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a mental process without significantly more.

Independent claim 1 recites: “A method of importing data to a database that is implemented by an electronic device, the method comprising: acquiring incoming data from a data source according to a database config file; wherein the incoming data is original data directly acquired from the data source; calculating and processing the incoming data according to the database config file to obtain computational data; wherein the computational data is obtained by integrating and calculating the incoming data; and writing the incoming data and the computational data into the database, wherein the writing the incoming data and the computational data into the database comprises: writing the incoming data and the computational data into a staging file; checking whether a data format of the incoming data and a data format of the computational data in the staging file is consistent with a preset data format in the database config file, wherein the checking comprises: checking whether a data structure of the incoming data and a data structure of the computational data in the staging file is consistent with a preset data structure in the database config file, and checking a data integrity of the incoming data and a data integrity of the computational data; writing, in response to the data format of the incoming data and the data format of the computational data being consistent with the preset data format, the incoming data and the computational data from the staging file into the database; and modifying, in response to the data format of the incoming data and the data format of the computational data being inconsistent with the preset data format, the data format of the incoming data and the data format of the computational data to be consistent with the preset data format, so that data formats of all data in the staging file meets a requirement of the database config file, and collectively writing all the data in the staging file into the database after the modifying.” Independent claims 9 and 17 recite similar subject matter.

The claims contain mental process (data analysis) steps of calculating and processing incoming data, multiple checking steps, and modifying a data format. A human being equipped with a generic computer is capable of performing these steps. The additional elements of the claims include acquiring incoming data from a data source, writing the incoming data and the computational data into the database, writing … the incoming data and the computational data from the staging file into the database, and collectively writing all the data in the staging file into the database after the modifying. Claim 1 contains an additional element of “an electronic device.” Claim 9 contains additional elements of a processor and a memory, and claim 17 contains a “non-transitory computer-readable storage medium.”

This judicial exception is not integrated into a practical application because the claimed additional elements do not appear to improve the processing of a computer, require the use of a specific machine, effect a transformation or reduction of a particular article to a different state or thing, or provide a technological solution to a technological problem. The step of “acquiring incoming data from a data source” appears to be a data gathering step, and is thus mere pre-solution insignificant activity (see MPEP 2106.05(g)).
The steps of “writing the incoming data and the computational data into the database,” “writing … the incoming data and the computational data from the staging file into the database,” and “collectively writing all the data in the staging file into the database after the modifying” appear to be merely storing data, which is insignificant extra-solution activity and is well-known (see MPEP 2106.05(d)(II) and MPEP 2106.05(g)).

The electronic device, processor, memory, and “non-transitory computer-readable storage medium” are computer hardware elements recited at a high level of generality. They appear to be generic computing hardware elements. The recitation of generic hardware is little more than using a computer to perform an abstract idea (see MPEP 2106.05(f)(2)). It is noted that none of the additional elements appear to improve the processing of a computer, require the use of a specific machine, effect a transformation or reduction of a particular article to a different state or thing, or provide a technological solution to a technological problem. As such, none of the additional elements appear to integrate the judicial exception into a practical application.

None of the additional elements are sufficient to amount to significantly more than the judicial exception, in part or in whole. The step of “acquiring incoming data” is merely extra-solution data gathering activity and is well understood, routine, and conventional (see MPEP 2106.05(g)). “Writing the incoming data and the computational data into the database,” “writing … the incoming data and the computational data from the staging file into the database,” and “collectively writing all the data in the staging file into the database after the modifying” are nothing more than storing data in a memory, which is recognized as well-understood, routine, and conventional (see MPEP 2106.05(d)(II)).
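For orientation, the flow that claim 1 recites (stage, check format/structure/integrity, fix inconsistencies, then write everything) can be pictured with a minimal sketch. Every name, the config shape, and the sample data below are hypothetical illustrations, not anything from the application or the cited references:

```python
# Hypothetical sketch of the claim 1 flow: stage incoming and computed rows,
# check format/structure and integrity against a preset config, modify any
# inconsistent rows, then write the whole staging buffer. Illustrative only.

config = {"columns": ["road_id", "speed"], "types": {"road_id": int, "speed": float}}

def check_format(row, config):
    """Format/structure check: expected columns with expected types."""
    return (set(row) == set(config["columns"])
            and all(isinstance(row[c], config["types"][c]) for c in row))

def check_integrity(row):
    """Integrity check: no missing (None) values."""
    return all(v is not None for v in row.values())

def modify_to_format(row, config):
    """Coerce values to the preset types from the config."""
    return {c: config["types"][c](row[c]) for c in config["columns"]}

def import_data(incoming, compute, write_batch):
    computational = [compute(r) for r in incoming]   # "calculating and processing"
    staging = incoming + computational               # "writing ... into a staging file"
    staged = []
    for row in staging:
        if not check_integrity(row):
            continue                                 # handling is my choice; the claim is silent
        if not check_format(row, config):
            row = modify_to_format(row, config)      # "modifying ... to be consistent"
        staged.append(row)
    write_batch(staged)                              # "collectively writing all the data"

rows = [{"road_id": "7", "speed": 42.0}]             # wrong type on purpose
out = []
import_data(rows, lambda r: {"road_id": int(r["road_id"]), "speed": r["speed"] * 2},
            out.extend)
```

The point of the sketch is only to show the checks and the conditional modification sitting between the staging buffer and the final write, which is the structure both the rejection and the response argue over.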
The recitation of generic hardware of the electronic device, processor, memory, and “non-transitory computer-readable storage medium” represents little more than using a computer to perform an abstract idea (see MPEP 2106.05(f)(2)). None of the additional elements, in part or in whole, appear to improve the processing of a computer, require the use of a particular machine, effect a transformation or reduction of a particular article to a different state or thing, or add a specific limitation other than what is well understood, routine, or conventional. As such, none of the additional elements appears to be, in part or in whole, significantly more than the judicial exception.

Dependent claims 3-8, 11-16, 19-20, and 22-23 are merely directed towards additional limitations that further define data types or further describe analyses that will occur. It is noted that the claimed data definitions and data analysis and extraction steps do not appear to include additional elements that integrate the claimed subject matter into a practical application. The dependent claims also do not include additional elements that, in part or in whole, appear to be significantly more than the abstract idea. At best, the additional elements present in the dependent claims, such as writing data steps and acquiring data steps, appear to be merely extra-solution activity that is well understood, routine, and conventional.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-9, 14-17, 20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Kodavati et al. (US Pre-Grant Publication 2023/0144349) in view of Balta et al. (US Pre-Grant Publication 2021/0383006), and further in view of Best et al. (US Patent 7,865,461).

As to claim 1, Kodavati teaches a method of importing data to a database that is implemented by an electronic device, the method comprising: acquiring incoming data from a data source according to a database config file (see Kodavati paragraphs [0019], [0022], and [0031]. Kodavati describes a system that uses “transformation configuration parameters” to identify and extract data from a source (see paragraph [0022]). It is noted that Kodavati indicates that the “transformation configuration parameters” may be stored in a file (see paragraphs [0019] and [0031])); wherein the incoming data is original data directly acquired from the data source (see Kodavati paragraph [0032]. The data is directly acquired from the source data tables); calculating and processing the incoming data according to the database config file to obtain computational data (see Kodavati paragraphs [0033]-[0034]. The incoming data undergoes various data analyses, aggregation, and summarizations); wherein the computational data is obtained by integrating and calculating the incoming data (see Kodavati paragraphs [0033]-[0034]. The various data analyses, aggregation, and summarizations are “data integrations” and “calculations” to incoming data that result in computed data stored in an intermediate staging area); and writing the incoming data and the computational data into the database (see Kodavati paragraph [0035]. The data stored in the intermediate staging area may be transformed into a format associated with a target computing system), wherein the writing the incoming data and the computational data into the database comprises: writing the incoming data and the computational data into a staging file (see Kodavati paragraphs [0033]-[0034]); … modifying … the data format of the incoming data and the data format of the computational data to be consistent with the preset data format (see Kodavati paragraphs [0034]-[0035]), so that data formats of all data in the staging file meets a requirement of the database config file (see Kodavati paragraphs [0034]-[0035]), and collectively writing all the data in the staging file into the database after the modifying (see Kodavati paragraphs [0034]-[0035]).
Kodavati does not teach: checking whether a data format of the incoming data and a data format of the computational data in the staging file is consistent with a preset data format in the database config file, wherein the checking comprises: checking whether a data structure of the incoming data and a data structure of the computational data in the staging file is consistent with a preset data structure in the database config file, and checking a data integrity of the incoming data and a data integrity of the computational data; writing, in response to the data format of the incoming data and the data format of the computational data being consistent with the preset data format, the incoming data and the computational data from the staging file into the database; and modifying, in response to the data format of the incoming data and the data format of the computational data being inconsistent with the preset data format, the data format of the incoming data and the data format of the computational data to be consistent with the preset data format.

Balta teaches: checking whether a data format of the incoming data and a data format of the computational data in the staging file is consistent with a preset data format in the database config file (see Balta paragraphs [0056]-[0058] for checking whether incoming data and computed data is consistent with a preset data format at a destination. It is noted that Kodavati paragraphs [0034]-[0035] teach the use of a staging area and a database config file), wherein the checking comprises: checking whether a data structure of the incoming data and a data structure of the computational data in the staging file is consistent with a preset data structure in the database config file (see Balta paragraphs [0056]-[0058]), and … writing, in response to the data format of the incoming data and the data format of the computational data being consistent with the preset data format, the incoming data and the computational data from the staging file into the database (see Balta paragraphs [0056]-[0058]); and modifying, in response to the data format of the incoming data and the data format of the computational data being inconsistent with the preset data format, the data format of the incoming data and the data format of the computational data to be consistent with the preset data format (see Balta paragraphs [0056]-[0058]. Balta teaches to identify a required format of the data. If the required format of the data is the same as the source and current format of the data, the data is not modified. If the required format and the present format are not the same, the data is modified), so that data formats of all data in the staging file meets a requirement of the database config file (see Balta paragraphs [0056]-[0058]. Data is modified before being stored in the target database), and collectively writing all the data in the staging file into the database after the modifying (see Balta paragraphs [0056]-[0058]. The data is stored in the target database after being modified, if modification was needed).
It would have been obvious to one of ordinary skill in the art before the earliest filing date of the invention to have modified Kodavati by the teachings of Balta because Balta provides Kodavati a benefit of determining whether reformatting is necessary or not with a check of each of multiple targets, which will improve efficiency.

Best teaches: checking a data integrity of the incoming data and a data integrity of the computational data (see Best 5:62-6:6. Data located in a staging area may undergo an integrity check process that checks for data formatting and proper standards according to a reference database).

It would have been obvious to one of ordinary skill in the art before the earliest filing date of the invention to have modified Kodavati by the teachings of Best because Best provides Kodavati a benefit of validating incoming data in an ETL system that will ensure data is formatted correctly for a destination according to selected reference standards.

As to claim 6, Kodavati as modified teaches the method according to claim 1, wherein writing the incoming data and the computational data from the staging file into the database comprises: processing the incoming data and the computational data into a plurality of data slices (see Kodavati paragraph [0035]. Data is processed in subsets); and writing the plurality of data slices in batches from the staging file into the database (see Kodavati paragraph [0035]. Subsets of data are written to the database after processing).

As to claim 7, Kodavati as modified teaches the method according to claim 1, further comprising: reading product side data regularly (see Kodavati paragraph [0037]); determining whether a difference exists between the product side data and the database config file (see Kodavati paragraph [0037]. The system of Kodavati checks for new or updated data in view of the configuration file. This is determining whether a difference exists by determining values updated after a preset time); writing, in response to the difference existing between the product side data and the database config file, difference data into the staging file (see Kodavati paragraph [0037]); returning, in response to the difference not existing between the product side data and the database config file, to read the product side data regularly (see Kodavati paragraph [0037]); checking, after writing the difference data into the staging file, whether a data format of the difference data is consistent with the preset data format (see Balta paragraphs [0056]-[0058]); writing, in response to the data format of the difference data being consistent with the preset data format, the difference data from the staging file into the database (see Balta paragraphs [0056]-[0058]); and modifying, in response to the data format of the difference data being inconsistent with the preset data format, the data format of the difference data to be consistent with the preset data format, and writing the modified difference data from the staging file into the database (see Kodavati paragraphs [0034]-[0035]. Also see Balta paragraphs [0056]-[0058]).

As to claim 8, Kodavati teaches the method according to claim 1, wherein the database config file comprises at least one of: a road type; an indicator name; a data type (see Kodavati paragraphs [0033] and [0024]); an indicator acquisition granularity; a data request scope (see Kodavati paragraphs [0033] and [0024]); an aggregation request parameter; a data format (see Kodavati paragraphs [0033] and [0024]); a scope request address; an aggregation request address; a time granularity; a selected index (see Kodavati paragraphs [0033] and [0024]); or a selected time period (see Kodavati paragraph [0037]).

As to claims 9, 17, and 20, see the rejection of claim 1. As to claim 14, see the rejection of claim 6. As to claim 15, see the rejection of claim 7.
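The regular-read/difference loop mapped for claim 7 can be pictured with a small sketch. This paraphrases only the claim language; the names and data are hypothetical and nothing here is taken from Kodavati:

```python
# Illustrative sketch of the claim 7 loop: regularly read product-side data,
# stage only the rows that differ from what is already known, and signal
# whether there was a difference. Names and row shapes are hypothetical.

def diff_rows(product_side, known):
    """Rows present on the product side but not yet known."""
    return [r for r in product_side if r not in known]

def poll_once(product_side, known, staging):
    diff = diff_rows(product_side, known)
    if not diff:
        return False        # no difference: return to reading regularly
    staging.extend(diff)    # write the difference data into the staging file
    known.extend(diff)
    return True

known, staging = [{"id": 1}], []
changed = poll_once([{"id": 1}, {"id": 2}], known, staging)
```

After staging, the difference data would run through the same format-check/modify/write path as claim 1; that part is omitted here.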
As to claim 16, see the rejection of claim 8.

As to claim 22, Kodavati as modified by Best teaches the method of claim 1, wherein checking the data format further comprises executing an automated integrity-validation engine that detects missing fields, inconsistent schemas, or null-value violations prior to committing data to the database (see Best 5:62-6:6. Inconsistent data standards, or schemas, are checked for during the integrity checking process).

Claims 3, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kodavati et al. (US Pre-Grant Publication 2023/0144349) in view of Balta et al. (US Pre-Grant Publication 2021/0383006), in view of Best et al. (US Patent 7,865,461), and further in view of Dill et al. (US Pre-Grant Publication 2004/0215656).

As to claim 3, Kodavati teaches the method according to claim 1, wherein the acquiring incoming data from a data source according to a database config file comprises: acquiring a data runtime period, wherein the data runtime period comprises a user input [time] or a preset [time] (see Kodavati paragraph [0037] for a scheduling parameter); and acquiring, according to the database config file, the original data corresponding to the data runtime period from the data source as the incoming data (see Kodavati paragraph [0037] for a scheduling parameter).

Kodavati does not explicitly teach a user input date or a preset date. Dill teaches acquiring a data runtime period, wherein the data runtime period comprises a user input date or a preset date (see Dill paragraphs [0062]-[0063]); and acquiring, according to the database config file, the original data corresponding to the data runtime period from the data source as the incoming data (see Dill paragraphs [0062]-[0063]).
It would have been obvious to one of ordinary skill in the art before the earliest filing date of the invention to have modified Kodavati by the teachings of Dill because Dill provides Kodavati additional parameters to consider when extracting data, which will improve the ability of a user to customize an ETL process with Kodavati.

As to claims 11 and 19, see the rejection of claim 3.

Claims 4-5 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Kodavati et al. (US Pre-Grant Publication 2023/0144349) in view of Balta et al. (US Pre-Grant Publication 2021/0383006), in view of Best et al. (US Patent 7,865,461), and further in view of Beesley et al. (US Patent 6,704,645).

As to claim 4, Kodavati teaches the method according to claim 1, wherein the acquiring incoming data from a data source according to a database config file further comprises: acquiring a first target parameter corresponding to the incoming data from the database config file (see Kodavati paragraphs [0033]-[0035]); … acquiring, according to the identity document and the first target parameter, corresponding original data from the data source as the incoming data (see Kodavati paragraphs [0033]-[0035]).

Kodavati does not teach: acquiring an identity document corresponding to each road in the incoming data from a road network file; wherein the incoming data is traffic data, and the road network file comprises the identity document corresponding to each road in the traffic data and a hierarchical relationship between roads.

Beesley teaches: acquiring an identity document corresponding to each road in the incoming data from a road network file (see Beesley 4:51-5:5. Beesley shows a data structure that may store road network data); wherein the incoming data is traffic data (see Beesley 7:45-8:7. Traffic data is considered along with road data), and the road network file comprises the identity document corresponding to each road in the traffic data and a hierarchical relationship between roads (see Beesley 4:51-5:5. Beesley shows that the data structure may maintain a hierarchy of roads).

It would have been obvious to one of ordinary skill in the art before the earliest filing date of the invention to have modified Kodavati by the teachings of Beesley because Beesley provides Kodavati additional data to consider and parse that will improve efficiency analysis for certain uses, particularly traffic uses, on a target system of Kodavati.

As to claim 5, Kodavati teaches the method according to claim 4, wherein the calculating and processing the incoming data according to the database config file to obtain computational data comprises: acquiring a second target parameter corresponding to the computational data from the database config file (see Beesley 7:45-8:7. A user may provide input for processing data. It is noted that Kodavati teaches wherein such processing instructions are in a database config file, paragraphs [0019] and [0031]); acquiring an identity document and a hierarchical relationship corresponding to each road in the computational data from the road network file (see Beesley 4:51-5:5); and calculating and processing the incoming data according to the identity document, the hierarchical relationship, and the second target parameter, so as to obtain corresponding second indicator data as the computational data (see Beesley 7:45-8:7); wherein the calculating and processing comprises at least one of: extracting the incoming data; integrating the incoming data (see Beesley 7:45-8:7); or ranking the incoming data.

As to claims 12 and 13, see the rejections of claims 4 and 5.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Kodavati et al. (US Pre-Grant Publication 2023/0144349) in view of Balta et al.
(US Pre-Grant Publication 2021/0383006), in view of Best et al. (US Patent 7,865,461), and further in view of Katz et al. (US Pre-Grant Publication 2002/0178077).

As to claim 21, Kodavati as modified teaches the method of claim 1. Kodavati as modified does not teach wherein writing the incoming data and the computational data into the database comprises performing a single batched commit operation that transfers data from the staging area to the database, thereby replacing multiple direct write operations with a single batched commit operation.

Katz teaches wherein writing the incoming data and the computational data into the database comprises performing a single batched commit operation that transfers data from the staging area to the database, thereby replacing multiple direct write operations with a single batched commit operation (see paragraph [0224]. In Katz, loading can be done using a single sequence).

It would have been obvious to one of ordinary skill in the art before the earliest filing date of the invention to have modified Kodavati by the teachings of Katz because Katz provides Kodavati the ability to incorporate data from many heterogeneous sources to load into a database and gives a user additional options when choosing how to load that data.

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Kodavati et al. (US Pre-Grant Publication 2023/0144349) in view of Balta et al. (US Pre-Grant Publication 2021/0383006), in view of Best et al. (US Patent 7,865,461), and further in view of Scott et al. (US Pre-Grant Publication 2020/0125566).

As to claim 23, Kodavati as modified teaches the method of claim 1. Kodavati does not clearly teach wherein the staging file is stored in a predefined structured format supported by the staging area, wherein the predefined structure format is a CSV format.
Scott teaches wherein the staging file is stored in a predefined structured format supported by the staging area, wherein the predefined structure format is a CSV format (see paragraphs [0038]-[0039]. Scott shows storing data pulled from a source into a CSV format).

It would have been obvious to one of ordinary skill in the art before the earliest filing date of the invention to have modified Kodavati by the teachings of Scott because Scott provides Kodavati the ability to incorporate data from many heterogeneous sources such that they can be presented in a unified view to a user, which will improve a user’s ability to understand the data inputs and outputs (see Scott paragraph [0022]).

Response to Arguments

Applicant's arguments filed 28 November 2025 have been fully considered but they are not persuasive.

Response to Arguments made under 35 USC 101

Applicant argues that “Claim 1 recites a specific, non-generic data-handling architecture, including a staging area, a staging file, automated format and structure checking, data-integrity checking, and conditional correction prior to a single batched write to the database. These claim features, supported in the specification, constitute a technical solution to a technical problem in computer/database operation, namely unsafe, inefficient, and unreliable direct write operations that can corrupt databases and degrade system performance (see, e.g., paragraphs [0037] and [0040] of the present specification).”

In response to this argument, it is noted that applicant's arguments and paragraph [0037] of the specification both emphasize that the claimed solution and improvement rely upon a single batched write to the database.
It is noted that this is not present in the independent claims – neither the “writing … the incoming data and the computational data from the staging file into the database” nor the “modifying … the data format of the incoming data and the data format of the computational data to be consistent with the preset data format … and collectively writing all the data in the staging file into the database after the modifying” require such a single batched write. It is noted that both must do so in order for Applicant's arguments to be relevant to the claims. Applicant is reminded that unclaimed features from the specification do not receive patentable weight. It is also noted that claim 21, which does claim this feature, has not been rejected under 35 USC 101 in view of this argument.

Applicant argues that “However, amended claim 1 cannot be performed mentally, because it requires: "writing ... into a staging file in a staging area," "checking whether a data format ... is consistent with a preset data format in the database config file," "checking whether a data structure ... is consistent with a preset data structure in the database config file," "checking a data integrity of the incoming data and a data integrity of the computational data," "modifying ... in response to" format inconsistency, and "collectively writing all the data in the staging file into the database."

Applicant continues, arguing that “These structures and operations involve, e.g., file-system manipulation, I/O sequencing, data-structure manipulation, and controlled commit behavior – mechanisms that cannot be practically or theoretically performed by the human mind. They constitute "a specific implementation of a solution to a problem in the software arts," similar to Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016), where database-specific architectural improvements were found not abstract.
Thus, these features take Claim 1 as amended out of the mental processing group.”

In response to this argument, it is noted that of the listed features, only the “writing” steps are additional elements beyond the mental process. The remaining steps are merely data definition steps or data analysis steps and are thus mental process steps. As noted in the rejection above, the writing steps appear to be generic data storing steps that do not integrate the mental process of the claims into a practical application or provide, in part or as a whole, significantly more than a mental process.

Applicant argues that “The claimed staging-area mechanism is a technical improvement in computer database functionality (MPEP 2106.05(a), (c)), consistent with USPTO Example 40.” Applicant elaborates, arguing that “Amended claim 1 provides an analogous specific technological improvement. Here, the invention introduces a specialized staging layer that:
- Buffers data in a staging file,
- Checks whether a data format ... is consistent with a preset data format,
- Checks whether a data structure ... is consistent with a preset data structure,
- Checks a data integrity of both incoming and computed data,
- Modifies the data in response to inconsistency, and
- Collectively writes all the data in the staging file into the database after the modifying.
This produces technical effects including, for example:
- Reducing unsafe, uncontrolled direct writes to the database,
- Ensuring that only consistent, validated, and integrity-verified data reaches the persistent store,
- Reducing the number of database I/O transactions from many (unpredictable) to one controlled commit, and
- Preventing partial or malformed writes (see, e.g., paragraphs [0037]-[0038] of the specification).”

Applicant then includes a table comparing multiple features of Example 40 with the amended claims, and explains how “Just as Example 40 improved the efficiency and performance of network monitoring technology, amended claim 1 improves the functioning of database systems by introducing a controlled, validated commit process that ensures data consistency and integrity and reduces unsafe I/O behavior. Thus, the claim integrates any alleged abstract idea into a practical, technological application, satisfying Step 2A Prong 2.”

As noted in the preceding remarks, the only additional elements in the independent claims include generic hardware elements, an acquiring data step, and various writing steps. However, the writing steps are claimed as generic data storage steps and do not integrate the mental process into a practical application nor provide, in part or as a whole, significantly more than the mental process. The writing steps of the independent claims do not claim any feature of performing both writing steps using a single batched commit operation. As such, the current independent claims do not realize the benefits nor provide a technical solution as provided in paragraph [0037] of the specification.

Applicant continues, arguing that “Claim 1 recites: "a staging area," "a staging file," "a database config file," "checking whether a data format ... is consistent with a preset data format," "modifying ... in response to" inconsistency, and "collectively writing all the data ...
into the database."14 These are not generic components performing generic tasks. They represent a specific, non-conventional pipeline architecture designed to address concrete technological problems: unsafe database writes, schema inconsistency, and corrupted persistent storage (see, e.g., paragraphs [0037]-[0038] of the specification). This aligns with USPTO Example 4 (GPS), where a claim involving mathematical operations became eligible because it improved the functioning of GPS receivers by introducing a specific collaborative architecture. Likewise, the components here operate "in concert" to solve the database-centric technological problem.” As noted in the previous responses, the writing steps of the independent claims lack the feature of performing a single batch write that Applicant argues is an improvement. As such, the writing steps are merely generic storage steps and do not integrate the mental process into a practical application nor provide significantly more than the mental process. Applicant argues that “The problem solved by the claimed invention-unsafe direct writes, schema inconsistency, and corruption of persistent data stores-is a problem rooted in computer technology, not a business or mental process problem. This is consistent with: - DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245 (Fed. Cir. 2014) (eligibility for solving a problem inherent in computer networks), -Ancora Techs., Inc. v. HTC Am., Inc., 908 F.3d 1343 (Fed. Cir. 2018) (eligibility for improving computer security by modifying computer-architecture behavior), and - USPTO Example 41 (Cryptographic Communications) (algorithmic transformation eligible when used in a specific data-handling architecture to improve security).” Applicant continues, arguing that “Taken together, the claim elements amount to far more than data validation performed on a generic computer. 
The claimed method implements a specific, structured data-processing architecture, including a staging area, a staging file, automated format and integrity verification, conditional data-format correction, and a single controlled batched commit, that operates as an integrated pipeline to improve the integrity, reliability, and safety of database write operations. This is a computer-centric solution to a computer-centric problem, analogous to the eligible improvements in USPTO Examples 40 and 41, where conventional components were arranged in a non-conventional architecture that provided a technological benefit. Here, the claimed configuration materially improves the functioning of the computer system itself by preventing unsafe partial writes, reducing unnecessary I/O interactions, and ensuring that only validated and consistent data is committed to persistent storage. Accordingly, the claims are integrated into a practical application and are not directed to a judicial exception.” As noted above, the independent claims do not claim “a single controlled batched commit” for both writing steps. Because of this, Applicant’s argument is unpersuasive. Applicant is reminded that unclaimed features do not receive patentable weight until claimed. Applicant argues that “Claim 1 expressly requires a sequence of interdependent, machine-executed operations, including: "writing the incoming data and the computational data into a staging file," "checking whether a data format ... is consistent with a preset data format," "checking whether a data structure ... is consistent with a preset data structure," "checking a data integrity ...," "modifying ... in response to" detecting inconsistency, and "collectively writing all the data ... into the database." This is not a generic "validate data" step. 
It is a coordinated, multi-stage verification and correction mechanism that operates within a specialized staging architecture and must complete successfully before any data is committed.” Applicant continues, arguing that “The USPTO has expressly recognized that such conditional, event-driven processing architectures constitute non-conventional activity. In Example 40 (Adaptive Monitoring of Network Traffic), the Office held that a system that changes its monitoring behavior only upon detection of a threshold condition is not well-understood, routine, or conventional. The same reasoning applies here: claim 1 requires conditional modification and conditional commit behavior that departs from routine, always-write patterns of conventional database systems. Thus, the claim recites far more than standard data manipulation; it recites a non-conventional integrity-validation pipeline that materially changes how data is prepared for storage.” It is noted that the claimed “checking” steps are data analyses that merely output whether data is consistent with a preset data format or not. A human being equipped with pen and paper or a generic machine is capable of such “checking.” As such, these are mental process steps. There are no claimed details regarding how the “staging area” is specialized beyond being the location where the data is stored during the data analysis steps. Applicant is reminded that even assuming an improvement resulting from a data analysis exists, an improvement to a mental process remains a mental process. Under 35 USC 101, an additional element beyond the mental process is required that integrates the mental process into a practical application or that provides significantly more than the mental process in part or as a whole. The writing steps of the claims represent such an “additional element” that may possibly satisfy those conditions. 
However, as noted above, the writing steps of the independent claims appear to be merely routine generic storage steps. The benefits described in paragraph [0037] of the specification require both writing steps to perform a single batched commit operation that transfers data from the staging area to the database. This is not claimed in the independent claims. Because of this, the claims do not have an additional element that provides significantly more than the abstract idea, in part or as a whole. Applicant argues that “The specification confirms that traditional systems perform repeated direct writes, which are unsafe and create the risk of partial, corrupted, or inconsistent entries. As described (e.g., paragraph [0037]): - multiple direct writes are unsafe, - the staging area minimizes write operations, and - only a single, controlled commit occurs after all data is validated and corrected. This "single batched commit" mechanism is not a conventional database operation. It is a reengineered write-control architecture that reduces I/O risk, prevents mid-task corruption, and ensures that all committed data conforms to the configuration file requirements. The Federal Circuit consistently treats such improvements to computer operation as providing "significantly more" (e.g., Enfish, Ancora). The claimed approach therefore changes how the database system functions. It does not merely add an abstract idea to a generic computer; it modifies the underlying behavior of the computer system to provide safer and more reliable persistent storage. For these reasons, and those discussed above, the additional elements of Claim 1, individually and in ordered combination, amount to "significantly more" than any alleged abstract idea under Step 2B. Claim 1 and its dependent claims are therefore patent-eligible.” In response to this argument, as already addressed above, the “single batched commit” feature is not claimed in the independent claims. 
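For readers following the dispute, the staging-and-commit pipeline that the specification is said to describe (buffer in a staging file, check format against the config file, modify on inconsistency, then write collectively) can be sketched in a few lines. This is an illustrative sketch only, not code from the application or any cited reference; the names `ConfigFile`, `StagingFile`, `check_format`, `modify_to_format`, and `import_data`, and the use of naive type coercion as the "modifying" step, are all hypothetical assumptions.

```python
# Hypothetical sketch of the staging-file pipeline described in the arguments.
# All names and the coercion strategy are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ConfigFile:
    """Stand-in for the 'database config file': preset per-column formats."""
    preset_types: dict  # column name -> expected Python type

@dataclass
class StagingFile:
    """Buffer that holds rows before any database write occurs."""
    rows: list = field(default_factory=list)

def check_format(row, config):
    """Return True if every field matches the preset data format."""
    return all(isinstance(row.get(col), typ)
               for col, typ in config.preset_types.items())

def modify_to_format(row, config):
    """Naively coerce inconsistent fields toward the preset format."""
    return {col: typ(row.get(col)) for col, typ in config.preset_types.items()}

def import_data(incoming, computed, config, database):
    staging = StagingFile()
    staging.rows.extend(incoming + computed)      # write to staging, not the DB
    validated = []
    for row in staging.rows:
        if not check_format(row, config):         # consistency check
            row = modify_to_format(row, config)   # conditional modification
        validated.append(row)
    database.extend(validated)                    # single collective write
```

In this sketch the final `database.extend(validated)` call is the single collective write; the Examiner's position is that the independent claims recite the writing steps but do not require that they occur as one batched commit, so the claims do not capture this part of the pipeline.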
Applicant is reminded that unclaimed features do not receive patentable weight until claimed. Applicant recites the language of claims 9 and 17, then concludes with “Claims 9 and 17 recite the same staging-file architecture, format-verification procedures, conditional modification logic, and controlled batched-commit operations discussed above with respect to Claim 1. Because these claims incorporate the same non-conventional technical features, they likewise provide a specific improvement to computer and database functionality. Thus, Claims 9 and 17 and their dependent claims are patent eligible for the same reasons.” In response to this argument, as already addressed above, the “single batched commit” feature is not claimed in the independent claims. Applicant is reminded that unclaimed features do not receive patentable weight until claimed. Response to Arguments made under 35 USC 103 Applicant argues that Balta does not disclose “checking whether the data format of incoming data in a staging file is consistent with a preset data format in a database config file, checking whether the data format of computational data in a staging file is consistent with a preset data format in a database config file, checking whether the data structure of incoming data or computational data is consistent with a preset data structure in a database config file, or checking data integrity of incoming or computational data.” Applicant concludes by stating that “Balta does not teach any of the claimed checking steps.” In response to this argument, it is noted that Applicant provides no reason as to why the cited portions of Balta or the provided rationale do not teach any of the above steps. As such, Applicant’s assertion is unpersuasive. Examiner notes that Balta does teach the claimed subject matter for the reasons provided in the rejection above. It is noted that Best is relied upon to teach the “checking data integrity” step, as cited above. 
Applicant argues that “Balta does not teach writing data from a staging file into a database "in response to" the claimed consistency check. Rather, paragraphs [0056]-[0058] of Balta discuss writing data to multiple target storage locations after applying security transformations, not after a consistency verification.” In response to this argument, it is noted that the “security transformations” of Balta are only performed if the data is not already in a proper format. A “consistency check” occurs before the “security transformation,” wherein the “security transformation” is done in response to the “consistency check.” Thus, Balta does teach the claimed limitation to the extent claimed. Applicant asserts that “Balta contains no disclosure of: modifying data in response to inconsistency with a preset data format in a database config file; modifying both incoming data and computational data in a staging file; or bringing the data into conformity with a preset data format defined in a database config file.” Applicant concludes that “Balta's transformations (paragraphs [0056]-[0058]) relate to security policies, not schema-based consistency.” In response to this argument, it is noted that Balta’s transformations only occur after a consistency check of the data format of any inputted data. Data that is not in conformance with a target format is transformed in Balta (see Balta paragraphs [0056]-[0058]). Thus, Balta does teach the limitations to the extent claimed. Applicant is reminded that any unclaimed features regarding Applicant’s consistency checks receive no patentable weight until claimed. Applicant argues that “Balta does not disclose: ensuring that all data in the staging file meet a requirement of a database config file, or collectively writing all the data in the staging file into a database after the above-referenced modifying. 
Balta performs per-target secure transformations, not a unified staging-file batch operation.” In response to this argument, there is no claimed “unified staging-file batch operation.” Applicant is reminded that unclaimed features from the specification receive no patentable weight until claimed. Applicant argues that “Even if combined, the cited portions of Kodavati and Balta do not teach or suggest: "checking whether a data format ... is consistent with a preset data format in the database config file," "checking whether a data structure ... is consistent with a preset data structure in the database config file," "checking a data integrity ...," "writing, in response to the data format ... being consistent ...," "modifying, in response to the data format ... being inconsistent ...," and "collectively writing all the data in the staging file into the database after the modifying." Because these specific recited features remain absent, obviousness is not established.” In response to this assertion, it is noted that Applicant provided no specific additional reasoning for why the cited prior art fails to teach these limitations. As such, Applicant’s assertion is unpersuasive. The limitations are taught for the reasons provided in the office action above. Applicant’s remaining arguments are directed towards Beesly, in which Applicant states that Beesly does not remedy the deficiencies of Kodavati with regard to the independent claims. In response to this argument, it is noted that Beesly is not relied upon to teach the limitations of the independent claims. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES D ADAMS whose telephone number is (571)272-3938. The examiner can normally be reached M-F, 9-5:30 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neveen Abel-Jalil can be reached at 571-270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CHARLES D ADAMS/ Primary Examiner, Art Unit 2152

Prosecution Timeline

Apr 22, 2024
Application Filed
Aug 23, 2025
Non-Final Rejection — §101, §103
Nov 28, 2025
Response Filed
Mar 05, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602392
SCALABLE METADATA-DRIVEN DATA INGESTION PIPELINE
2y 5m to grant Granted Apr 14, 2026
Patent 12591595
ADAPATIVE SYSTEM FOR PROCESSING DISTRIBUTED DATA FILES AND A METHOD THEREOF
2y 5m to grant Granted Mar 31, 2026
Patent 12572546
METHODS AND SYSTEMS FOR DISTRIBUTED DATA ANALYSIS
2y 5m to grant Granted Mar 10, 2026
Patent 12566778
OPTIMIZING JSON STRUCTURE
2y 5m to grant Granted Mar 03, 2026
Patent 12566706
PROVIDING ROLLING UPDATES OF DISTRIBUTED SYSTEMS WITH A SHARED CACHE
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
44%
Grant Probability
88%
With Interview (+44.2%)
5y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 423 resolved cases by this examiner. Grant probability derived from career allow rate.
