Prosecution Insights
Last updated: April 19, 2026
Application No. 19/076,392

LARGE DATA TRANSFER AMONG DATABASE SERVERS

Non-Final OA (§102, §103)
Filed: Mar 11, 2025
Examiner: BOWEN, RICHARD L
Art Unit: 2165
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Oracle International Corporation
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (437 granted / 544 resolved), +25.3% vs TC avg (above average)
Interview Lift: +27.7% (strong), based on resolved cases with vs. without an interview
Avg Prosecution: 2y 10m (typical timeline), 14 applications currently pending
Total Applications: 558 across all art units (career history)
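The headline examiner statistics above are simple ratios of the reported career counts; a minimal Python sketch (using only the counts stated in this report) reproduces them:

```python
# Career counts reported above for this examiner.
granted = 437
resolved = 544

# Career allowance rate: granted cases as a share of resolved cases.
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # 80.3%, displayed as 80%

# The report lists this as +25.3% vs the Tech Center average, which
# implies an estimated TC average allowance rate of roughly:
implied_tc_avg = allow_rate - 25.3
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # about 55.0%
```

The 80% figure shown in the card is the rounded value of 437/544.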

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 41.1% (+1.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 544 resolved cases
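As a consistency check on the table above, each statute-specific rate minus its stated delta recovers the Tech Center average estimate, and all four statutes imply the same baseline. A short Python sketch using only the figures from the table:

```python
# Statute-specific rates and their reported deltas vs the TC average,
# taken directly from the table above.
rates = {
    "§101": (14.5, -25.5),
    "§103": (41.1, +1.1),
    "§102": (20.5, -19.5),
    "§112": (13.5, -26.5),
}

# rate - delta recovers the TC average estimate each delta was measured
# against; every statute implies the same 40.0% baseline.
for statute, (rate, delta) in rates.items():
    print(statute, round(rate - delta, 1))  # 40.0 in every case
```

This suggests the per-statute deltas were all computed against a single 40.0% Tech Center average estimate.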

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 11 and 20 objected to because of the following informalities: the last line recites “the shared database server;” however, it appears that this is a typographical error and should read “the [[shared]] shard database server.” Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 6, 8, 9, 12-14, 16 and 18 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Rice et al. (U.S. Patent No. 10,067,969 B2, hereinafter referred to as “Rice”).

Regarding claim 1, Rice discloses a computer-implemented method comprising: (e.g., abstract, figures 7a-7c, 8a and 9a-9c and col 28 lines 29-35)

receiving, from a large data repository database server at a receiving database server, an object reference data for retrieving particular large data; (“The method 800 begins in act 802. In act 804, a TE node (e.g., TE nodes 106a-106c) receives a query from a database client, such as the SQL clients 102.” “The method 800 begins in act 802.
In act 804, a TE node (e.g., TE nodes 106a-106c) receives a query from a database client, such as the SQL clients 102.” “The atom to SQL mapping module 208 enables the TE to determine which atoms are affected by a given query based on the database objects referenced in the query. Recall that all database objects can be represented by atoms within the distributed database system 100. Also recall that atoms are linked, and to this end, the TE determines atoms affected by the query based on, in part, a query execution path that traverses these links and uses index atoms, table atoms, record atoms, and other such database objects.”)(e.g., figure 8A and col 29 lines 19-55) based at least in part on the object reference data, determining whether the object reference data references a storage location of the particular large data at the large data repository database server or indicates initiation of data stream of the particular large data from the large data repository database server to receiving database server; (“If all affected atoms are within the atom cache, the TE node returns a result set to the client in act 808 exclusively from the atom cache. As will be appreciated in light of this disclosure, this enables queries to be efficiently serviced without incurring latencies related to disk access, or roundtrips related to requesting atoms from peer nodes. If the query received in act 804 affects any atoms not in the atom cache, the methodology 800 continues to act 810.”)(e.g., figure 8a and col 29 lines 48-55) if it is determined that the object reference data indicates the initiation of data stream of the particular large data from the large data repository database server to the receiving database server, without any additional request to the large data repository database server by the receiving database server: (It is noted that this step, along with the following steps are considered to be contingent limitations, and are not limiting to the claim scope. 
See MPEP 2111.04(II). It is noted that this only applies to the method claims, and the medium claim requires the feature, because the medium is programmed to require the medium to be capable of performing the features. For these claim limitations to be required, Applicant can positively recite a determining step to require these optional features. For purposes of compact prosecution, the scope of the contingent limitations are being considered.)(“In act 806, the TE determines if the query received in act 804 affects any atoms not presently loaded into its respective atom cache. The atom to SQL mapping module 208 enables the TE to determine which atoms are affected by a given query based on the database objects referenced in the query. Recall that all database objects can be represented by atoms within the distributed database system 100. Also recall that atoms are linked, and to this end, the TE determines atoms affected by the query based on, in part, a query execution path that traverses these links and uses index atoms, table atoms, record atoms, and other such database objects. In a general sense, the TE determines affected objects in a manner that leverages the relational model, or whatever model the database presently implements (e.g., an RDF-base model).”)(e.g., figures 8a and 8b and col 29 lines 48-55 and col 30 lines 1-25) receiving, by the receiving database server, one or more data portions of the particular large data, and (“In act 812, the TE receives one or more atoms requested in act 810. In act 808, the TE performs atom-to-SQL mapping to construct a result set that comports to the requirements of the client (e.g., a SQL-compatible result set), and communicates the constructed result set to the client.”)(e.g., figures 8a and 8b and col 30 lines 20-25) storing the one or more data portions of the particular large data in storage of the receiving database server. 
(“In addition, it should be appreciated in light of this disclosure that virtually any database node in the transaction tier 107 and/or the persistence tier 109 could be utilized by the TE 106a, as atoms can be requested from any peer node having the requested atoms in a respective atom cache or durable storage, as the case may be. In such cases, retrieved atoms, and those atoms already present in the atom cache of the TE node 106a, can be utilized to service the query and return a result set, similar to act 808 discussed above with regard to FIG. 8a.”)(e.g., figures 8a and 8b and col 30 lines 55-64). Regarding claim 2, Rice discloses the method of claim 1. Rice further discloses further comprising: receiving, at the large data repository database server, a request for the particular large data; (“The atom to SQL mapping module 208 enables the TE to determine which atoms are affected by a given query based on the database objects referenced in the query. Recall that all database objects can be represented by atoms within the distributed database system 100. 
Also recall that atoms are linked, and to this end, the TE determines atoms affected by the query based on, in part, a query execution path that traverses these links and uses index atoms, table atoms, record atoms, and other such database objects.”)(e.g., col 29 lines 35-43)

if it is determined that the receiving database server supports receiving the data stream of the particular large data from the large data repository database server without any additional request to the large data repository database server for the particular large data, generating the object reference data that indicates the initiation of the data stream of the particular large data from the large data repository database server to the receiving database server is performed without any additional request to the large data repository database server by the receiving database server; (It is noted that this step, along with the following steps are considered to be contingent limitations, and are not limiting to the claim scope. See MPEP 2111.04(II). It is noted that this only applies to the method claims, and the medium claim requires the feature, because the medium is programmed to require the medium to be capable of performing the features. For these claim limitations to be required, Applicant can positively recite a determining step to require these optional features. For purposes of compact prosecution, the scope of the contingent limitations are being considered.)(“In act 810, those atoms that are not available in the atom cache are requested from a most-responsive or otherwise low-latency peer database node. As discussed above, various mappings can be used to identify if affected atoms correspond to a partitioned table, and to also identify a filtered set of nodes that service the storage group for a given table partition.
To this end, the filtered list of nodes may be utilized by the TE node to request atoms, as needed.”)(figures 7a, 7b and 8a and col 30 lines 1-8) sending the object reference data to the receiving database server. (“In act 812, the TE receives one or more atoms requested in act 810. In act 808, the TE performs atom-to-SQL mapping to construct a result set that comports to the requirements of the client (e.g., a SQL-compatible result set), and communicates the constructed result set to the client.”)(e.g., figure 8a and col 30 lines 20-25) Regarding claim 3, Rice discloses the method of claim 1. Rice further discloses further comprising: receiving a query at the receiving database server targeting the particular large data stored on the large data repository database server; (“The method 800 begins in act 802. In act 804, a TE node (e.g., TE nodes 106a-106c) receives a query from a database client, such as the SQL clients 102.” “In act 805, the TE node determines a query execution plan that prunes partitions irrelevant to the query. Recall that that during query execution, the optimizer 206 can ignore large portions of a table that may not be relevant to a query. In operation, this means that the TE node can use partition keys within the partitioning policies to determine one or more partition tables affected by a query, and filter out those irrelevant table partitions. For example, if a partition key “territory” exists on a countries table, queries against that table for “region=‘KY’”, can cause the TE to prune out those table partitions unrelated to the “KY” region.”)(e.g., figures 7a, 7b and 8a and col 29 and lines 20-30) requesting the large data repository database server for the object reference data of the particular large data stored on the large data repository database server; (“In act 810, those atoms that are not available in the atom cache are requested from a most-responsive or otherwise low-latency peer database node. 
As discussed above, various mappings can be used to identify if affected atoms correspond to a partitioned table, and to also identify a filtered set of nodes that service the storage group for a given table partition. To this end, the filtered list of nodes may be utilized by the TE node to request atoms, as needed.”)(e.g., col 30 lines 1-8) in response to the request for the object reference data of the particular large data stored on the large data repository database server, receiving, at the receiving database server, the object reference data for retrieving the particular large data. (“In act 812, the TE receives one or more atoms requested in act 810. In act 808, the TE performs atom-to-SQL mapping to construct a result set that comports to the requirements of the client (e.g., a SQL-compatible result set), and communicates the constructed result set to the client. In act 814, the methodology 800 ends.”)(e.g., col 30 lines 20-25). Regarding claim 4, Rice discloses the method of claim 1. Rice further discloses further comprising: continuing receiving other portions of the particular large data until all the data portions of the particular large data are received; (“In an embodiment, the assignment of a storage group to an SM node causes the distributed database system to synchronize database objects of that storage group to the SM node. That is, database objects associated with a particular table partition serviced by a storage group get stored in the SM node's durable storage. Note that a storage group can service multiple table partitions, and thus, an assignment of a storage group to an SM node causes such synchronization for each table partition serviced by that storage group. Further note, the distributed database system continues to accept read and write operations against a storage group during synchronization. 
For example, a previously synchronized SM node can continue to service requests until a new SM node is fully synchronized against the storage group. In addition, a partially-synchronized SM node can service query requests against what data is available. Write operations against a partially-synchronized SM node occur in-memory, as if the SM node is fully synchronized, with those changes being persisted to durable storage. This means that each SM maintains a consistent copy of each table partition during synchronization, even as additional database write operations occur.”)(e.g., figures 7b, 8a and 9a and col 4 lines 15-36 and col 9 lines 19-23 and col 13 lines 10-13). storing the particular large data in temporary storage of the receiving database server. (“To this end, the optimizer 206 can utilize indexes, clusters, table relationships, and table partitioning policies configured to avoid expensive full-table scans by using portions of the database within cache memory when possible.” “Continuing with FIG. 2a, the TE architecture 200 includes an atom cache 210. As discussed above with regard to FIG. 1, the atom cache 210 is part of the DDC implemented within the distributed database system 100. To this end, and in accordance with an embodiment of the present disclosure, the atom cache 210 hosts a private memory space in RAM accessible by a given TE.” “In some cases, atom requests can be serviced by returning requested atoms from the atom cache of an SM. However, and in accordance with an embodiment, a requested atom may not be available in a given SM atom cache.”)(e.g., figures 7b, and 8a and col 9 lines 19-23 and col 13 lines 10-13). Regarding claim 6, Rice discloses the method of claim 1. 
Rice further discloses further comprising: if it is determined that the object reference data references the storage location of the particular large data at the large data repository database server: (It is noted that this step, along with the following steps are considered to be contingent limitations, and are not limiting to the claim scope. See MPEP 2111.04(II). It is noted that this only applies to the method claims, and the medium claim requires the feature, because the medium is programmed to require the medium to be capable of performing the features. For these claim limitations to be required, Applicant can positively recite a determining step to require these optional features. For purposes of compact prosecution, the scope of the contingent limitations are being considered.) (“In an embodiment, each TE is responsible for mapping SQL content to corresponding atoms. As generally referred to herein, SQL content comprises database objects such as, for example, tables, indexes and records that may be represented within atoms. In this embodiment, a catalog may be utilized to locate the atoms which are used to perform a given transaction within the distributed database system 100. Likewise, the optimizer 206 can also utilize such mapping to determine atoms that may be available in the atom cache 210.” “So, initially the TE utilizes the SQL parser 204 to determine what tables are affected by the received transaction. In an embodiment, the TE utilizes a catalog to locate each corresponding table atom. Recall that atoms are linked, so table atoms can also link to schema atoms, index atoms, record atoms and data atoms, just to name a few”)(e.g., col 10 lines 8-18 and col 25 lines 7-14) generating a request, by the receiving database server to the large data repository database server, for the particular large data, wherein the request includes the object reference data; (“In act 708, the TE updates those atoms affected by the transaction received in act 704. 
As discussed above, the TE node can retrieve atoms from the atom cache 210 of peer nodes (e.g., TE nodes, SM nodes). Where a miss occurs, atoms are retrieved from durable storage of an SM node to satisfy a transaction. In any event, updates to atoms occur causing, for example, new atoms to be created or existing atoms to be updated. For instance, the TE performs data manipulations (e.g., inserts, updates, deletes) specified in the received transaction. As discussed above, these data manipulations can comprise DML, or an equivalent thereof, that causes atoms to be updated in a manner that alters the database objects represented by those atoms.”)(e.g., col 25 lines 19-31) sending, by the receiving database server, the request to the large data repository database server; (“In act 710, the TE identifies table partitions affected by the transaction using table partitioning policies. Table partitioning policies can include partitioning criteria that ensures records having particular column values end up in an appropriate table partition. Thus, the TE can identify those affected table partitions by comparing the DDL within the transaction to the criteria within the partitioning policies. Identifying the affected table partitions also enables the TE to determine a storage group for that table partition based on a symbolic storage group identifier that corresponds to each partitioning criteria within table partitioning policies. For example, consider the example DDL 601 of FIG. 6b. If a transaction seeks to insert a record with “WA” in the territory column, the TE can identify a table partition “pnw” and a storage group “SNW.”” “In some cases, tables may be unpartitioned such they comprise one table.”)(e.g., col 25 lines 38-54) in response to the request to the large data repository database server, receiving, at the receiving database server, the data stream of the particular large data from the large data repository database server. 
(“In act 716, atom updates may be committed to a global audit trail. In an embodiment, a global audit trail is stored within the durable storage of SM nodes within the distributed database system 100 and is thus accessible by peer nodes. In some cases, the global audit trail is a database table that includes records that log, for example, a transaction identifier, tables affected by the transaction, a table partitioning policy, a partition key value, and a storage group. In an embodiment, the global audit trail provides an audit log that enables administrators to determine compliance with the partitioning policies.”)(e.g., col 26 lines 30-41). Regarding claim 8, Rice discloses the method of claim 1. Rice further discloses wherein the object reference data includes character set information of the particular large data, the method further comprising: (“In act 805, the TE node determines a query execution plan that prunes partitions irrelevant to the query. Recall that that during query execution, the optimizer 206 can ignore large portions of a table that may not be relevant to a query. In operation, this means that the TE node can use partition keys within the partitioning policies to determine one or more partition tables affected by a query, and filter out those irrelevant table partitions. 
For example, if a partition key “territory” exists on a countries table, queries against that table for “region=‘KY’”, can cause the TE to prune out those table partitions unrelated to the “KY” region.”)(e.g., figure 8a and col 29 lines 22-32) based, at least in part, on the character set information of the particular large data as indicated by the object reference data, determining that an original character set of the particular large data is different from a configured character set defined for the particular large data on the receiving database server; (“For example, if a partition key “territory” exists on a countries table, queries against that table for “region=‘KY’”, can cause the TE to prune out those table partitions unrelated to the “KY” region.”)(e,g., col 29 lines 29-32) converting the particular large data from the original character set to the configured character set. (“Although TE nodes are described herein as comprising SQL-specific modules 202-208, such modules can be understood as plug-and-play translation layers that can be replaced with other non-SQL modules having a different dialect or programming language. As will be appreciated in light of this disclosure, ACID properties are enforced at the atom-level, which enables the distributed database system to execute other non-SQL type concurrent data manipulations while still providing ACID properties.” “In an embodiment, this translation from atom to a SQL-compatible result set can also be performed by the SQL mapping module 208.”)(e,g,, col 10 lines 19-27 and col 29 lines 64-67). Regarding claim 9, Rice discloses the method of claim 1. 
Rice further discloses further comprising: receiving a query, the query indicating an operator to perform an operation on original large data that includes the particular large data of the large data repository database server; (“Further note, the distributed database system continues to accept read and write operations against a storage group during synchronization. For example, a previously synchronized SM node can continue to service requests until a new SM node is fully synchronized against the storage group. In addition, a partially-synchronized SM node can service query requests against what data is available.”)(e.g., col 4 lines 24-30) causing the large data repository database server to perform the operation on the original large data, thereby generating a result of the particular large data that is lesser in size than the original large data. (“In addition, a partially-synchronized SM node can service query requests against what data is available. Write operations against a partially-synchronized SM node occur in-memory, as if the SM node is fully synchronized, with those changes being persisted to durable storage. This means that each SM maintains a consistent copy of each table partition during synchronization, even as additional database write operations occur. Unassigning a storage group from an SM node causes that SM node to remove database objects associated with one or more table partitions serviced by the storage group.”)(e.g., col 4 lines 4-14 and 28-39). Regarding claim 12, Rice discloses one or more non-transitory computer-readable media storing a set of instructions, wherein the set of instructions includes instructions, which when executed by one or more hardware processors, cause: (e.g., col 7 lines 28-35) receiving, from a large data repository database server at a receiving database server, an object reference data for retrieving particular large data; (“The method 800 begins in act 802. 
In act 804, a TE node (e.g., TE nodes 106a-106c) receives a query from a database client, such as the SQL clients 102.” “The method 800 begins in act 802. In act 804, a TE node (e.g., TE nodes 106a-106c) receives a query from a database client, such as the SQL clients 102.”)(e.g., figure 8A and col 29 lines 19-55) based at least in part on the object reference data, determining whether the object reference data references a storage location of the particular large data at the large data repository database server or indicates initiation of data stream of the particular large data from the large data repository database server to receiving database server; (“If all affected atoms are within the atom cache, the TE node returns a result set to the client in act 808 exclusively from the atom cache. As will be appreciated in light of this disclosure, this enables queries to be efficiently serviced without incurring latencies related to disk access, or roundtrips related to requesting atoms from peer nodes. If the query received in act 804 affects any atoms not in the atom cache, the methodology 800 continues to act 810.”)(e.g., figure 8a and col 29 lines 48-55) if it is determined that the object reference data indicates the initiation of data stream of the particular large data from the large data repository database server to the receiving database server, without any additional request to the large data repository database server by the receiving database server: (“In act 806, the TE determines if the query received in act 804 affects any atoms not presently loaded into its respective atom cache. The atom to SQL mapping module 208 enables the TE to determine which atoms are affected by a given query based on the database objects referenced in the query. Recall that all database objects can be represented by atoms within the distributed database system 100. 
Also recall that atoms are linked, and to this end, the TE determines atoms affected by the query based on, in part, a query execution path that traverses these links and uses index atoms, table atoms, record atoms, and other such database objects. In a general sense, the TE determines affected objects in a manner that leverages the relational model, or whatever model the database presently implements (e.g., an RDF-base model).”)(e.g., figures 8a and 8b and col 29 lines 48-55 and col 30 lines 1-25)

receiving, by the receiving database server, one or more data portions of the particular large data, and (“In act 812, the TE receives one or more atoms requested in act 810. In act 808, the TE performs atom-to-SQL mapping to construct a result set that comports to the requirements of the client (e.g., a SQL-compatible result set), and communicates the constructed result set to the client.”)(e.g., figures 8a and 8b and col 30 lines 20-25)

storing the one or more data portions of the particular large data in storage of the receiving database server. (“In addition, it should be appreciated in light of this disclosure that virtually any database node in the transaction tier 107 and/or the persistence tier 109 could be utilized by the TE 106a, as atoms can be requested from any peer node having the requested atoms in a respective atom cache or durable storage, as the case may be. In such cases, retrieved atoms, and those atoms already present in the atom cache of the TE node 106a, can be utilized to service the query and return a result set, similar to act 808 discussed above with regard to FIG. 8a.”)(e.g., figures 8a and 8b and col 30 lines 55-64).

Claims 13, 14, 16 and 18 have substantially similar limitations as stated in claims 2, 3, 6 and 8, respectively; therefore, they are rejected under the same subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Rice in view of Sun et al. (U.S. Publication No. 2025/0086181 A1, hereinafter referred to as “Sun”).

Regarding claim 5, Rice discloses the method of claim 1.
However, Rice does not appear to specifically disclose wherein the particular large data is a large object type (LOB) column, which is stored on the large data repository database server, and the object reference data is an LOB locator to a location of storage of the LOB column. On the other hand, Sun, which relates to high-performance large object operations (title), does disclose wherein the particular large data is a large object type (LOB) column, which is stored on the large data repository database server, and the object reference data is an LOB locator to a location of storage of the LOB column. (“The examples herein are directed to a practical application and provide significantly more than existing approaches to executing queries that call LOB columns. LOB columns are columns that contain large amounts of data either in Binary Format (BLOBs) or Character Format (CLOBs). Tables with LOB data can be processed like other data types, and LOB data can be edited and browsed like other data. The examples herein are directed to a practical application at least because the examples herein address a particular issue: the advantages of storing data in LOBs can be outweighed in database systems by the negative impacts referencing these LOBs in a query can have on the query performance. The computer-implemented methods, computer program products, and computer systems described herein provide an approach to generating an implicit column structure to store LOBs and to enable queries to access this implicit structure rather than LOB columns (e.g., LOB table space), to improve query performance.” “Because the program code of the classifier (e.g., FIG. 
4, 400) determines whether a query that references a LOB (e.g., column) should access an implicit column or pull data from the LOB (e.g., column), as stored in the database, the program code can train the classifier with an implicit column usage knowledge base to establish a performance benchmark for revising a query to utilize implicit columns and/or to establish a performance benchmark for a query to utilize the LOB in the database to return query results.”)(e.g., figure 5 and paragraphs [0021] and [0053]).

Rice discloses table partitioning within distributed database systems. E.g., title. In Rice, the distributed database system assigns each storage group to a subset of storage manager (SM) nodes and symbolic mapping is used to allow transactions to identify a particular storage group. However, Rice does not appear to specifically disclose that the particular large data is a large object type (LOB) column, which is stored on the large data repository database server, and the object reference data is an LOB locator to a location of storage of the LOB column. On the other hand, Sun, which also relates to large data sets, does provide that it is known that large datasets can include LOB columns and identifiers to access the LOB columns. Sun discloses “(b)ased on the iterative testing, the program code determines a benchmark for either utilizing an implicit column or pulling results directly from the database, for a query referencing a given LOB.
The program code utilizes the implicit column usage base to predict and apply the one or more machine learning algorithms trained to sample data, to determine whether to generate and/or reference an implicit column when a given LOB is referenced in a query." Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant's claimed invention to incorporate into Rice the use of LOB columns as the type of large data being accessed, along with identifiers for accessing them, to improve the manner in which the data of Rice is accessed by allowing for either direct access to the LOB column or the use of implicit columns to access the large data. Claim 15 recites substantially similar limitations to those of claim 5; therefore, it is rejected for the same reasons.

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Rice in view of Lilley et al. (U.S. Publication No. 2019/0236403 A1, hereinafter referred to as "Lilley").

Regarding claim 7, Rice discloses the method of claim 1. Rice further discloses wherein the object reference data includes a size of the particular large data, the method further comprising: ("The size of the atom cache can be user-configurable or sized to utilize all available memory space on a host computer, depending upon a desired configuration.")(e.g., col 10 lines 34-37); however, Rice does not appear to specifically disclose comparing the size of the particular large data to a buffer memory threshold; if it is determined that the size of the particular large data exceeds the buffer memory threshold, storing the one or more data portions of the particular large data in a disk storage of the storage of the receiving database server; if it is determined that the size of the particular large data fails to exceed the buffer memory threshold, storing the one or more data portions of the particular large data in a buffer memory storage of the storage of the receiving database server.
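For orientation, the locator-based LOB access pattern addressed in the Rice/Sun combination above (an object reference identifying where an LOB column is stored on a repository server, dereferenced on demand by the receiving side) can be sketched as follows. All class, method, and value names here are hypothetical illustrations, not anything recited in the references or the claims:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LobLocator:
    """Object reference data: identifies where an LOB column value is stored."""
    repository_id: str
    table: str
    column: str
    row_key: str

class RepositoryServer:
    """Stands in for the large data repository database server."""
    def __init__(self):
        self._lob_store = {}  # (table, column, row_key) -> bytes

    def put_lob(self, table: str, column: str, row_key: str, data: bytes) -> LobLocator:
        # Store the LOB data and hand back only a locator, not the data itself.
        self._lob_store[(table, column, row_key)] = data
        return LobLocator("repo-1", table, column, row_key)

    def read_lob(self, locator: LobLocator) -> bytes:
        # Dereference the locator to fetch the LOB column data on demand.
        return self._lob_store[(locator.table, locator.column, locator.row_key)]

# A receiving server receives the small locator rather than the large data.
repo = RepositoryServer()
locator = repo.put_lob("documents", "body", "row-42", b"x" * 1024)
assert repo.read_lob(locator) == b"x" * 1024
```

The design point the claims turn on is that only the locator crosses the wire initially; the large data is pulled later, when actually needed.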
On the other hand, Lilley, which relates to systems and methods for converting massive point cloud datasets to a hierarchical storage format (title), does disclose comparing the size of the particular large data to a buffer memory threshold; if it is determined that the size of the particular large data exceeds the buffer memory threshold, storing the one or more data portions of the particular large data in a disk storage of the storage of the receiving database server; if it is determined that the size of the particular large data fails to exceed the buffer memory threshold, storing the one or more data portions of the particular large data in a buffer memory storage of the storage of the receiving database server. ("The data point position is checked against the bounding volumes of the nodes at that cache level. Data points may be inserted into their respective caches based on a position check. For example, for each data point the method determines which cache the point belongs to and inserts each data point into cache including appending to the cache buffer. When the cache buffer exceeds a memory threshold (e.g., buffer memory is above a threshold of 32 MB or 64 MB) the buffer is written to disk.")(e.g., paragraph [0025]).

Rice discloses table partitioning within distributed database systems. E.g., title. In Rice, the distributed database system assigns each storage group to a subset of storage manager (SM) nodes, and symbolic mapping is used to allow transactions to identify a particular storage group. However, Rice does not appear to specifically disclose comparing the size of the particular large data to a buffer memory threshold and either storing to the buffer memory or to disk based on the result. On the other hand, Lilley provides that it is known to store data either in the cache or on disk based on a threshold size. This ensures good performance and stability of the cache.
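The size-threshold routing Lilley is cited for can be sketched as follows. The function name is hypothetical, and the 32 MB figure merely echoes Lilley's example threshold:

```python
BUFFER_MEMORY_THRESHOLD = 32 * 1024 * 1024  # e.g., 32 MB, per Lilley's example

def store_portion(portion: bytes, memory_buffer: list, write_to_disk) -> str:
    """Route one data portion to buffer memory or disk storage by size."""
    if len(portion) > BUFFER_MEMORY_THRESHOLD:
        write_to_disk(portion)       # size exceeds threshold: spill to disk storage
        return "disk"
    memory_buffer.append(portion)    # size within threshold: keep in buffer memory
    return "memory"

disk_writes = []
buffer = []
assert store_portion(b"a" * 16, buffer, disk_writes.append) == "memory"
assert store_portion(b"b" * (40 * 1024 * 1024), buffer, disk_writes.append) == "disk"
```

The contingent structure mirrors the claim language: one branch fires when the size exceeds the threshold, the other when it fails to exceed it.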
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant's claimed invention to incorporate the comparison of the size of the data to a buffer memory threshold, as disclosed in Lilley, into Rice to ensure good performance and stability of the data maintained in the cache. Claim 17 recites substantially similar limitations to those of claim 7; therefore, it is rejected for the same reasons.

Claims 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Rice in view of Zhang et al. (U.S. Publication No. 2017/0364534 A1, hereinafter referred to as "Zhang").

Regarding claim 10, Rice discloses the method of claim 1. Rice further discloses "(t)he SQL clients 102 can be implemented as, for example, any application or process that is configured to construct and execute SQL queries. For instance, the SQL clients 102 can be user applications implementing various database drivers and/or adapters including, for example, Java database connectivity (JDBC), open source database connectivity (ODBC), PHP data objects (PDO), or any other database driver that is configured to communicate and utilize data from a relational database." (e.g., col 6 lines 29-37); however, Rice does not appear to specifically disclose wherein the particular large data includes extensible markup language (XML) data or JavaScript Object Notation (JSON) data. On the other hand, Zhang, which relates to a platform, system and process for distributed graph databases and computing (title), does disclose wherein the particular large data includes extensible markup language (XML) data or JavaScript Object Notation (JSON) data. ("This enables good interoperability to existing relational databases (mysql, Oracle, etc.) and Hadoop stack technology tools (Hive, MapReduce, Spark, etc.) that manages and processes data in structured formats with no extra cost in data mappings.
If the system 100 only allows data to be stored in graph format, such as the internal binary format used by another graph database called Neo4j, and does not allow SQL query over structured data, then users might have to make duplicated copies of their data in different formats in order to use different software for data analysis. For example, one in graph format and another in structured format (csv, tsv, etc.) and yet another in JSON format, introducing complex meta-data management, ETL and data synchronization issues and costs.")(e.g., paragraphs [0047] and [0077]).

Rice discloses table partitioning within distributed database systems. E.g., title. In Rice, the distributed database system assigns each storage group to a subset of storage manager (SM) nodes, and symbolic mapping is used to allow transactions to identify a particular storage group. However, Rice does not appear to specifically disclose wherein the particular large data includes extensible markup language (XML) data or JavaScript Object Notation (JSON) data. On the other hand, Zhang provides that it is beneficial for hybrid querying to support multiple kinds of data, including JSON data, in responding to queries, to further enhance the manner in which data can be retrieved and processed. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant's claimed invention to incorporate the use of multiple types of data, including JSON data, as disclosed in Zhang, into Rice to further provide the benefit of data being accessed and processed in a more effective manner that employs hybrid querying techniques. Claim 19 recites substantially similar limitations to those of claim 10; therefore, it is rejected for the same reasons.
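As a minimal illustration of the XML/JSON limitation discussed above, a receiving side handling "particular large data" in either format might dispatch on the payload before processing. The sniffing heuristic and function name below are a hypothetical sketch, not anything disclosed in Rice or Zhang:

```python
import json
import xml.etree.ElementTree as ET

def parse_large_payload(payload: str):
    """Return a parsed structure for XML or JSON text, via a simple format sniff."""
    if payload.lstrip().startswith("<"):
        return ET.fromstring(payload)  # XML data -> Element tree
    return json.loads(payload)         # JSON data -> Python objects

# JSON payload: parsed into native Python structures.
doc = parse_large_payload('{"table": "orders", "rows": 3}')
assert doc["rows"] == 3

# XML payload: parsed into an element with attributes.
root = parse_large_payload('<orders count="3"/>')
assert root.get("count") == "3"
```

In a production system the format would more likely be carried as explicit metadata alongside the object reference data rather than sniffed from the content.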
Allowable Subject Matter

Claims 11 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Subject to claim 11 further positively reciting a determining step to ensure the contingent limitations of claim 1 become required, claim 11 is objected to as allowable because it requires specific details of the particular type of database servers, along with further clarifications of the method, that are not found in the prior art alone or in combination. Claim 20 contains substantially similar limitations to those of claim 11; therefore, it is objected to for similar reasons as provided with respect to claim 11.

Conclusion

The prior art made of record, listed on form PTO-892, and not relied upon is considered pertinent to applicant's disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD L BOWEN, whose telephone number is (571) 270-5982. The examiner can normally be reached Monday through Friday, 7:30 AM - 4:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aleksandr Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RICHARD L BOWEN/
Primary Examiner, Art Unit 2165

Prosecution Timeline

Mar 11, 2025
Application Filed
Mar 06, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602365
Method for Transmitting a Bloom Filter From a Transmitter Unit to a Receiver Unit
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597044
TRANSFORMING QUALITATIVE SURVEY INTO QUANTITATIVE SURVEY USING DOMAIN KNOWLEDGE AND NATURAL LANGUAGE PROCESSING
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596752
INFORMATION PROCESSING APPARATUS, CONTENT GENERATION SYSTEM, AND CONTROL METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12585921
NODE SELECTION APPARATUS AND METHOD FOR MAXIMIZING INFLUENCE USING NODE METADATA IN NETWORK WITH UNKNOWN TOPOLOGY
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12585699
SYSTEM, METHOD, AND COMPUTER PROGRAM FOR MULTIMODAL VIDEO RETRIEVAL
Granted Mar 24, 2026 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+27.7%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 544 resolved cases by this examiner. Grant probability derived from career allow rate.
