Prosecution Insights
Last updated: April 19, 2026
Application No. 17/589,438

GRAPH-BASED MEMORY STORAGE

Status: Non-Final OA (§103)
Filed: Jan 31, 2022
Examiner: CHOI, CHARLES J
Art Unit: 2133
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 5 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 6m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 82% (above average; 259 granted / 314 resolved; +27.5% vs TC avg)
Interview Lift: +5.8% (moderate), among resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 7 applications currently pending
Career History: 321 total applications across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 48.9% (+8.9% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 314 resolved cases.
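The headline examiner numbers above are internally consistent. A quick sketch (my own check, and it assumes — since the page does not say — that the "with interview" figure is simply the career allow rate plus the reported interview lift) reproduces them:

```python
# Reproduce the dashboard's headline examiner statistics from the raw
# counts shown above. The "with interview" formula is an assumption on
# my part (career rate + reported lift), not documented on the page.
granted, resolved = 259, 314
interview_lift = 5.8  # percentage points

allow_rate = granted / resolved * 100
print(round(allow_rate))                    # 82 (career allow rate)
print(round(allow_rate + interview_lift))   # 88 (with interview)
print(round(allow_rate - 27.5, 1))          # 55.0 (implied TC average)
```

The implied Tech Center average of roughly 55% is consistent with the "+27.5% vs TC avg" label on the allow-rate card.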

Office Action

Rejection basis: §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/15/2025 has been entered.

Allowable Subject Matter

Claims 2, 7, 10, 17, 18, 21 and 25 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-6, 8, 9, 15, 16, 22-24, 26 and 31-33 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta (US 2020/0371761) in view of Kvalnes (US 2019/0005071) and Armbrust (US 2022/0309104).
Regarding claims 1, 8, 15 and 24, Gupta teaches: One or more processors, comprising: circuitry to: use a graph corresponding to an execution of a software program [...] and compile the software program to cause. [0032]: A compiler converts the source code into a bitstream and binary code which configures programmable logic and software-configurable hardened logic in a heterogeneous processing system of a SoC to execute the graph.

Gupta does not explicitly teach, but Kvalnes teaches, the one or more elements to be stored in a contiguous range of memory. [0006]: the operations include determining a location in memory for storing the at least one entity, allocating the number of records in a contiguous block of records at the location in memory, and storing the at least one entity in the allocated contiguous block of records.

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the programming environment of Gupta with the graph-based data storage optimization system/method of Kvalnes. The motivation for doing so would have been to provide high performance in returning search information by efficiently traversing information stored in the graph, as taught by Kvalnes in [0022]. Kvalnes in [0085] further shows that fragmentation may be minimized and the performance of the graph database may be optimized.

The combination of Gupta and Kvalnes does not explicitly teach, but Armbrust teaches, to identify one or more elements of a data structure to be generated in discrete elements by execution of corresponding different parts of the software program. [0037]: The dataflow graph is comprised of a plurality of flows that read data from one or more inputs, transform the data using relational or functional operations, and write the results to a destination. Inputs to a flow can either be a source or a table. A flow can write its output to a sink (e.g., a function that takes a two-dimensional labeled data structure (DataFrame) and writes it to an external system) or a table. A sink is used to write the results of a flow to an external system (e.g., warehouse directory). A table can act as a source or a sink for a streaming query.

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the programming environment of Gupta and Kvalnes with the data flow graph processing system/method of Armbrust. The motivation for doing so would have been to reduce the amount of time and system resources (e.g., processing resources, storage resources, network resources, etc.) needed by the data management system, as taught by Armbrust in [0020].

Regarding claim 3, Armbrust teaches: wherein the one or more elements of the data structure is to be determined as a result of applying one or more rules to one or more data items representing edges of the graph. [0054]: Dataflow graph 200 includes flows 211, 212, 213, 214, 215, 216, 217, 218, 219. A flow reads data from one or more inputs, transforms the data using relational or functional operations, and writes the results to a destination.

Regarding claim 4, Gupta teaches: to store the one or more elements of the data structure for use by one or more computational operations to be performed by one or more kernels. [0032]: A compiler converts the source code into a bitstream and binary code which configures programmable logic and software-configurable hardened logic in a heterogeneous processing system of a SoC to execute the graph… the compiler can use the graph expressed in source code to determine which kernels to assign to programmable logic blocks and which to assign to hardened logic blocks. Further, the compiler can, using the parameters provided in the graph source code, select the specific communication techniques to establish the communication links between the kernels (e.g., shared memory, windowing, direct memory access (DMA), etc.). Kvalnes teaches wherein the contiguous range of memory are consecutive memory locations. [0006]: the operations include determining a location in memory for storing the at least one entity, allocating the number of records in a contiguous block of records at the location in memory, and storing the at least one entity in the allocated contiguous block of records.

Regarding claim 5, Armbrust teaches: wherein the graph comprise nodes indicating one or more respective operations on one or more sets of data and edges indicating the one or more respective sets of data to be used by one or more computational operations. [0052]: Dataflow graph 200 includes source nodes 202, 204, 206, 208. A source is a node in the graph that represents data read from a storage system external to the graph (e.g., data storage 142 of FIG. 1). A source may be expressed as a named function that returns a two-dimensional labeled data structure (e.g., DataFrame).

Regarding claim 6, Armbrust teaches: wherein the circuitry is to cause a compiler to apply one or more rules to the graph ([0054]: Dataflow graph 200 includes flows 211, 212, 213, 214, 215, 216, 217, 218, 219. A flow reads data from one or more inputs, transforms the data using relational or functional operations, and writes the results to a destination.) to determine one or more sets of data used by one or more computational operations indicated by the graph, the one or more sets of data comprising the one or more elements of the data structure to be stored. [0068]: Flow 216 transforms data associated with dataset 222 and outputs the results to dataset 228. Flow 216 is dependent upon flows 211, 212 because flow 216 is unable to output its results to dataset 228 until both flow 211 and flow 212 output their corresponding results to dataset 222. Flow 217 transforms data associated with table 224 and outputs the results to dataset 226. Flow 217 is dependent upon flows 213, 214 because flow 217 is unable to output its results to dataset 226 until both flow 213 and flow 214 output their corresponding results to dataset 224. Flow 219 transforms data associated with dataset 228 and writes the results to sink 232. Flow 219 is dependent upon flows 216, 218 because flow 219 is unable to output its results to sink 232 until both flow 216 and flow 218 output their corresponding results to dataset 228. Kvalnes teaches in the contiguous range of memory. [0006]: the operations include determining a location in memory for storing the at least one entity, allocating the number of records in a contiguous block of records at the location in memory, and storing the at least one entity in the allocated contiguous block of records.

Regarding claim 9, the combination of Kvalnes and Armbrust teaches: wherein the contiguous range of memory is usable to store one or more sets of data comprising the one or more elements of the data structure. Kvalnes in [0006]: the operations include determining a location in memory for storing the at least one entity, allocating the number of records in a contiguous block of records at the location in memory, and storing the at least one entity in the allocated contiguous block of records. Armbrust in [0037]: The dataflow graph is comprised of a plurality of flows that read data from one or more inputs, transform the data using relational or functional operations, and write the results to a destination. Inputs to a flow can either be a source or a table. A flow can write its output to a sink (e.g., a function that takes a two-dimensional labeled data structure (DataFrame) and writes it to an external system) or a table. A sink is used to write the results of a flow to an external system (e.g., warehouse directory). A table can act as a source or a sink for a streaming query.

Regarding claim 16, the combination of Gupta and Armbrust teaches: wherein the graph is to be generated by a compiler based, at least in part, on source code indicated to the compiler comprising one or more computational operations to be performed using the one or more elements of the data structure. Gupta in [0032]: A compiler converts the source code into a bitstream and binary code which configures programmable logic and software-configurable hardened logic in a heterogeneous processing system of a SoC to execute the graph. Armbrust in [0037]: The dataflow graph is comprised of a plurality of flows that read data from one or more inputs, transform the data using relational or functional operations, and write the results to a destination. Inputs to a flow can either be a source or a table. A flow can write its output to a sink (e.g., a function that takes a two-dimensional labeled data structure (DataFrame) and writes it to an external system) or a table. A sink is used to write the results of a flow to an external system (e.g., warehouse directory). A table can act as a source or a sink for a streaming query.

Regarding claim 22, Armbrust teaches: wherein the graph comprise one or more nodes indicating one or more computational operations and one or more edges indicating one or more sets of data, and the one or more elements of the data structure is to be determined by a compiler based, at least in part, on the one or more nodes and the one or more edges. [0037]: The dataflow graph is comprised of a plurality of flows that read data from one or more inputs, transform the data using relational or functional operations, and write the results to a destination. Inputs to a flow can either be a source or a table. A flow can write its output to a sink (e.g., a function that takes a two-dimensional labeled data structure (DataFrame) and writes it to an external system) or a table. A sink is used to write the results of a flow to an external system (e.g., warehouse directory). A table can act as a source or a sink for a streaming query.

Regarding claim 23, Armbrust teaches: further comprising instructions which, if performed by the one or more processors, cause the one or more processors to determine one or more sets of data indicating the one or more elements of the data structure, the one or more sets of data determined in response to applying one or more rules to one or more edges of the graph. [0037]: The dataflow graph is comprised of a plurality of flows that read data from one or more inputs, transform the data using relational or functional operations, and write the results to a destination. Inputs to a flow can either be a source or a table. A flow can write its output to a sink (e.g., a function that takes a two-dimensional labeled data structure (DataFrame) and writes it to an external system) or a table. A sink is used to write the results of a flow to an external system (e.g., warehouse directory). A table can act as a source or a sink for a streaming query.

Regarding claim 26, Armbrust teaches: further comprising determining the one or more elements of the data structure based, at least in part, on applying one or more rules to one or more sets of data indicated by the graph, the one or more sets of data indicating data to be used by one or more computational operations of the graph. [0054]: Dataflow graph 200 includes flows 211, 212, 213, 214, 215, 216, 217, 218, 219. A flow reads data from one or more inputs, transforms the data using relational or functional operations, and writes the results to a destination.

Regarding claim 31, Kvalnes teaches: wherein the contiguous range of memory is to store a set of data representing the one or more elements of the data structure. [0006]: the operations include determining a location in memory for storing the at least one entity, allocating the number of records in a contiguous block of records at the location in memory, and storing the at least one entity in the allocated contiguous block of records.

Regarding claim 32, Gupta teaches: wherein the one or more elements comprise information to be generated by different threads to be performed in parallel. [0063]: While various types of dataflow graphs can be used, in one embodiment, the semantics of the graph 440 established by the graph source code 420 is based upon the general theory of Kahn Process Networks which provides a computation model for deterministic parallel computation that is applied to the heterogeneous architecture in the SoC 100 (which includes both programmable and hardened blocks).

Regarding claim 33, Armbrust teaches: wherein the execution of the corresponding different parts of the software program generates elements of the same data structure. [0037]: The dataflow graph is comprised of a plurality of flows that read data from one or more inputs, transform the data using relational or functional operations, and write the results to a destination. Inputs to a flow can either be a source or a table. A flow can write its output to a sink (e.g., a function that takes a two-dimensional labeled data structure (DataFrame) and writes it to an external system) or a table. A sink is used to write the results of a flow to an external system (e.g., warehouse directory). A table can act as a source or a sink for a streaming query.

Claims 11-14, 19, 20 and 27-30 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta (US 2020/0371761), Kvalnes (US 2019/0005071) and Armbrust (US 2022/0309104), further in view of Lin (US 2020/0202246).

Regarding claim 11, Lin teaches: wherein the one or more elements of the data structure comprises one or more sets of tensor data to be stored in the contiguous range of memory. [0176]: The information that is used to indicate the memory type of the data flow graph parameter carried by the connection edge is written into the Tensor data structure.

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the programming environment of Gupta, Kvalnes and Armbrust with the distributed graph computing system/method of Lin. The motivation for doing so would have been to improve data computing efficiency by using a data flow graph as a computing object, dividing the data flow graph into a plurality of subgraphs or copies, and deploying the plurality of subgraphs or copies on a plurality of computing nodes in the distributed computing system, so that the plurality of computing nodes may be used to perform collaborative computing on the data flow graph. This is taught by Lin in [0003].

Regarding claim 12, Lin teaches: wherein the one or more processors are to cause a compiler to partition the graph into one or more subgraphs and apply one or more rules to the one or more subgraphs to determine one or more coordinate sets indicating the one or more elements of the data structure, the compiler to generate one or more kernels using at least the one or more subgraphs and the one or more coordinate sets. [0066]: On the machine learning platform, computational logic of an algorithm is usually expressed in a form of a data flow graph. When the machine learning platform is used to compute the data flow graph, code needs to be first used to describe the data flow graph. The data flow graph is defined in this process. After the definition of the data flow graph is completed, the code is compiled. When the data flow graph is computed, the compiled code is read and executed.

Regarding claim 13, Lin teaches: wherein the graph comprise nodes indicating one or more computational operations and edges indicating one or more sets of data to be used by the one or more computational operations, and the one or more elements of the data structure is to be determined based, at least in part, on the nodes and edges. Fig. 1 and [0077]: When the data flow graph is run, the nodes A, B, and D represent storage locations of input variables, the node C represents a storage location of a result of the addition operation ("one or more respective operations"), and E represents a storage location of a result of the multiplication operation ("one or more respective operations"). A storage location represented by a node may be mapped to an address that is used to store data and that is in a physical device such as a hard disk, a memory, or a CPU register.

Regarding claim 14, Lin teaches: wherein the one or more elements of the data structure is to be stored in the contiguous range of memory as a result of one or more graphics processing units (GPUs) performing one or more kernels generated, based at least in part, on the graph. [0082]: Kernel function code in FIG. 2 is loaded, for running, into a host memory and a GPU memory that correspond to a process. The kernel function code is used to implement a plurality of kernel functions for expressing local computational logic, and may be understood as a kernel function library including the plurality of kernel functions. The kernel function is used to represent some relatively complex logical operation rules, and may be invoked by a node in the data flow graph.

Regarding claim 19, Lin teaches: wherein the contiguous range of memory is in memory of a graphics processing unit (GPU) to be used by one or more kernels ([0082]: Kernel function code in FIG. 2 is loaded, for running, into a host memory and a GPU memory that correspond to a process. The kernel function code is used to implement a plurality of kernel functions for expressing local computational logic, and may be understood as a kernel function library including the plurality of kernel functions.) to store vectorized data representing the one or more elements of the data structure as a result of one or more computations indicated by the graph. [0082]: The kernel function is used to represent some relatively complex logical operation rules, and may be invoked by a node in the data flow graph. For example, the kernel function may be a matrix operation such as point multiplication or vector multiplication, or a convolution operation.

Regarding claim 20, Lin teaches: wherein the one or more elements of the data structure comprises one or more sets of vectorized data to be stored in the contiguous range of memory. [0082]: The kernel function is used to represent some relatively complex logical operation rules, and may be invoked by a node in the data flow graph. For example, the kernel function may be a matrix operation such as point multiplication or vector multiplication, or a convolution operation.

Regarding claim 27, Lin teaches: further comprising generating one or more kernels that, if performed, store the one or more elements of the data structure in the contiguous range of memory, the one or more kernels generated based, at least in part, on the graph. [0082]: Kernel function code in FIG. 2 is loaded, for running, into a host memory and a GPU memory that correspond to a process. The kernel function code is used to implement a plurality of kernel functions for expressing local computational logic, and may be understood as a kernel function library including the plurality of kernel functions. The kernel function is used to represent some relatively complex logical operation rules, and may be invoked by a node in the data flow graph. For example, the kernel function may be a matrix operation such as point multiplication or vector multiplication, or a convolution operation.

Regarding claim 28, Lin teaches: further comprising generating one or more sets of data based, at least in part, on one or more computational operations indicated by the graph and generating one or more kernels based, at least in part, on the one or more sets of data and the graph. [0082]: Kernel function code in FIG. 2 is loaded, for running, into a host memory and a GPU memory that correspond to a process. The kernel function code is used to implement a plurality of kernel functions for expressing local computational logic, and may be understood as a kernel function library including the plurality of kernel functions. The kernel function is used to represent some relatively complex logical operation rules, and may be invoked by a node in the data flow graph. For example, the kernel function may be a matrix operation such as point multiplication or vector multiplication, or a convolution operation.

Regarding claim 29, Lin teaches: further comprising generating the graph based, at least in part, on one or more source code files input to a compiler, the source code files indicating one or more computational operations to be performed using the one or more elements of the data structure. [0066]: On the machine learning platform, computational logic of an algorithm is usually expressed in a form of a data flow graph. When the machine learning platform is used to compute the data flow graph, code needs to be first used to describe the data flow graph. The data flow graph is defined in this process. After the definition of the data flow graph is completed, the code is compiled. When the data flow graph is computed, the compiled code is read and executed.
Regarding claim 30, Lin teaches: wherein the contiguous range of memory are memory locations of a graphics processing unit (GPU) to store vectorized data, and the GPU is to store the one or more elements of the data structure in the contiguous range of memory as a result of executing one or more kernels generated based, at least in part, on the graph. [0082]: Kernel function code in FIG. 2 is loaded, for running, into a host memory and a GPU memory that correspond to a process. The kernel function code is used to implement a plurality of kernel functions for expressing local computational logic, and may be understood as a kernel function library including the plurality of kernel functions. The kernel function is used to represent some relatively complex logical operation rules, and may be invoked by a node in the data flow graph.

Response to Arguments

Applicant's arguments with respect to claim rejections under 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES J CHOI, whose telephone number is (571) 270-0605. The examiner can normally be reached MON-FRI, 9AM-5PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ROCIO DEL MAR PEREZ-VELEZ, can be reached at 571-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES J CHOI/
Primary Examiner, Art Unit 2133
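For readers mapping the claim language to something concrete, here is a minimal sketch of the arrangement the independent claims describe: a graph records which parts of a program produce which elements of a data structure, and a compiler-style pass lays those separately produced elements out in one contiguous range of memory. This is entirely illustrative; the names, element counts, and rule are mine, not drawn from the application or the cited references.

```python
# Illustrative sketch (not from the application or cited references):
# a graph describes which program parts produce which elements of a
# data structure, and a compile-time rule reserves one contiguous
# buffer so the separately produced elements land in adjacent memory.
from array import array

# Nodes are operations; "produces" is the number of elements each
# operation contributes to the shared data structure.
graph = {
    "scale":  {"consumes": [],                  "produces": 4},
    "offset": {"consumes": ["scale"],           "produces": 4},
    "merge":  {"consumes": ["scale", "offset"], "produces": 8},
}

# "Compile" step: apply a simple rule to the graph to assign each
# node's output a slot within one contiguous range of memory.
layout, cursor = {}, 0
for node, info in graph.items():
    layout[node] = (cursor, cursor + info["produces"])
    cursor += info["produces"]

buffer = array("d", [0.0] * cursor)  # single contiguous allocation

# "Execution": different parts of the program each fill their own slice.
for node, (start, end) in layout.items():
    for i in range(start, end):
        buffer[i] = float(i)

print(layout["merge"])  # (8, 16): final slot of the contiguous range
```

The point of the sketch is only the shape of the claim: discrete elements, produced by different parts of the program, end up in consecutive locations of one allocation rather than scattered per-producer blocks.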
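The claim 12 limitation rejected over Lin adds a further step: partition the graph into subgraphs, derive a coordinate set per subgraph, and generate one kernel per subgraph. A sketch of that idea (again my own illustration with invented names, not code from Lin or the application):

```python
# Sketch of the claim 12 shape: partition a graph into subgraphs,
# compute a coordinate set (element range) for each, and emit one
# "kernel" per subgraph that fills its range of the shared structure.
# All names and numbers here are invented for illustration.

nodes = ["n0", "n1", "n2", "n3", "n4", "n5"]
ELEMENTS_PER_NODE = 2

def partition(seq, size):
    """Rule: split the node list into fixed-size subgraphs."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

subgraphs = partition(nodes, 3)

def make_kernel(subgraph, start):
    """Return a kernel that fills this subgraph's coordinate range."""
    count = len(subgraph) * ELEMENTS_PER_NODE
    def kernel(out):
        for i in range(start, start + count):
            out[i] = i * i
        return (start, start + count)  # coordinate set it covered
    return kernel

kernels, offset = [], 0
for sg in subgraphs:
    kernels.append(make_kernel(sg, offset))
    offset += len(sg) * ELEMENTS_PER_NODE

out = [0] * offset
ranges = [k(out) for k in kernels]
print(ranges)  # [(0, 6), (6, 12)]: disjoint, contiguous coordinate sets
```

Each kernel owns a disjoint slice of one shared output, which is the property that lets the kernels run independently (e.g., on separate GPU threads) while still producing one contiguous data structure.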

Prosecution Timeline

Jan 31, 2022 · Application Filed
Jun 27, 2023 · Non-Final Rejection (§103)
Aug 02, 2023 · Applicant Interview (Telephonic)
Aug 02, 2023 · Examiner Interview Summary
Jan 03, 2024 · Response Filed
Feb 06, 2024 · Final Rejection (§103)
Feb 13, 2024 · Interview Requested
Mar 11, 2024 · Examiner Interview Summary
Mar 11, 2024 · Applicant Interview (Telephonic)
Aug 09, 2024 · Notice of Allowance
Jan 21, 2025 · Interview Requested
Jan 29, 2025 · Applicant Interview (Telephonic)
Jan 29, 2025 · Examiner Interview Summary
Feb 21, 2025 · Request for Continued Examination
Feb 25, 2025 · Response after Non-Final Action
Mar 07, 2025 · Non-Final Rejection (§103)
Apr 03, 2025 · Interview Requested
Apr 09, 2025 · Applicant Interview (Telephonic)
Apr 09, 2025 · Examiner Interview Summary
Jul 11, 2025 · Response Filed
Aug 11, 2025 · Final Rejection (§103)
Oct 15, 2025 · Interview Requested
Dec 15, 2025 · Request for Continued Examination
Jan 01, 2026 · Response after Non-Final Action
Jan 28, 2026 · Non-Final Rejection (§103)
Mar 06, 2026 · Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602334 · ON-CHIP INTERCONNECT FOR MEMORY CHANNEL CONTROLLERS · Granted Apr 14, 2026 (2y 5m to grant)
Patent 12596654 · PROTECTION AGAINST TRANSLATION LOOKUP REQUEST FLOODING · Granted Apr 07, 2026 (2y 5m to grant)
Patent 12580875 · METHODS AND SYSTEMS FOR EXCHANGING NETWORK PACKETS BETWEEN HOST AND MEMORY MODULE USING MULTIPLE QUEUES · Granted Mar 17, 2026 (2y 5m to grant)
Patent 12530299 · SYSTEM AND METHOD FOR PRIMARY STORAGE WRITE TRAFFIC MANAGEMENT · Granted Jan 20, 2026 (2y 5m to grant)
Patent 12524357 · BUFFER COMMUNICATION FOR DATA BUFFERS SUPPORTING MULTIPLE PSEUDO CHANNELS · Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 82% (88% with interview, +5.8%)
Median Time to Grant: 2y 6m
PTA Risk: High

Based on 314 resolved cases by this examiner. Grant probability derived from career allow rate.
