Prosecution Insights
Last updated: April 17, 2026
Application No. 18/456,597

ACCELERATING THREE-DIMENSIONAL FINITE-DIFFERENCE TIME-DOMAIN ELECTROMAGNETIC SIMULATION USING A MIRRORED GPU DOMAIN

Final Rejection §103

Filed: Aug 28, 2023
Examiner: BADER, ROBERT N.
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 2 (Final)

Grant Probability: 44% (Moderate); 70% with interview
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 1m

Examiner Intelligence

Career Allow Rate: 44% (173 granted / 393 resolved; -18.0% vs TC avg)
Interview Lift: +26.4% for resolved cases with interview
Avg Prosecution: 3y 1m
Currently Pending: 32
Total Applications: 425 (across all art units)

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Based on career data from 393 resolved cases; comparisons are against the Tech Center average estimate.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-6, 10, 13-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over "CUDA-OpenGL Interoperability to Visualize Electromagnetic Fields Calculated by FDTD" by Veysel Demir et al. (hereinafter Demir) in view of "Compute Unified Device Architecture (CUDA) Based Finite-Difference Time-Domain (FDTD) Implementation" by Veysel Demir and Atef Z. Elsherbeni (hereinafter Elsherbeni).

Regarding claim 1, the limitations "A method for circumventing bottlenecking of graphics processing unit (GPU) parallelization during [two-dimensional (2D)] finite-difference time-domain (FDTD) simulation, the method performed by a central processing unit (CPU) of a computer system and comprising: before running the [2D] FDTD simulation: creating, by the CPU, a plurality of [2D] arrays; creating, by the CPU, a mirror set of the plurality of [2D] arrays; and storing, by the CPU, the mirror set on a GPU" are taught by Demir. (Demir, e.g. abstract, sections I-IV, describes a system for performing 2D FDTD simulation using a CPU executing a main program that initializes and iterates through the simulation, while offloading the FDTD field value updating to the GPU for parallel processing by threads executing one of the update kernels. Demir, e.g. section I, paragraph 4, section II B, subsection Copy FDTD Arrays to GPU memory, Listing 1, teaches that the GPU can perform both the FDTD calculations and display processing by initially copying the field data from the CPU memory to the GPU memory, i.e. during the initialization phase prior to performing the simulation, the coefficient and field arrays are created by the CPU in system RAM, and the function copyFdtdArraysToGpuMemory() causes the CPU to create and store copies of the coefficient and field arrays in the GPU RAM, corresponding to the claimed steps of creating the plurality of arrays, creating the mirror set of the arrays on the GPU, and copying/storing the data from the plurality of arrays into the mirror set of arrays on the GPU. It is additionally noted with respect to claims 10 and 17 that Demir's system is implemented by executing instructions stored on non-transitory media/memory, e.g. section III describes the exemplary computing system, which relies on executing programs stored on a non-transitory medium/memory such as a hard disk.)

The limitation "during runtime of the [2D] FDTD simulation: sending, from the CPU to the GPU, instructions to update arrays of interest of the mirror set on the GPU" is taught by Demir. (Demir, e.g. section II A, paragraph 3, section II C, paragraph 1, figure 2, listings 1, 2, 6, 7, teaches that the CPU controls iteration using a glutMainLoop executing the display function runIterationAndDisplay, which causes one or more iterations of the FDTD simulation to be performed by the GPU using a plurality of GPU kernel programs, where each iteration includes steps for updating sources, electric fields, and magnetic fields, applying boundary conditions, and capturing electromagnetic fields, performed by one or more of the kernel programs to update values in one or more of the arrays, e.g. as in the example of listing 7 for updating the electric field z component. That is, the CPU issues instructions to the GPU to execute the plurality of kernels each iteration, where each kernel execution instruction identifies one or more arrays of interest stored in the GPU memory that are updated by the corresponding kernel function, corresponding to the claimed step of the CPU sending instructions to the GPU to update arrays of interest of the mirror set on the GPU during runtime of the FDTD simulation.)

The limitations "upon completion of the [2D] FDTD simulation, copying, by the CPU, the updated arrays of interest from the GPU to the CPU; and writing, by the CPU to an output file, the updated arrays of interest; wherein the plurality of [2D] arrays complete a single round trip, onto the GPU before the [2D] FDTD simulation and off the GPU once the [2D] FDTD simulation has completed" are taught by Demir. (Demir, e.g. section III C, paragraph 1, listing 6, indicates that when the time step reaches the target number of time steps, iteration is complete and the results are copied back to CPU memory from GPU memory with the fetchResultsFromGpuMemory function, followed by saving the field data to an output file using the saveSampledFieldsToFile function, i.e. as claimed, upon completion of the FDTD simulation, the CPU copies the updated arrays of interest from the GPU to the CPU and writes the updated arrays of interest to an output file. That is, Demir's arrays complete the claimed single round trip, being copied to the GPU prior to executing the glutMainLoop function in listing 1 and not being copied off the GPU until the time_step is equal to the number_of_time_steps, while avoiding transferring the array data between the system and GPU memories, as in Demir, section I, paragraph 4, indicating that by performing the FDTD field calculation and display processing on the GPU, it is possible to avoid the back-and-forth transfer of data between the host and device memories.)

The limitations "A method for circumventing bottlenecking of graphics processing unit (GPU) parallelization during three-dimensional (3D) finite-difference time-domain (FDTD) simulation, the method performed by a central processing unit (CPU) of a computer system and comprising: before running the 3D FDTD simulation: creating, by the CPU, a plurality of 3D arrays; creating, by the CPU, a mirror set of the plurality of 3D arrays; and storing, by the CPU, the mirror set on a GPU; during runtime of the 3D FDTD simulation: sending, from the CPU to the GPU, instructions to update arrays of interest of the mirror set on the GPU; and upon completion of the 3D FDTD simulation, copying, by the CPU, the updated arrays of interest from the GPU to the CPU; and writing, by the CPU to an output file, the updated arrays of interest; wherein the plurality of 3D arrays complete a single round trip, onto the GPU before the 3D FDTD simulation and off the GPU once the 3D FDTD simulation has completed" are suggested by Demir. (Demir, e.g. abstract, section IV, teaches that the implemented system is a 2D FDTD simulation but suggests that the system can be extended to support 3D FDTD, although Demir does not disclose details of 3D FDTD. In the interest of compact prosecution, Elsherbeni is cited for disclosing said details of 3D FDTD implemented using a GPU and CUDA.) However, this limitation is taught by Elsherbeni. (Elsherbeni, cited as reference 7 in Demir, is a previous publication by the same authors disclosing a GPU/CUDA-based implementation of 3D FDTD simulation, i.e. as noted, Elsherbeni discloses the details of implementing 3D FDTD using a GPU which are not disclosed in Demir. Elsherbeni, e.g. section III, figure 2, describes the 3D FDTD formulation relying on a 3D grid of Yee cells, where, analogous to Demir, Elsherbeni, e.g. section IV, paragraph 1, teaches initialization of the arrays on the CPU prior to transfer to GPU memory, followed by iterating the FDTD loop using a plurality of kernels relying on either the xyz or xy mapping approaches. That is, Elsherbeni teaches that, as claimed, prior to performing the 3D FDTD simulation, the CPU creates the plurality of 3D arrays and creates the mirror set of 3D arrays on the GPU, and during the simulation sends instructions to the GPU to execute the kernels to update the arrays of interest, analogous to Demir's 2D FDTD as discussed above. Further, Elsherbeni, e.g. sections IV, IV A, IV C, teaches that the arrays are 3D, i.e. although the arrays are linearized for access on the GPU, they still comprise 3D data and are indexed using 3D indices (i, j, k).)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Demir's 2D FDTD simulation system to also perform Elsherbeni's 3D FDTD simulation, because Demir suggests extending the system to 3D FDTD in section IV, and because they are analogous systems directed to FDTD simulation using GPU field calculation processing and CUDA.
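The "single round trip" the rejection maps onto Demir can be sketched in code. This is a host-only C++ sketch of the pattern, not code from any of the references: DeviceBuffer, to_device, from_device, and launch_update are stand-ins invented here so the sketch runs without a GPU, where a real CUDA program would use cudaMalloc/cudaMemcpy and a kernel launch in their place.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Host-only sketch of the single-round-trip pattern: mirror the arrays onto
// the device once before the time-stepping loop, touch only the device copies
// during iteration, and copy back once after the final time step.
struct DeviceBuffer { std::vector<float> data; };           // stand-in for GPU RAM

DeviceBuffer to_device(const std::vector<float>& host) {     // ~ cudaMemcpy H2D
    return DeviceBuffer{host};
}
void from_device(const DeviceBuffer& dev, std::vector<float>& host) {  // ~ D2H
    host = dev.data;
}
// Stand-in for one update kernel: reads and writes only device-resident data.
void launch_update(DeviceBuffer& field, const DeviceBuffer& coeff) {
    for (std::size_t i = 0; i < field.data.size(); ++i)
        field.data[i] *= coeff.data[i];
}

std::vector<float> run_simulation(std::vector<float> field_h,
                                  const std::vector<float>& coeff_h,
                                  int number_of_time_steps) {
    // Before the simulation: one trip onto the "GPU" for the whole mirror set.
    DeviceBuffer field_d = to_device(field_h);
    DeviceBuffer coeff_d = to_device(coeff_h);
    // During the simulation: only update instructions cross the boundary.
    for (int time_step = 0; time_step < number_of_time_steps; ++time_step)
        launch_update(field_d, coeff_d);
    // Upon completion: one trip off, after which the CPU writes the output.
    from_device(field_d, field_h);
    return field_h;
}
```

The point of contention in the record is exactly this shape: array data crosses the CPU/GPU boundary only at the top and bottom of run_simulation, never inside the loop.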
In Demir's modified system, both 2D and 3D FDTD simulations would be supported, such that the initialization phase executed prior to the simulation would correspond to the claimed CPU creating the plurality of 3D arrays and storing mirror sets of the 3D arrays on the GPU; during the 3D FDTD simulation, the CPU would send instructions to the GPU indicating the kernel functions to execute to calculate updated values for corresponding arrays of interest of the mirror set on the GPU; and upon completion of the 3D FDTD simulation, the CPU would cause the updated arrays of interest to be copied from the GPU to the CPU and then stored in an output file.

Regarding claim 4, the limitations "identifying, by the CPU, the arrays of interest comprising at least one array of the plurality of 3D arrays that is used in the 3D FDTD simulation; identifying, by the CPU, a quantity of arrays in the arrays of interest; and identifying, by the CPU, a quantity of elements in the arrays of interest" are taught by Demir in view of Elsherbeni. (As discussed in the claim 1 rejection above, Demir, e.g. section II A, paragraph 3, section II C, paragraph 1, figure 2, listings 1, 2, 6, 7, teaches that the CPU causes one or more iterations of the FDTD simulation to be performed by the GPU using a plurality of GPU kernel programs, where each iteration includes steps for updating sources, electric fields, magnetic fields, etc., performed by one or more of the kernel programs to update values in one or more of the arrays, e.g. as in the example of listing 7 for updating the electric field z component, or Elsherbeni listing 4 for updating all three components of the magnetic field. That is, the CPU issues instructions to the GPU to execute the plurality of kernels each iteration, where each kernel execution instruction identifies one or more arrays of interest stored in the GPU memory that are updated by the corresponding kernel function. Demir, e.g. section II, subsection Create an OpenGL Buffer, listing 7, and Elsherbeni, e.g. section II C, listing 4, both teach that the kernel functions include inputs for indicating the arrays being updated (Ez in Demir's example; Hx, Hy, Hz in Elsherbeni's example) and for the dimensions of the array(s) (nxx in Demir's example; nxx, nyy, nz in Elsherbeni's example), i.e. for each kernel execution instruction the CPU, as claimed, identifies a quantity of at least one array of interest of the plurality of arrays, and the quantity of elements in the arrays of interest as indicated by the array dimensions. Further, with respect to the communicating step recited in claim 5, the CPU communicates to the GPU the quantity of arrays, the quantity of elements, and the instruction to update simulation data in the arrays of interest, i.e. the kernel function corresponds to an instruction to update simulation data in the arrays of interest and includes said quantity of arrays of interest and quantity of elements in the arrays as inputs. Finally, it is noted that Applicant's disclosure, with respect to the claimed quantities, does not indicate that the CPU transmits the numerical value, per se, of the quantity of arrays/elements, such that the claim is interpreted as requiring that the CPU identifies the one or more arrays of interest, i.e. the input variables identifying the array(s) to be updated, which collectively correspond to a quantity of arrays of interest identified by the CPU; in Demir's example the CPU identifies one array of interest to be updated, and in Elsherbeni's example the CPU identifies three arrays of interest to be updated.)
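The linearized-but-still-3D arrays the rejection attributes to Elsherbeni can be illustrated with a small indexing sketch. This is an illustration only: the x-fastest storage order below is an assumption made for this sketch, and the Grid3D type is invented here, not taken from the references.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of a linearized 3D array: the field data is conceptually 3D, indexed
// (i, j, k) over the Yee grid, but stored as a flat buffer for the GPU. A CUDA
// kernel thread would compute the same flat index from its block/thread
// coordinates; here a member function does it on the host.
struct Grid3D {
    int nx, ny, nz;
    std::vector<float> data;
    Grid3D(int nx_, int ny_, int nz_)
        : nx(nx_), ny(ny_), nz(nz_),
          data(static_cast<std::size_t>(nx_) * ny_ * nz_, 0.0f) {}
    // (i, j, k) -> flat index, x-fastest (an assumed layout for illustration).
    std::size_t at(int i, int j, int k) const {
        return static_cast<std::size_t>(i)
             + static_cast<std::size_t>(j) * nx
             + static_cast<std::size_t>(k) * nx * static_cast<std::size_t>(ny);
    }
    float& operator()(int i, int j, int k) { return data[at(i, j, k)]; }
};
```

This is what the rejection means by the arrays "still comprising 3D data": the buffer is flat, but every access is addressed by the 3D indices (i, j, k).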
Regarding claim 5, the limitations "wherein the instructions to update the arrays of interest of the mirror set comprise: communicating, by the CPU to the GPU, the quantity of arrays, the quantity of elements, and an instruction to update simulation data in the arrays of interest in any order according to operations of the 3D FDTD simulation, wherein updating the simulation data in the arrays of interest comprises simultaneously updating array elements of each 3D electromagnetic field component of each array in the arrays of interest" are taught by Demir in view of Elsherbeni. (As discussed in the claim 4 rejection above, Demir, e.g. section II, subsection Create an OpenGL Buffer, listing 7, and Elsherbeni, e.g. section II C, listing 4, both teach that the kernel functions include inputs for indicating the arrays being updated (Ez in Demir's example; Hx, Hy, Hz in Elsherbeni's example) and for the dimensions of the array(s) (nxx in Demir's example; nxx, nyy, nz in Elsherbeni's example); further, with respect to the communicating step recited in claim 5, the CPU communicates to the GPU the quantity of arrays, the quantity of elements, and the instruction to update simulation data in the arrays of interest, i.e. the kernel function corresponds to an instruction to update simulation data in the arrays of interest and includes said quantity of arrays of interest and quantity of elements in the arrays as inputs. Elsherbeni, e.g. section IV C, paragraph 3, indicates that the __syncthreads function is used to ensure that all of the threads in a block are synchronized, where both Demir's example and Elsherbeni's example synchronize the threads prior to writing updated values to the array(s) of interest. That is, although the threads may arrive at the __syncthreads function line at different times, the threads are synchronized prior to updating the arrays of interest in parallel, corresponding to the claim requirement that the instruction to update simulation data in the arrays of interest occurs in any order, i.e. if the threads were operating in lockstep synchronization, the __syncthreads function would be redundant, indicating that threads reach the synchronization function in any order; but by synchronizing the threads just before writing the updated array values, the updated array values are written approximately simultaneously, as claimed.)

Regarding claim 6, the limitation "wherein the updated simulation data on the GPU is copied to RAM of the computer system only upon completion of the 3D FDTD simulation" is taught by Demir in view of Elsherbeni. (Demir, e.g. section III C, paragraph 1, listing 6, indicates that when the time step reaches the target number of time steps, iteration is complete and the results are copied back to CPU memory from GPU memory with the fetchResultsFromGpuMemory function. Further, Demir, e.g. section I, paragraph 4, indicates that by performing the FDTD field calculation and display processing on the GPU, it is possible to avoid the back-and-forth transfer of data between the host and device memories, i.e. as in listing 6, the updated simulation data is only copied back to the CPU RAM in response to the number of iterations reaching the target number of iterations, indicating completion of the FDTD simulation. Finally, in Demir's modified system extended to perform Elsherbeni's 3D FDTD simulation, the termination condition indicates completion of the 3D FDTD simulation rather than Demir's 2D FDTD simulation.)

Regarding claims 10 and 17, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above.
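The compute-in-any-order, synchronize-before-writing behavior the rejection reads into __syncthreads can be mimicked on the host. This is only an analogy with std::thread, not CUDA thread-block semantics; the hand-rolled Barrier class and the doubling update are inventions of this sketch.

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

// Host-side analogy: worker threads compute their updated values in whatever
// order the scheduler allows, then all wait at a barrier before any of them
// writes back, so the writes land together after every read has happened.
class Barrier {
    std::mutex m;
    std::condition_variable cv;
    int count, waiting = 0, generation = 0;
public:
    explicit Barrier(int n) : count(n) {}
    void arrive_and_wait() {
        std::unique_lock<std::mutex> lk(m);
        int gen = generation;
        if (++waiting == count) {           // last thread releases everyone
            waiting = 0;
            ++generation;
            cv.notify_all();
        } else {
            cv.wait(lk, [&] { return gen != generation; });
        }
    }
};

// Each thread updates one element: compute phase, sync point, write phase.
void update_in_lockstep(std::vector<int>& field) {
    Barrier barrier(static_cast<int>(field.size()));
    std::vector<std::thread> threads;
    for (std::size_t i = 0; i < field.size(); ++i) {
        threads.emplace_back([&field, &barrier, i] {
            int updated = field[i] * 2;   // read/compute, reached in any order
            barrier.arrive_and_wait();    // ~ __syncthreads()
            field[i] = updated;           // write, after everyone has computed
        });
    }
    for (auto& t : threads) t.join();
}
```

The barrier is what separates "threads arrive in any order" from "updated values are written together", which is the distinction the rejection leans on.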
Regarding claims 13 and 19, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 4 above. Regarding claim 14, the limitations are similar and are met as discussed in claim 5 above. Regarding claim 15, the limitations are similar and are met as discussed in claim 6 above. Regarding claim 20, the limitations are similar and are met as discussed in claims 5 and 6 above.

Claims 2, 3, 7, 8, 11, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Demir in view of Elsherbeni, as applied to claims 1, 10, and 17 above, and further in view of "Implementation of a Lattice Boltzmann kernel using the Compute Unified Device Architecture developed by nVIDIA" by Jonas Tolke (hereinafter Tolke).

Regarding claim 2, the limitations "wherein creating the plurality of 3D arrays comprises: creating, by the CPU, a first 3D array, wherein creating the first 3D array comprises determining at least one of a plurality of grids; creating, by the CPU, a first pointer to the first 3D array; allocating, by the CPU, a first portion of random-access memory (RAM) of the computer system for the first 3D array; and storing, by the CPU in the RAM, the first 3D array in the first portion of the RAM identified by the first pointer" are implicitly taught by Demir in view of Elsherbeni. (As discussed in the claim 1 rejection above, Demir, e.g. section I, paragraph 4, section II B, subsection Copy FDTD Arrays to GPU memory, Listing 1, teaches that the GPU can perform both the FDTD calculations and display processing by initially copying the field data from the CPU memory to the GPU memory, i.e. during the initialization phase prior to performing the simulation, the coefficient and field arrays are created by the CPU in system RAM, and the function copyFdtdArraysToGpuMemory() causes the CPU to create and store copies of the coefficient and field arrays in the GPU RAM; in Demir's modified system extended to perform Elsherbeni's 3D FDTD simulation, the plurality of arrays are 3D. While this corresponds to creating the plurality of 3D arrays by creating a plurality of grids (i.e. the 3D grid of Yee cells, e.g. Elsherbeni, section III, corresponds to a plurality of grids, one grid for each of the corresponding coefficient and field values), one of ordinary skill in the art would have understood that initializing and copying the arrays of coefficient and field values to the GPU would involve the claimed steps of creating a pointer variable by the CPU for each (first) 3D array, allocating a portion of system RAM for each (first) array, and storing the address of the corresponding allocated portion in each (first) pointer variable, as well as the steps recited in claim 3 for creating the mirror set on the GPU. That is, one of ordinary skill in the art would know how to perform memory management in a CPU program using CUDA to perform processing on a GPU; neither Demir nor Elsherbeni describes these details of CUDA memory management, and therefore, in the interest of compact prosecution, Tolke is cited for teaching these details.)

However, these limitations are taught by Tolke. (Tolke, e.g. abstract, sections 1-8, describes a system for performing a Lattice Boltzmann simulation using CUDA to perform calculations on a GPU. Tolke, e.g. section 3.2, describes details of CUDA, and in particular details of CUDA memory management, indicating that the function cudaMalloc is used for allocating a portion of GPU RAM and storing the address in a provided pointer variable devPtr, and the function cudaMemcpy is used to copy a source array pointed to by a source pointer variable to a destination array pointed to by a destination pointer, where the transfer may be from the CPU RAM to the GPU RAM, i.e. host to device, or from the GPU RAM to the CPU RAM, i.e. device to host. Tolke, e.g. section 3.4, further describes a simple example CUDA program, which includes steps performed by the host/CPU to create a first pointer variable fH, which stores the address of an array allocated in a portion of CPU/system RAM using malloc, followed by using a nested for loop to initialize the values of the allocated array pointed to by fH, followed by using the cudaMalloc function to allocate a corresponding array in the GPU memory and store the address in the pointer variable f0, and finally using cudaMemcpy to copy the array pointed to by fH into the array pointed to by f0. That is, Tolke's example shows that one of ordinary skill in the art would understand that in Demir's modified system extended to perform Elsherbeni's 3D FDTD simulation, the initialization phase would include the steps of creating (first) pointer variables (analogous to Tolke's fH) for each (first) 3D array, allocating system RAM for each (first) 3D array (analogous to Tolke's use of the malloc() function), storing the address in the corresponding (first) pointer variable, and storing the corresponding grid values into each (first) 3D array identified by the corresponding (first) pointer variable (analogous to Tolke's initialize-host-memory loop), corresponding to the limitations of claim 2.
Further, Tolke's example shows that one of ordinary skill in the art would understand that in Demir's modified system extended to perform Elsherbeni's 3D FDTD simulation, the initialization phase would also include the steps of creating (second) pointer variables (analogous to Tolke's f0) for mirroring each (first) 3D array, allocating a corresponding (first mirror) array in GPU memory for each (first) array (analogous to Tolke's use of the cudaMalloc() function), storing the address of the GPU array in the corresponding (second) pointer variable, and copying the data from the (first) 3D array(s) pointed to by the CPU RAM (first) pointer variables to the corresponding (first mirror) 3D array(s) in GPU memory pointed to by the corresponding (second) pointer variable, corresponding to the claim 3 limitations.)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Demir's 2D FDTD simulation system, extended to also perform Elsherbeni's 3D FDTD simulation, using Tolke's CUDA memory management technique for initializing and copying arrays of data for processing by the GPU because, as noted above, while one of ordinary skill in the art would know how to perform memory management in a CPU program using CUDA to perform processing on a GPU, Demir and Elsherbeni do not disclose details of CUDA memory management and Tolke does disclose these details. Further, as discussed above, one of ordinary skill in the art would have understood, as reinforced by Tolke's discussion and example of CUDA memory management, that in implementing Demir's modified system extended to perform Elsherbeni's 3D FDTD simulation, the initialization phase would include the claimed steps as recited in claim 2, i.e. creating (first) pointer variables (analogous to Tolke's fH) for each (first) 3D array, allocating system RAM for each (first) 3D array (analogous to Tolke's use of the malloc() function), storing the address in the corresponding (first) pointer variable, and storing the corresponding grid values into each (first) 3D array identified by the corresponding (first) pointer variable (analogous to Tolke's initialize-host-memory loop).

Regarding claim 3, the limitations "wherein creating the mirror set comprises: creating, by the CPU on the GPU, a second pointer for mirroring the first 3D array referred to by the first pointer; creating, by the CPU on the GPU, a first mirror 3D array pointed to by the second pointer; and copying, by the CPU to the GPU, data in the first 3D array pointed to by the first pointer to the first mirror 3D array pointed to by the second pointer" are taught by Demir in view of Elsherbeni and Tolke. (As discussed in the claim 2 rejection above, one of ordinary skill in the art would have understood, as reinforced by Tolke's discussion and example of CUDA memory management, that in implementing Demir's modified system extended to perform Elsherbeni's 3D FDTD simulation, the initialization phase would include the claimed steps as recited in claims 2 and 3.
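The fH/f0 pointer pairing the rejection draws from Tolke can be sketched on the host. In this sketch, devAlloc and devCopy are stand-ins for cudaMalloc and cudaMemcpy (no GPU is involved, so plain malloc/memcpy do the work), the variable names fH and f0 follow Tolke's listing, and the initializer values are illustrative rather than Tolke's actual data.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Stand-in for cudaMalloc: allocate the "device" array, return its pointer.
static float* devAlloc(std::size_t n) {
    return static_cast<float*>(std::malloc(n * sizeof(float)));
}
// Stand-in for cudaMemcpy (host-to-device direction in this sketch).
static void devCopy(float* dst, const float* src, std::size_t n) {
    std::memcpy(dst, src, n * sizeof(float));
}

// Claim 2 reading: first pointer (fH) + host RAM allocation + init loop.
// Claim 3 reading: second pointer (f0) + mirror allocation + copy into it.
// Caller owns both pointers and must free them.
float* mirror_to_device(std::size_t n, float** fH_out) {
    float* fH = static_cast<float*>(std::malloc(n * sizeof(float)));
    for (std::size_t i = 0; i < n; ++i)
        fH[i] = static_cast<float>(i);   // ~ Tolke's host-initialization loop
    float* f0 = devAlloc(n);             // mirror array, address kept in f0
    devCopy(f0, fH, n);                  // host array -> "device" mirror
    *fH_out = fH;
    return f0;
}
```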
More specifically with respect to claim 3, Tolke's example shows that one of ordinary skill in the art would understand that in Demir's modified system extended to perform Elsherbeni's 3D FDTD simulation, the initialization phase would include the steps of creating (second) pointer variables (analogous to Tolke's f0) for mirroring each (first) 3D array, allocating a corresponding (first mirror) array in GPU memory for each (first) array (analogous to Tolke's use of the cudaMalloc() function), storing the address of the GPU array in the corresponding (second) pointer variable, and copying the data from the (first) 3D array(s) pointed to by the CPU RAM (first) pointer variables to the corresponding (first mirror) 3D array(s) in GPU memory pointed to by the corresponding (second) pointer variable, corresponding to the claim 3 limitations.)

Regarding claim 7, the limitation "wherein copying the updated arrays of interest from the GPU to the CPU comprises: sending, by the CPU, an exit instruction to the GPU to copy the updated arrays of interest from the GPU to corresponding mirror arrays pointed to on the RAM" is taught by Demir in view of Tolke. (Demir, e.g. section III C, paragraph 1, listing 6, indicates that when the time step reaches the target number of time steps, iteration is complete and the results are copied back to CPU memory from GPU memory with the fetchResultsFromGpuMemory function. Further, Tolke, e.g. section 3.2, memory management, indicates that copying data from the GPU to the CPU RAM is performed using the cudaMemcpy() function receiving source and destination pointer variables pointing to allocated arrays, indicating that Demir's function would use cudaMemcpy with the pointer(s) to the result field values, i.e. the claimed updated arrays of interest, as the source, and pointer(s) to arrays allocated in CPU memory, i.e. corresponding mirror arrays, as the destination. It is noted that although the claim describes the instruction as an "exit" instruction, the claim requires causing the copying, per se, to occur, rather than actually exiting the application, such that the fetchResultsFromGpuMemory function corresponds to the claimed instruction. It is additionally noted that Demir, listing 6, does use an exit function, i.e. Cleanup(EXIT_SUCCESS), following the memory transfer and saving to file.)

Regarding claim 8, the limitation "wherein the plurality of grids comprise three-dimensional magnetic field grids and electric field grids" is taught by Demir in view of Elsherbeni. (Demir, e.g. section II A, paragraph 1, section II B, subsection Copy FDTD Arrays to GPU memory, teaches that the field arrays include 2D magnetic field and electric field grids, and analogously Elsherbeni, section III, section IV, paragraph 1, figure 2, teaches using 3D magnetic field and electric field cell grids stored in the 3D arrays.)

Regarding claim 11, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 2 above. Regarding claim 12, the limitations are similar and are met as discussed in claim 3 above. Regarding claim 18, the limitations are similar and are met as discussed in claims 2 and 3 above.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Demir in view of Elsherbeni and Tolke, as applied to claim 8 above, and further in view of "The Finite-Difference Time-Domain Method for Electromagnetics with MATLAB Simulations" by Atef Z. Elsherbeni et al. (hereinafter Elsherbeni's Book).

Regarding claim 9, the limitation "wherein the plurality of grids further comprise one or more of magnetic field sub-calculation grids, electric field sub-calculation grids, absorbing layer grids, Fourier transform grids, reflection and transmission sensor grids, displacement field grids, dispersion calculation grids, permittivity grids, and a material type grid for a materials system model" is taught by Demir in view of Elsherbeni and Elsherbeni's Book. (Elsherbeni, e.g. section III, paragraph 1, indicates that the FDTD field updating equations are based on material properties including permittivity, permeability, electric conductivity, and magnetic conductivity parameter values, where the update equations also rely on various coefficients C. Elsherbeni refers to cited reference 3, i.e. Elsherbeni's Book, for details of the FDTD formulation. Elsherbeni's Book, e.g. section 1.3, describes 3D FDTD updating equations, where, e.g. page 15, figure 1.6, the permittivity, permeability, electric conductivity, and magnetic conductivity parameters are stored in 3D arrays indexed the same way as the field components, and further computes corresponding 3D arrays of material coefficients used by the update equations, e.g. pages 19-22. Further, Elsherbeni's Book, sections 3.1.1, 3.1.2, pages 47-50, figure 3.2, indicates that material types are assigned to objects, which are used to define the material type at each cell of the 3D FDTD grid.
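The per-cell material machinery attributed to Elsherbeni's Book can be sketched briefly: a material type per Yee cell, plus a property grid filled from a material table so it can be indexed like the field components. The Material struct and the values used here are illustrative assumptions for this sketch, not data from the references.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: one material-type ID per grid cell, and a per-cell permittivity
// array derived from a table of material properties. Only the relative
// permittivity is modeled here, for brevity; the Book's formulation also
// carries permeability and conductivity grids built the same way.
struct Material { float eps_r; };  // relative permittivity

std::vector<float> build_permittivity_grid(const std::vector<int>& material_type,
                                           const std::vector<Material>& table) {
    std::vector<float> eps(material_type.size());
    for (std::size_t cell = 0; cell < material_type.size(); ++cell)
        eps[cell] = table[material_type[cell]].eps_r;  // one value per cell
    return eps;
}
```

In the rejection's reading, material_type corresponds to the claimed material type grid and eps to the claimed permittivity grid, both members of the plurality of 3D arrays mirrored onto the GPU.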
That is, Elsherbeni’s Book indicates that the plurality of 3D arrays used by Elsherbeni’s 3D FDTD simulation include the claimed permittivity grid and material type grid, indicating that in Demir’s modified system extended to perform Elsherbeni’s 3D FDTD simulation, the plurality of grids include the claimed permittivity grid and material type grid. It is noted that although Elsherbeni’s Book is further relied on as an additional reference, this does not constitute a modification, per se, i.e. Elsherbeni indicates that the 3D FDTD formulation described in Elsherbeni’s Book was relied on for implementing the 3D FDTD simulation system, such that the modified version(s) of Demir’s system as discussed in the claims 1 and 2 rejections above would already include the claimed permittivity grid and material type grid.) Response to Arguments Applicant’s arguments, see page 9, filed 3/16/26, with respect to 35 U.S.C. 112 (b) rejections of claims 3, 12, and 18 have been fully considered and are persuasive. The 35 U.S.C. 112 (b) rejections of claims 3, 12, and 18 have been withdrawn. Applicant's arguments filed 3/16/26 have been fully considered but they are not persuasive. Applicant asserts that Demir teaches in every iteration “the CPU-side code maps and unmaps OpenGL pixel buffer objects to CUDA via cudaGraphicsMapResources and cudaGraphicsUnmapResources, so that GPU-computed field data can be passed to the OpenGL rendering pipeline for visualization”. First, the claims do not exclude the transfer of any data between the CPU RAM and the GPU RAM during simulation. Second, Applicant’s assertion does not show that the identified functions correspond to copying the claimed FDTD field data arrays between the CPU RAM and the GPU RAM during simulation, and Applicant’s assertion is not supported by any evidence that the cited functions are transferring data between the CPU RAM and the GPU RAM during simulation. 
That is, Applicant’s assertion is conclusory and lacks evidentiary support, and therefore cannot be considered persuasive. Further contradicting Applicant’s assertion, Demir explicitly indicates that part of the design of the system is to avoid transferring the field data between the CPU RAM and the GPU RAM during simulation, i.e. Demir, section I, paragraph 4, explains “one can copy the field data from the graphics card memory (device memory) to the computer’s main memory (host memory) that is processed by the CPU, process the data to create an image, and copy the image back to the GPU memory to display via OpenGL. It is possible to avoid the back and forth data transfer between the host and device memories, and perform all the processing required for the display on the graphics card by employing CUDA-OpenGL interoperability provided by CUDA.” (emphasis added). That is, while Applicant suggests that the field data is transferred between the CPU RAM and the GPU RAM during simulation in Demir’s system and that this is “Demir’s central contribution”, Demir explicitly contradicts Applicant’s assertion by indicating that CUDA provides the benefit of not transferring the data back and forth during simulation. It is noted that Applicant’s remarks regarding Demir do not acknowledge this cited portion of Demir, or otherwise explain why this portion of Demir does not explicitly contradict Applicant’s assertions. Finally, Applicant’s assertions are also contradicted by Demir’s code listing, as discussed in the above claim 1 and 6 rejections. That is, Demir, e.g. section III C, paragraph 1, listing 6, indicates that when the time step reaches the target number of time steps, iteration is complete and the results are copied back to CPU memory from GPU memory with the fetchResultsFromGpuMemory function. Further, Demir, e.g.
section I, paragraph 4, indicates that by performing the FDTD simulation field calculation and display processing on the GPU, it is possible to avoid the back and forth transfer of data between the host and device memories, i.e. as in listing 6, the updated simulation data is only copied back to the CPU RAM in response to the number of iterations reaching the target number of iterations, indicating completion of the FDTD simulation. Therefore these assertions are not persuasive, because Demir’s disclosed code avoids the back and forth transfer of data between the host and device memories during simulation, as claimed.

It is noted that Applicant further argues that Demir does not disclose the claimed architecture, e.g. pages 14-15 of the remarks, based on Applicant’s disclosure. However, Applicant’s further remarks do not identify any aspect of the claimed architecture differing from Demir’s disclosed architecture, with the exception of the 3D FDTD modification taught in view of Elsherbeni and suggested by Demir, i.e. Applicant’s remarks still fail to show that Demir teaches copying the arrays comprising the FDTD field data from the GPU RAM to the CPU RAM during the simulation processing.

Applicant asserts that there is no proper motivation to combine the references, but appears to acknowledge that Demir’s disclosure does provide motivation to extend the 2D simulation to 3D, making this argument not persuasive. That is, Applicant’s argument goes on to repeat the above-discussed erroneous assertions regarding Demir’s architecture, suggesting it would not be obvious to modify Demir’s architecture into the claimed architecture, but, as discussed above, modifying Demir’s architecture to extend the 2D simulation to 3D results in Applicant’s claimed architecture. Therefore, this argument is not persuasive.
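The loop structure the examiner attributes to Demir's listing 6 (a mirror set of field arrays created on the GPU before the run, all time steps executed in device memory, and results fetched back to host RAM only once the target iteration count is reached) can be sketched as follows. This is a hypothetical numpy stand-in: the "device" is just a second set of arrays, the 1D toy update is not Demir's kernel, and the function names `copy_to_device` and `fetch_results_from_device` are illustrative, not Demir's actual API.

```python
import numpy as np

host_to_device_copies = 0
device_to_host_copies = 0

def copy_to_device(host_arrays):
    """Mirror the host-side field arrays into 'device' memory (done once, before the run)."""
    global host_to_device_copies
    host_to_device_copies += 1
    return {name: a.copy() for name, a in host_arrays.items()}

def update_fields_on_device(dev):
    """Stand-in for the GPU update kernels: advance Ez/Hy in place (1D toy update)."""
    ez, hy = dev["ez"], dev["hy"]
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])      # magnetic-field half step
    ez[1:-1] += 0.5 * (hy[1:-1] - hy[:-2])   # electric-field half step

def fetch_results_from_device(dev):
    """Copy results back to host RAM: only after the final time step."""
    global device_to_host_copies
    device_to_host_copies += 1
    return {name: a.copy() for name, a in dev.items()}

def run_simulation(n_steps=100, n_cells=64):
    host = {"ez": np.zeros(n_cells), "hy": np.zeros(n_cells)}
    host["ez"][n_cells // 2] = 1.0        # point excitation as initial condition
    dev = copy_to_device(host)            # mirror set created before the simulation
    for _ in range(n_steps):              # no host/device traffic inside this loop
        update_fields_on_device(dev)
    return fetch_results_from_device(dev)  # single copy-back at completion

results = run_simulation()
```

The point of the sketch is the transfer count: regardless of how many time steps run, there is exactly one host-to-device copy before the loop and one device-to-host copy after it, which is the behavior the rejection reads onto the claimed mirrored-array architecture.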
Applicant suggests that the specification “expressly identifies the CPU-GPU bottleneck in the 3D FDTD simulation as a ‘long-felt need’ that prior art approaches failed to solve.” Applicant is reminded that the factors required to show a “long-felt need” are not merely the discussion of prior art references which do not achieve performance results equivalent to the disclosed solution. Rather, as discussed in MPEP 716.04, evaluating a long-felt need requires not merely that the inventor recognized a problem that existed for a long time without a solution, but rather “objective evidence that an art recognized problem existed in the art for a long period of time without solution”, including establishing both that “the need must have been a persistent one that was recognized by those of ordinary skill in the art”, which requires determining “the date the problem is identified and articulated, and there is evidence of efforts to solve that problem, not as of the date of the most pertinent prior art references”, and that “the long-felt need must not have been satisfied by another before the invention by the inventor”. Applicant’s citation to the specification does not identify prior art references establishing a particular date on which the problem was both identified and articulated, or evidence of efforts to solve the problem since that date, and does not establish that the need was not satisfied by another. In this case, the analysis is precluded by the lack of any evidence establishing any particular date on which the problem was both identified and articulated, preventing consideration of whether the cited prior art references amount to evidence of efforts to solve the problem since that date, and is also contradicted by the above discussion showing that Demir disclosed the solution of avoiding data transfer between the CPU and GPU memories during simulation prior to Applicant’s filing date. Therefore, this argument cannot be considered persuasive.
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT BADER, whose telephone number is (571) 270-3335. The examiner can normally be reached 11-7, M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT BADER/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Aug 28, 2023
Application Filed
Dec 05, 2025
Non-Final Rejection — §103
Mar 16, 2026
Response Filed
Apr 06, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586334
SYSTEMS AND METHODS FOR RECONSTRUCTING A THREE-DIMENSIONAL OBJECT FROM AN IMAGE
2y 5m to grant Granted Mar 24, 2026
Patent 12586335
SYSTEMS AND METHODS FOR RECONSTRUCTING A THREE-DIMENSIONAL OBJECT FROM AN IMAGE
2y 5m to grant Granted Mar 24, 2026
Patent 12541916
METHOD FOR ASSESSING THE PHYSICALLY BASED SIMULATION QUALITY OF A GLAZED OBJECT
2y 5m to grant Granted Feb 03, 2026
Patent 12536728
SHADOW MAP BASED LATE STAGE REPROJECTION
2y 5m to grant Granted Jan 27, 2026
Patent 12505615
GENERATING THREE-DIMENSIONAL MODELS USING MACHINE LEARNING MODELS
2y 5m to grant Granted Dec 23, 2025
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
44%
Grant Probability
70%
With Interview (+26.4%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 393 resolved cases by this examiner. Grant probability derived from career allow rate.
