Prosecution Insights
Last updated: April 19, 2026
Application No. 17/422,282

System for Machine Learning-Based Acceleration of a Topology Optimization Process

Non-Final OA §103
Filed: Jul 12, 2021
Examiner: WU, NICHOLAS S
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Georgia Tech Research Corporation
OA Round: 3 (Non-Final)
Grant Probability: 47% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 47% (grants 18 of 38 resolved cases; -7.6% vs TC avg)
Interview Lift: +43.1% (strong; allow rate among resolved cases with vs. without an interview)
Avg Prosecution: 3y 9m (typical timeline); 44 applications currently pending
Total Applications: 82 (career history, across all art units)

Statute-Specific Performance

§101: 26.7% (-13.3% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 38 resolved cases
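If the "vs TC avg" figures are percentage-point deltas (an assumption, but one consistent with the displayed numbers), the implied Tech Center average can be recovered per statute:

```python
# Assumes the "vs TC avg" deltas are simple percentage-point differences
# from a per-statute Tech Center average allow rate.
rates  = {"101": 26.7, "103": 52.6, "102": 3.1, "112": 17.4}   # allow rate, %
deltas = {"101": -13.3, "103": 12.6, "102": -36.9, "112": -22.6}
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every implied TC average comes out to 40.0
```

That every statute implies the same 40.0% average suggests the dashboard benchmarks against a single Tech Center estimate rather than per-statute averages.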

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/11/2025 has been entered.

Response to Arguments

Applicant's arguments filed 08/11/2025 have been fully considered but they are not persuasive. Regarding the §103 rejections, applicant's arguments filed with respect to the prior art rejections have been fully considered but they are moot: applicant has amended the claims to recite new combinations of limitations, and applicant's arguments are directed at the amendment. Please see below for new grounds of rejection, necessitated by amendment.

Additionally, on pg. 7 of “Remarks” applicant argues that the examiner used impermissible hindsight to combine the cited prior art; however, the examiner disagrees. See the following: “Applicants may argue that the examiner’s conclusion of obviousness is based on improper hindsight reasoning. However, ‘[a]ny judgment on obviousness is in a sense necessarily a reconstruction based on hindsight reasoning, but so long as it takes into account only knowledge which was within the level of ordinary skill in the art at the time the claimed invention was made and does not include knowledge gleaned only from applicant’s disclosure, such a reconstruction is proper.’” (MPEP 2145 X.A). Therefore, applicant’s arguments are not persuasive.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-9, and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Olhofer, et al., US Pre-Grant Publication 2014/0214370A1 (“Olhofer”) in view of Nguyen, et al., Non-Patent Literature “A computational paradigm for multiresolution topology optimization (MTOP)” (“Nguyen”), and further in view of Shivashankar, et al., US Pre-Grant Publication 2015/0278706A1 (“Shivashankar”) and Banga, et al., Non-Patent Literature “3D Topology Optimization Using Convolutional Neural Networks” (“Banga”).

Regarding claim 1 and analogous claim 7, Olhofer discloses:

A system for accelerating topology optimization of a design of an additive manufacturable cantilever beam, (Olhofer, abstract, “In one aspect, a computer-assisted method for the optimization of the design of physical bodies, such as land, air and see vehicles and robots and/or parts thereof [A system for accelerating topology optimization of a design of an additive manufacturable cantilever beam]”).

the system comprising: a memory having a plurality of modules stored thereon; and a processor for executing the modules, (Olhofer, ⁋87, “A CPU of a processor may perform the calculations [and a processor for executing the modules,] and may include a main memory (RAM, ROM), a control unit, and an arithmetic logic unit (ALU). It may also address a specialized graphic processor, which may provide dedicated memory and processing capabilities for handling the computations needed [the system comprising: a memory having a plurality of modules stored thereon;].”).

the modules comprising: a topology optimization module configured to compute state variables of the topology using a two-scale topology optimization for a number of optimization steps using design variables mapped to a fine-scale mesh and the state variables mapped to a coarse-scale mesh, (Olhofer, ⁋3, “In detail, the invention relates to the design of a physical structure obtained essentially by optimizing the topology respectively the layout [the modules comprising: a topology optimization module configured to]. It can especially be applied in all technical fields in which the optimization of the overall design/structure can be achieved by the adaptation of the design variables [using design variables] based on information about sub-parts of the structure.”, ⁋44, “The optimization can consist of two phases. Phase one can be a dual process comprising a first (or outer) optimization process, e.g. a machine learning or stochastic optimization, generating update strategies to a second (or inner) optimization process. The second process can be a topology optimization which is initialized by the outer process and uses the update strategies provided by the first process for the material redistribution. [using a two-scale topology optimization for a number of optimization steps]”, and ⁋30-31, “In the example of FIG. 1, when a two-dimensional design space is used, the element is a cell of the design space, i.e. a field of the mesh. In this case the sensitivity depends on local information computed by a physics simulation. FIG. 4 shows an example of local information related to a two-dimensional cell of the design space (cf. FIGS. 1 and 2). In this case the local information consists of the displacements u1 to u8 nodes defining the finite element and the design variable xi, where i is the index of the element, i.e. referring to the elemental position in the design space. In this invention local information may refer to (but not restricted to) displacements, strain, stress, energy, heat, flow, pressure or similar variables [compute state variables of the topology] depending on the physics of the problem.”).

wherein the state variables are computed using finite element analysis based on a simulated load and boundary conditions on the objective design and are accumulated with corresponding design variables as history data; (Olhofer, ⁋24, “FIG. 1 shows an example for a topology optimization setup, exemplarily and schematically showing a finite element mesh [wherein the state variables are computed using finite element analysis], where each cell is a design variable, which can, e.g., be either zero or one. Boundary conditions like supports and loads are defined. [based on a simulated load and boundary conditions on the objective design]”, ⁋3, “In detail, the invention relates to the design of a physical structure obtained essentially by optimizing the topology respectively the layout. It can especially be applied in all technical fields in which the optimization of the overall design/structure can be achieved by the adaptation of the design variables based on information about sub-parts of the structure.”; adaptation of design variables is interpreted as historical data because adapting data means changing pre-existing data [and are accumulated with corresponding design variables as history data;]).

a machine learning module comprising a machine learning-based model having a tunable number of hidden layers configured to: (Olhofer, ⁋82, “The functional relation by which an update strategy is represented, based on such local information and can be expressed by a model, e.g. by an artificial neural network [a machine learning module comprising a machine learning-based model having a tunable number of hidden layers configured to:]. Here, the outer process applies an optimization or learning method to improve the weights of the neural network depending on the feedback from the inner loop.”).

determine a predicted sensitivity value related to the design variables using the trained machine learning-based model for each of a second number of optimization steps (NF); (Olhofer, ⁋93, “Update strategy—A functional relation used to compute sensitivity replacing update signals for material redistribution in topology optimization based on available local information. [determine a predicted sensitivity value related to the design variables]”, ⁋82, “The functional relation by which an update strategy is represented, based on such local information and can be expressed by a model, e.g. by an artificial neural network. [using the trained machine learning-based model]”, ⁋75, “Based on the quality function and the constraints the performance of the structure is evaluated (S104). If the structure satisfies the convergence criterion, e.g. when the optimization objective(s)/constraint(s) (S105) is/are met, the outer process is stopped as well (S106).” [for each of a second number of optimization steps (NF);]).

execute an online update of the machine learning-based model using updated history data for a third number of optimization steps (Wu); (Olhofer, ⁋78, “After the phase one process has ended the final resulting update strategy can be utilized in phase two [execute an online update of the machine learning-based model]. In phase two the topology optimization of structures can be optimized by using the generated update strategy for the same or similar objective functions and constraints as has been used in the optimization of the update strategy in the first phase, but with different boundary conditions of the design space.” [using updated history data for a third number of optimization steps (Wu);]).

update the design variables based on the predicted sensitivity value for each optimization step; (Olhofer, ⁋70, “The overall dual process starts with step S100. First, the outer process starts in step S101 with an initial set of update strategies (S102), or a set of update strategies [based on the predicted sensitivity value] which can be chosen randomly. These update strategies are then supplied to and used in the inner topology optimization process for the iterative redistribution of material [update the design variables… for each optimization step;], as illustrated in step S103.”).

and recursively repeat the optimization steps until the updated design variables are within a tolerance of prior updated design variables… (Olhofer, ⁋75, “The resulting structure of the topology optimization is then returned to the outer process (S205). Based on the quality function and the constraints the performance of the structure is evaluated (S104). If the structure satisfies the convergence criterion, e.g. when the optimization objective(s)/constraint(s) (S105) is/are met, the outer process is stopped as well (S106).” [and recursively repeat the optimization steps until the updated design variables are within a tolerance of prior updated design variables…]).

While Olhofer teaches using online machine learning models and finite element analysis to perform topology optimization, Olhofer does not explicitly teach: A system for accelerating topology optimization of a design of an additive manufacturable cantilever beam; design variables mapped to a fine-scale mesh and the state variables mapped to a coarse-scale mesh; execute an initial training of the machine learning-based model using the history data for a first number of optimization steps (Wi); so as to find a structural topology that has the most stiffness under a prescribed load and boundary conditions of the design of the additive manufacturable cantilever beam; wherein the topology optimization module executes the two-scale optimization only prior to and during the first number of optimization steps (W1) that generate the history data for the initial training of the machine learning-based model and during optimization steps for a duration of the third number of steps (Wu) initiated periodically at an update frequency equal to the second number of optimization steps (NF) for generating the updated history data.

Nguyen teaches design variables mapped to a fine-scale mesh and the state variables mapped to a coarse-scale mesh, (Nguyen, pg. 528 col. 1-2 and Figure 2, “In our proposed scheme, the element densities are computed from the design variables by projection functions…To obtain high resolution design, we employ a finer density mesh [design variables mapped to a fine-scale mesh] than the displacement mesh [and the state variables mapped to a coarse-scale mesh,] so that each displacement element consists of a number of density elements (sub-elements). Within each density element, the material density is assumed to be uniform.”).

Olhofer and Nguyen are both in the same field of endeavor (i.e. topology optimization). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Olhofer and Nguyen to teach the above limitation(s). The motivation for doing so is that using a mix of coarse and fine meshes reduces computational cost (cf. Nguyen, pg. 526 col. 1, “In this study, the analysis is performed on a coarser finite element mesh, optimization is performed on a fine design variable mesh, and element densities are defined on a finer mesh. Therefore, the total computational cost is reduced compared to uniformly using fine meshes.”).

While Olhofer in view of Nguyen teaches using machine learning models and fine/coarse meshes to perform topology optimization, the combination does not explicitly teach: A system for accelerating topology optimization of a design of an additive manufacturable cantilever beam; execute an initial training of the machine learning-based model using the history data for a first number of optimization steps (Wi); so as to find a structural topology that has the most stiffness under a prescribed load and boundary conditions of the design of the additive manufacturable cantilever beam; wherein the topology optimization module executes the two-scale optimization only prior to and during the first number of optimization steps (W1) that generate the history data for the initial training of the machine learning-based model and during optimization steps for a duration of the third number of steps (Wu) initiated periodically at an update frequency equal to the second number of optimization steps (NF) for generating the updated history data.

Shivashankar teaches: execute an initial training of the machine learning-based model using the history data for a first number of optimization steps (Wi); (Shivashankar, ⁋28, “The present disclosure is concerned with predictive analytics, and more specifically with updating the functions (e.g., models) used to make predictions. The updating may include performing both online learning and offline learning. As an example, the offline learning may initially generate with a set of training data an offline function; the initial training of the offline model is interpreted as training for a first number of steps [execute an initial training of the machine learning-based model using the history data for a first number of optimization steps (Wi);], which may be used to bootstrap an online function.”).

wherein the topology optimization module executes the two-scale optimization only prior to and during the first number of optimization steps (W1) that generate the history data for the initial training of the machine learning-based model (Shivashankar, ⁋28, “The present disclosure is concerned with predictive analytics, and more specifically with updating the functions (e.g., models) used to make predictions. The updating may include performing both online learning and offline learning. As an example, the offline learning may initially generate with a set of training data an offline function; the initial training of the offline model is interpreted as training for a first number of steps [wherein the topology optimization module executes the two-scale optimization only prior to and during the first number of optimization steps (W1) that generate the history data for the initial training of the machine learning-based model], which may be used to bootstrap an online function.”).

and during optimization steps for a duration of the third number of steps (Wu) initiated periodically at an update frequency equal to the second number of optimization steps (NF) for generating the updated history data. (Shivashankar, ⁋42-43, “FIG. 3 further shows that the online function may be updated periodically using offline learning [and during optimization steps for a duration of the third number of steps (Wu) initiated periodically]. The update may be performed at time t=t1, after a sufficient number of samples of additional training data have been collected…While the offline learning occurs between t1 and t2, online learning and sampling of training data may be occurring as well, such as using the function λ*θ(Knew). When the offline learning is complete at t2, the online function may be updated by setting it equal to the updated offline function.”; setting the online function equal to the offline function is interpreted as copying at the update frequency, as a snapshot is a copy of a model [at an update frequency equal to the second number of optimization steps (NF) for generating the updated history data.]).

Olhofer, in view of Nguyen, and Shivashankar are in the same field of endeavor (i.e. topology optimization). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Olhofer, in view of Nguyen, with Shivashankar to teach the above limitation(s). The motivation for doing so is that incorporating offline and online training together improves the latency and accuracy of models (cf. Shivashankar, ⁋6, “The latency and accuracy of predictive analytics may be improved by combining offline learning and online learning.”).

While Olhofer in view of Nguyen and Shivashankar teaches using online machine learning models with periodic model updates for topology optimization, the combination does not explicitly teach: A system for accelerating topology optimization of a design of an additive manufacturable cantilever beam; so as to find a structural topology that has the most stiffness under a prescribed load and boundary conditions of the design of the additive manufacturable cantilever beam.

Banga teaches: A system for accelerating topology optimization of a design of an additive manufacturable cantilever beam, (Banga, pg. 6, “Four typical real-world scenarios, as depicted in Figure 3 below, were used to sample the displacement boundary constraint cases: 1. Cantilever Beam [A system for accelerating topology optimization of a design of an additive manufacturable cantilever beam,]”).

so as to find a structural topology that has the most stiffness under a prescribed load and boundary conditions of the design of the additive manufacturable cantilever beam; (Banga, pg. 3, “We aim to learn the solutions to the problem of minimum compliance topology optimization and limit our discussions to the solutions obtained using the Solid Isotropic Material with Penalization (SIMP) method. In SIMP, the objective is to find the material density distribution of physical densities x such that the strain energy is minimized under the prescribed support and loading conditions [under a prescribed load and boundary conditions of the design]… Minimize the compliance of a mechanical structure: f(x̄) = u(x̄)ᵀK(x̄)u(x̄), x̄ ∈ ℝⁿ”; minimizing the compliance of a structure is interpreted as maximizing the stiffness, as compliance and stiffness have an inverse relationship (i.e. so as to find a structural topology that has the most stiffness); and pg. 6, “Four typical real-world scenarios, as depicted in Figure 3 below, were used to sample the displacement boundary constraint cases: 1. Cantilever Beam [of the additive manufacturable cantilever beam;]”).

Olhofer, in view of Nguyen and Shivashankar, and Banga are in the same field of endeavor (i.e. topology optimization). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Olhofer, in view of Nguyen and Shivashankar, with Banga to teach the above limitation(s). The motivation for doing so is that using a CNN in topology optimization improves the efficiency of finding a compliant output design (cf. Banga, pg. 1, “In this work, we aim to complement traditional topology optimization with a data-driven approach as a way to accelerate the search for the optimized structure. Our approach rests on the theoretical ideal that with a sufficiently broad set of data that spans variations in the loads, boundary conditions, materials, objectives and design domains, a regressor with enough degrees of freedom can be trained to establish a mapping from the input to the optimized structures. Clearly, the space of variations that must be covered in this way is infinitely large. Nonetheless, in this work, we aim to establish the foundation for such an approach with simplifications that enable various important data sources to be parametrically studied. To this end, we use a 3D convolutional neural network (CNN) that can take as input intermediate solutions to the material distribution and predict the final structure.”).

Regarding claim 2 and analogous claim 8, Olhofer in view of Nguyen, Shivashankar, and Banga teaches the system of claim 1. Nguyen further teaches:

the modules further comprising: a fine-scale mapping module configured to define the fine-scale mesh using hexahedral elements to represent an objective topology of the design; (Nguyen, pg. 529 col. 1-2 and Figure 3c, “the MTOP approach can also be applied to other element types…For the 3D case, Fig. 3c shows 125 density elements per B8 element (B8/n125) [the modules further comprising: a fine-scale mapping module configured to define the fine-scale mesh using hexahedral elements to represent an objective topology of the design;]”).

and a course-scale mapping module configured to define the course-scale mesh of the hexahedral elements, wherein the fine-scale mesh is completely embedded in the course-scale mesh. (Nguyen, pg. 528 col. 2 and Figure 2, “To obtain high resolution design, we employ a finer density mesh than the displacement mesh so that each displacement element consists of a number of density elements (sub-elements) [and a course-scale mapping module configured to define the course-scale mesh of the hexahedral elements,]. Within each density element, the material density is assumed to be uniform.”; Figure 4 shows that the fine-scale design variable mesh is embedded in the coarser displacement mesh [wherein the fine-scale mesh is completely embedded in the course-scale mesh.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Nguyen with the teachings of Olhofer, Shivashankar, and Banga for the same reasons disclosed in claim 1.

Regarding claim 3 and analogous claim 9, Olhofer in view of Nguyen, Shivashankar, and Banga teaches the system of claim 2. Olhofer further teaches:

wherein design variables on the fine-scale mesh are updated every optimization step (Olhofer, ⁋72-73, “The inner process is started in step S201 by starting a topology optimization on an initial design, e.g. an initial design of a product (S202). In particular, a structural analysis is performed using in particular a finite element analysis is performed in step S203. The inner process stops (S204) when a stop criterion is met, for example when a maximum number of iterations is reached or the amount of redistributed material is less than a specified threshold.”; this is interpreted as the design variables being updated every optimization step, as the topology optimization process runs until a stop criterion is met [wherein design variables on the fine-scale mesh are updated every optimization step]).

and state variables are computed on the fine-scale mesh only when collecting history data for training the machine learning-based model. (Olhofer, ⁋31, “In this invention local information may refer to (but not restricted to) displacements, strain, stress, energy, heat, flow, pressure or similar variables depending on the physics of the problem. [state variables]”, ⁋35, “In an automatic computational process functional mapping from local information to an update signal is generated, which can be used in a topology optimization for a beforehand specified quality functions and constraints. In the following, a functional mapping from local information to the update signal is referred to as ‘update strategy’…After generation of the update strategy it may be reused for the topology optimization of other structures which are to be optimized subject to the same or similar quality function and constraints specified before the optimization, but different boundary conditions.”; reusing update strategies for other structures is interpreted as computing state variables when collecting history data, as reusing implies that new data has arrived [and state variables are computed on the fine-scale mesh only when collecting history data for training the machine learning-based model.]).

Regarding claim 5 and analogous claim 11, Olhofer in view of Nguyen, Shivashankar, and Banga teaches the system of claim 1. Olhofer further teaches:

wherein the state variables include at least one of: displacement of coarse-scale mesh elements, strain on coarse-scale mesh elements, and stress on coarse-scale mesh elements. (Olhofer, ⁋30, “In the example of FIG. 1, when a two-dimensional design space is used, the element is a cell of the design space, i.e. a field of the mesh. In this case the sensitivity depends on local information computed by a physics simulation. FIG. 4 shows an example of local information related to a two-dimensional cell of the design space (cf. FIGS. 1 and 2). In this case the local information consists of the displacements [wherein the state variables include at least one of: displacement of coarse-scale mesh elements,] u1 to u8 nodes defining the finite element and the design variable xi, where i is the index of the element, i.e. referring to the elemental position in the design space.”, ⁋31, “In this invention local information may refer to (but not restricted to) displacements, strain, stress [strain on coarse-scale mesh elements, and stress on coarse-scale mesh elements.], energy, heat, flow, pressure or similar variables depending on the physics of the problem.”).

Regarding claim 6 and analogous claim 12, Olhofer in view of Nguyen, Shivashankar, and Banga teaches the system of claim 1. Nguyen further teaches:

wherein the state variables are computed using strain vectors at all integration Gauss points of each coarse-scale mesh element. (Nguyen, pg. 527 col. 1-2, “The stiffness matrix of each element in (5) is computed by integrating the stiffness integrand contribution over the displacement element domain…Numerical quadrature, such as Gaussian quadrature, is commonly reduced to the evaluation and summation of the stiffness integrand at specific Gauss points [at all integration Gauss points of each coarse-scale mesh element.]”; stiffness is interpreted as strain vectors, as stiffness characterizes resistance to deformation, or strain [wherein the state variables are computed using strain vectors]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Nguyen with the teachings of Olhofer, Shivashankar, and Banga for the same reasons disclosed in claim 1.

Claims 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Olhofer, et al., US Pre-Grant Publication 2014/0214370A1 (“Olhofer”) in view of Nguyen, et al., Non-Patent Literature “A computational paradigm for multiresolution topology optimization (MTOP)” (“Nguyen”) and further in view of Shivashankar, et al., US Pre-Grant Publication 2015/0278706A1 (“Shivashankar”), Banga, et al., Non-Patent Literature “3D Topology Optimization Using Convolutional Neural Networks” (“Banga”), and Bourdain, Non-Patent Literature “Filters in topology optimization” (“Bourdain”).

Regarding claim 4 and analogous claim 10, Olhofer in view of Nguyen, Shivashankar, and Banga teaches the system of claim 1. While Olhofer in view of Nguyen, Shivashankar, and Banga teaches using machine learning to perform topology optimization, the combination does not explicitly teach wherein the topology optimization module is further configured to filter the design variables using a filter matrix (P) for smoothing the distribution.

Bourdain teaches wherein the topology optimization module is further configured to filter the design variables using a filter matrix (P) for smoothing the distribution. (Bourdain, pg. 2145, “In the standard framework of material distribution methods for topology design, we work, throughout this paper, in a fixed domain Ω ⊂ R², and the optimal design is generated referring to this ‘ground-structure’…The filtering operation is achieved by means of the convolution product of the filter [wherein the topology optimization module is further configured to filter the design variables using a filter matrix (P)] and the density, (F ∗ ρ)(x) = ∫_{R²} F(x − y) ρ(y) dy. Loosely speaking, we replace at each point the density field by a weighted average of its values. One consequence of this operation is that the filtered density is then a smooth and differentiable function, among other properties [see e.g. Reference 18, IV 6, p. 66]. Remark that this definition requires to extend the density field ρ to the whole space R². [for smoothing the distribution.]”).

Olhofer, in view of Nguyen, Shivashankar, and Banga, and Bourdain are in the same field of endeavor (i.e. topology optimization). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Olhofer, in view of Nguyen, Shivashankar, and Banga, with Bourdain to teach the above limitation(s). The motivation for doing so is that filtering ensures regularization of the variables (cf. Bourdain, pg. 2144, “The use of filters in numerical methods in order to ensure regularity or existence of solutions to a problem has been used for many years in various domains of applications. The basic idea is to replace a (possibly) non-regular function by its regularization obtained by the convolution with a smooth function.”).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Czinger, et al., US20170343984A1 discloses designing additive manufacturing parts for vehicles using a design module that generates designs based on required design parameters.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S WU whose telephone number is (571) 270-0939. The examiner can normally be reached Monday - Friday, 8:00 am - 4:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle Bechtold, can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.S.W./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148
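The claimed schedule being mapped in this rejection (an initial window of Wi exact two-scale steps that seed a training history, surrogate-predicted sensitivities for NF steps at a time, periodic Wu-step windows of exact analysis that refresh the model, and a tolerance-based stop) can be sketched as a toy loop. Everything here is an illustrative assumption rather than the application's actual method: a least-squares fit stands in for the claimed neural-network model, a synthetic linear function stands in for the FEA sensitivity analysis, and a simple neighborhood-average matrix stands in for the filter matrix P of claims 4 and 10.

```python
import numpy as np

n = 64                       # fine-scale design variables (densities in [0, 1])
Wi, NF, Wu = 20, 10, 5       # initial-training / prediction / update windows

def exact_sensitivity(x):
    # Stand-in for the expensive coarse-mesh FEA sensitivity analysis.
    return -2.0 * x + 0.5

def smooth(s):
    # Filter matrix P: a Bourdin-style weighted neighborhood average.
    i, j = np.indices((n, n))
    P = np.maximum(0, 2 - np.abs(i - j)).astype(float)
    P /= P.sum(axis=1, keepdims=True)   # rows sum to 1 (weighted average)
    return P @ s

def train(X, S):
    # Least-squares surrogate from design variables to sensitivities
    # (a stand-in for the claimed tunable-hidden-layer model).
    A = np.column_stack([np.vstack(X), np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, np.vstack(S), rcond=None)
    return coef

history_X, history_S = [], []
x = np.full(n, 0.5)          # uniform initial density
for step in range(200):
    warmup = step < Wi
    updating = not warmup and (step - Wi) % (NF + Wu) >= NF
    if warmup or updating:
        s = exact_sensitivity(x)                  # exact two-scale step
        history_X.append(x.copy()); history_S.append(s.copy())
        coef = train(history_X, history_S)        # (re)train on history
    else:
        s = np.append(x, 1.0) @ coef              # surrogate prediction
    x_new = np.clip(x - 0.02 * smooth(s), 0.0, 1.0)
    if np.max(np.abs(x_new - x)) < 1e-6:          # tolerance stop criterion
        break
    x = x_new
```

The point of the schedule is that the expensive exact_sensitivity call runs only during the warm-up and the periodic Wu-step update windows; in between, each step costs only a matrix-vector product through the surrogate.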

Prosecution Timeline

Jul 12, 2021: Application Filed
Oct 17, 2024: Non-Final Rejection — §103
Jan 29, 2025: Response Filed
Apr 29, 2025: Final Rejection — §103
Aug 11, 2025: Request for Continued Examination
Aug 20, 2025: Response after Non-Final Action
Oct 31, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12488244: APPARATUS AND METHOD FOR DATA GENERATION FOR USER ENGAGEMENT (granted Dec 02, 2025; 2y 5m to grant)
Patent 12423576: METHOD AND APPARATUS FOR UPDATING PARAMETER OF MULTI-TASK MODEL, AND STORAGE MEDIUM (granted Sep 23, 2025; 2y 5m to grant)
Patent 12361280: METHOD AND DEVICE FOR TRAINING A MACHINE LEARNING ROUTINE FOR CONTROLLING A TECHNICAL SYSTEM (granted Jul 15, 2025; 2y 5m to grant)
Patent 12354017: ALIGNING KNOWLEDGE GRAPHS USING SUBGRAPH TYPING (granted Jul 08, 2025; 2y 5m to grant)
Patent 12333425: HYBRID GRAPH NEURAL NETWORK (granted Jun 17, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 47%
With Interview: 90% (+43.1%)
Median Time to Grant: 3y 9m
PTA Risk: High
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
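The headline projections appear to follow directly from the examiner's career numbers. A minimal sketch, assuming the interview lift is an additive percentage-point adjustment (an assumption, but one that matches the displayed figures):

```python
# Hypothetical reconstruction of the dashboard arithmetic; the additive
# percentage-point interview lift is an assumption, not a documented formula.
granted, resolved = 18, 38
career_allow_rate = granted / resolved              # ~0.4737, shown as 47%
interview_lift = 0.431                              # +43.1 percentage points
with_interview = min(career_allow_rate + interview_lift, 1.0)
print(round(100 * career_allow_rate), round(100 * with_interview))  # 47 90
```

Under this reading, "90% with interview" is simply the 47% career allow rate plus the 43.1-point interview lift, capped at 100%.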
