Prosecution Insights
Last updated: April 19, 2026
Application No. 18/459,803

PREDICTING LOCAL LAYOUT EFFECTS USING A VARIATIONAL AUTOENCODER WITH INTEGRATED REGRESSION AND CLASSIFICATION NETWORK

Non-Final OA §101
Filed: Sep 01, 2023
Examiner: LE, HUNG D
Art Unit: 2161
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 90% (969 granted of 1073 resolved; +35.3% vs TC avg), above average
Interview Lift: +6.4% (moderate), based on resolved cases with interview
Avg Prosecution (typical timeline): 2y 6m
Currently Pending: 33
Total Applications (career history): 1,106 across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1073 resolved cases

Office Action

§101
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This Office Action is in response to the application filed on 09/01/2023. Claims 1-20 are pending.

Information Disclosure Statement

2. The information disclosure statements (IDS) filed on 02/09/2026, 10/13/2025, 01/07/2025, and 09/01/2023 comply with the provisions of M.P.E.P. 609. The examiner has considered them.

Examiner's Note

3. A Variational Autoencoder (VAE) (according to Google): "A Variational Autoencoder (VAE) is a generative deep learning model that learns to compress input data into a continuous latent space and then reconstructs it, allowing for the generation of new, similar data samples. Unlike standard autoencoders, VAEs use probabilistic mapping (mean and variance) to create a smooth, organized latent space, making them ideal for image generation, denoising, and representation learning."

Integrated regression (or joint modeling) (according to Google): "Integrated regression (or joint modeling) is a statistical approach that analyzes multiple related, often longitudinal, processes simultaneously, such as longitudinal markers and time-to-event data. It accounts for correlations between these processes, enhancing efficiency and addressing issues like insufficient sample sizes that arise when analyzing them separately, while requiring stronger modeling assumptions."

A classification network (according to Google): "A classification network is a type of artificial neural network (ANN) specifically designed to categorize input data into predefined classes or categories based on patterns learned from training data. It is a supervised learning model, meaning it requires labeled data to learn the relationship between input features and target categories."

Interpolation training (according to Google): "Interpolation training in machine learning is the process of training a model to perfectly fit or 'interpolate' all training data points, resulting in near-zero training loss. It focuses on creating a continuous function that connects known data points, allowing the model to accurately predict values within the range of its training data."

An integrated circuit (according to Google): "An integrated circuit (IC), or monolithic integrated circuit, is a set of electronic circuits—including millions or billions of tiny transistors, resistors, and capacitors—etched onto a single small chip of semiconductor material, usually silicon. ICs are faster, smaller, more energy-efficient, and cheaper than components made separately, forming the backbone of modern electronics from calculators to supercomputers."

Local layout effect (according to Google): "Local layout effect (LLE), also known as layout-dependent effect (LDE), is a phenomenon in advanced semiconductor manufacturing (typically 7nm, 5nm, 3nm, and below) where the electrical behavior of a transistor or standard cell—specifically its threshold voltage, mobility, and current—is influenced by its immediate physical surroundings and geometry. As features shrink, the proximity to neighboring cells, stress from shallow trench isolation (STI), and the length of diffusion areas create non-uniformities that alter device performance compared to ideal models, resulting in potential timing violations or increased leakage."

Velculescu et al., US 20250131982:

[Velculescu: Paragraphs 69 and 74 ("Samples in the training set are used to identify the bins that are most differentially mutated between cancer and non-cancer samples. In the training set, sequence data from all cancer samples and all non-cancer samples are combined,")]

[Velculescu: Paragraph 108 ("An analysis can identify a variant inferred from sequence data to identify sequence variants based on probabilistic modeling, statistical modeling, mechanistic modeling, network modeling, or statistical inferences. Non-limiting examples of analysis methods include principal component analysis, autoencoders, singular value decomposition, Fourier bases, wavelets, discriminant analysis, regression, support vector machines, tree-based methods, networks, matrix factorization, and clustering. Non-limiting examples of variants include a germline variation or a somatic mutation.")]

[Velculescu: Paragraph 110 ("performing sequencing analysis, measuring sets of values representative of classes of molecules, identifying sets of features and feature vectors from assay data, processing feature vectors using a machine learning model to obtain output classifications, and training a machine learning model (e.g., iteratively searching for optimal values of parameters of the machine learning model).")]

[Velculescu: Paragraphs 120-121 ("The resulting training sets are provided to machine learning unit, such as a neural network or a support vector machine. Using the training set, the machine learning unit may generate a model to classify the sample according to the cfDNA mutation profiles, frequency of mutations and/or fragmentation profile")]

[Velculescu: Paragraph 128 ("the computer processing of a machine learning technique can include logistic regression, multiple linear regression (MLR), dimension reduction, partial least squares (PLS) regression, principal component regression, autoencoders, variational autoencoders, singular value decomposition, Fourier bases, wavelets, discriminant analysis, support vector machine, decision tree, classification and regression trees (CART), tree-based methods, random forest, gradient boost tree, logistic regression, matrix factorization, multidimensional scaling (MDS), dimensionality reduction methods, t-distributed stochastic neighbor embedding (t-SNE), multilayer perceptron (MLP), network clustering, neuro-fuzzy, neural networks (shallow and deep), artificial neural networks, Pearson product-moment correlation coefficient, Spearman's rank correlation coefficient, Kendall tau rank correlation coefficient, or any combination thereof.")]

Chidlovskii et al., US 11,373,096:

[Chidlovskii: Abstract ("training a variational autoencoder to minimize a reconstruction loss of the signal strength values of the training data, where the variational autoencoder includes encoder neural networks and decoder neural networks; and training a classification neural network to minimize a prediction loss on the labeled data, where the classification neural network generates a predicted location based on the latent variable, and where the encoder neural networks and the classification neural network form the predictor")]

[Chidlovskii: Column 1, lines 59-67 through column 2, lines 1-8 ("Semi-supervised machine learning includes training with labeled and unlabeled data and is used in machine learning (ML). Machine learning can be used to reduce offline Wi-Fi data collection. In semi-supervised machine learning, a system is provided with a training set {(x.sub.i, y.sub.i), i=1, . . . n} including tuples of data points x.sub.i and annotations, also called labels, y.sub.i, and is additionally provided with unlabeled data points x.sub.i. The system infers the best functional relationship x.fwdarw.y minimizing a prediction error. This problem can also be described as finding a mapping function F: x.fwdarw.z in a space of a latent variable z, which is smaller than the space of the training set.")]

[Chidlovskii: Column 2, lines 21-29, and column 4, lines 13-23 ("Variational autoencoders, VAEs, are a class of stochastic generative models implemented as deep neural networks. VAEs simultaneously train a probabilistic encoder neural network and a decoder neural network. VAEs involve drawing a sample of a latent variable z, from a uniform prior distribution p(z). The space of latent variables may be of a smaller dimension that the input and output of the VAEs. The sample z is put through the decoder neural network to reconstruct an original input")]

Claim Rejections - 35 USC § 101

4. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

5. Claims 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claims 15-20 recite a "computer program product" which, under the broadest reasonable interpretation, would include non-statutory subject matter; the instant specification fails to exclude a "computer program product" comprising a signal. That is, language such as "physical", "tangible", and "storage" does not make an otherwise non-statutory computer-readable medium claim statutory, since a data signal per se is considered a physical and tangible medium that temporarily stores data (it is a transitory storage medium, but it is still a storage medium).

Allowable Subject Matter

6. Claims 1-14 are in condition for allowance.

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hung D. Le, whose telephone number is 571-270-1404. The examiner can normally be reached Monday to Friday, 9:00 A.M. to 5:00 P.M. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Apu Mofiz, can be reached at 571-272-4080. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, contact 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Hung Le
03/18/2026
/HUNG D LE/
Primary Examiner, Art Unit 2161
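The Examiner's Note defines the building blocks of the claimed architecture: a VAE whose latent code also feeds regression and classification heads. As a rough illustration only, here is a minimal NumPy sketch of a forward pass through such a network. All dimensions, weights, and target semantics (e.g. a threshold-voltage shift as the regression target) are invented for the example and are not taken from the application's claims.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    return x @ w + b

# Toy dimensions: a layout sample flattened to 16 features, a
# 4-dimensional latent space, one regression output (e.g. a
# hypothetical threshold-voltage shift) and 2 classes (e.g. pass/fail).
D_IN, D_LAT, D_REG, D_CLS = 16, 4, 1, 2

# Randomly initialised weights stand in for trained parameters.
W_mu, b_mu = rng.normal(size=(D_IN, D_LAT)), np.zeros(D_LAT)
W_lv, b_lv = rng.normal(size=(D_IN, D_LAT)), np.zeros(D_LAT)
W_dec, b_dec = rng.normal(size=(D_LAT, D_IN)), np.zeros(D_IN)
W_reg, b_reg = rng.normal(size=(D_LAT, D_REG)), np.zeros(D_REG)
W_cls, b_cls = rng.normal(size=(D_LAT, D_CLS)), np.zeros(D_CLS)

def forward(x):
    # Encoder: probabilistic mapping to mean and log-variance.
    mu, log_var = dense(x, W_mu, b_mu), dense(x, W_lv, b_lv)
    # Reparameterization trick: z = mu + sigma * eps.
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    # Decoder reconstructs the input from the latent sample.
    recon = dense(z, W_dec, b_dec)
    # Integrated heads: regression and softmax classification on z.
    reg_out = dense(z, W_reg, b_reg)
    logits = dense(z, W_cls, b_cls)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return recon, reg_out, probs, mu, log_var

x = rng.normal(size=(3, D_IN))  # batch of 3 layout samples
recon, reg_out, probs, mu, log_var = forward(x)
# Per-sample KL term, used alongside reconstruction, regression, and
# classification losses when the three objectives are trained jointly.
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)
```

Training such a model would minimize a weighted sum of the reconstruction loss, the KL term, the regression loss, and the classification loss; the sketch above shows only the shared-latent-space structure that makes the heads "integrated".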

Prosecution Timeline

Sep 01, 2023
Application Filed
Mar 18, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596684
SYSTEMS AND METHODS FOR SEARCHING DEDUPLICATED DATA
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12596724
SYSTEMS AND METHODS FOR USE IN REPLICATING DATA
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12596736
SYSTEMS AND METHODS FOR USING PROMPT DISSECTION FOR LARGE LANGUAGE MODELS
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12591489
POINT-IN-TIME DATA COPY IN A DISTRIBUTED SYSTEM
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12585625
SYSTEM AND METHOD FOR IMPLEMENTING A DATA QUALITY FRAMEWORK AND ENGINE
Granted Mar 24, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 97% (+6.4%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 1073 resolved cases by this examiner. Grant probability derived from career allow rate.
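The headline projections follow directly from the raw counts shown on this page. As a quick back-of-the-envelope check (assuming, as the displayed numbers suggest, that the interview lift is simply added to the career allow rate):

```python
# Raw counts from the examiner's career data shown on this page.
granted, resolved = 969, 1073

allow_rate = granted / resolved * 100   # career allow rate, in percent
interview_lift = 6.4                    # percentage points, from the page

# Assumed additive model for the with-interview figure.
with_interview = allow_rate + interview_lift

print(round(allow_rate, 1))      # 90.3, displayed as 90%
print(round(with_interview, 1))  # 96.7, displayed as 97%
```

The displayed 90% and 97% are consistent with these rounded values; the dashboard's exact derivation is not documented here.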
