Prosecution Insights
Last updated: April 19, 2026
Application No. 18/255,142

PROCESS FOR TRAINING A FIRST ARTIFICIAL NEURAL NETWORK STRUCTURE, COMPUTER SYSTEM, COMPUTER PROGRAM AND COMPUTER-READABLE MEDIUM

Non-Final OA: §101, §103
Filed: May 31, 2023
Examiner: MRABI, HASSAN
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (285 granted / 363 resolved; +23.5% vs TC avg, above average)
Interview Lift: +32.4% among resolved cases with an interview (strong)
Typical Timeline: 2y 6m average prosecution; 19 applications currently pending
Career History: 382 total applications across all art units
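The headline figures above follow directly from the granted/resolved counts. A minimal sketch of that arithmetic (treating "+23.5% vs TC avg" as a percentage-point difference is an assumption; the counts themselves come from the page):

```python
# Derive the dashboard's examiner statistics from the raw counts shown above.
granted = 285
resolved = 363

allow_rate = granted / resolved          # career allowance rate
delta_vs_tc = 23.5                       # percentage points above TC average (assumed meaning)
implied_tc_avg = allow_rate * 100 - delta_vs_tc

print(f"Career allow rate: {allow_rate:.1%}")         # ~78.5%, displayed as 78%
print(f"Implied TC average: {implied_tc_avg:.1f}%")   # ~55.0%
```

Note the displayed 78% is the rounded-down form of 285/363 ≈ 78.5%.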

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 363 resolved cases.

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is sent in response to Applicant's communication received on 05/31/2023 for application number 18/255,142. The Office hereby acknowledges receipt of the following, placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, and Claims. Claims 1-12 and 14 are presented for examination. Claim 13 is cancelled.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 12 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. Regarding claim 12, the claim recites a system. Paragraphs [0008] and [0025] of the instant published specification provide evidence that the claimed system is software per se: the system is adapted for implementing a process without providing or describing the structure of the system. The claim does not define structural and functional descriptive material used in an interrelationship between the computer system and hardware such as a memory or processor. Descriptive material can be characterized as either "functional descriptive material" or "nonfunctional descriptive material." Both types of "descriptive material" are nonstatutory when claimed as descriptive material per se, 33 F.3d at 1360, 31 USPQ2d at 1759. When functional descriptive material is recorded on some computer-readable medium, it becomes structurally and functionally interrelated to the medium and will be statutory in most cases, since use of technology permits the function of the descriptive material to be realized.
Compare In re Lowry, 32 F.3d 1579, 1583-84, 32 USPQ2d 1031, 1035 (Fed. Cir. 1994).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-12 and 14 are rejected under AIA 35 U.S.C. 103(a) as being unpatentable over Li et al., US Patent Application Publication US 20220084204 A1 (hereinafter Li), in view of Liu Ming-Yu et al., Foreign Patent Application Publication CN 109196526 A (hereinafter Liu).

Regarding claim 1, Li teaches a process for improving a first artificial neural network structure (1) ([0120], wherein Li describes improving a neural network and determining the accuracy of the generator network of a GAN), the process comprising: classifying data samples into different classes (4) by the first artificial neural network structure (1), whereby at least some of the classes (4) are unsupervised classes (6), which are generated and/or filled by unsupervised learning (FIGS. 3A-3B, 4, 6, [0067], [0069], [0089], [0092], wherein Li describes a neural network for classifying data samples in an unsupervised manner by generating labels and annotations for images, wherein a label indicates a classification for each pixel within the image); and wherein the generated artificial candidates (7) are labelled and/or annotated in supervised learning for labelling and/or annotating the said unsupervised class (6) ([0078], [0103], [0489], wherein Li incorporates an artificial network to generate labels and annotation classes for images; [0076], wherein Li incorporates a GAN, a class of artificial intelligence system that uses two types of artificial neural networks contesting with each other. A GAN includes a first type of artificial neural network, referred to as a generator network, that generates candidates, and a second type, referred to as a discriminator network, that evaluates generated candidates. A generator network learns to map from a latent space to a particular data distribution of interest (a data distribution of changes to input images that are indistinguishable from photographs to human eyes), while a discriminator network discriminates between instances from a training dataset and candidates produced by the generator network. In at least one embodiment, a GAN can have a generator network and two discriminator networks: a first discriminator network evaluates synthetic images generated by the generator network, and a second discriminator network evaluates synthetic images and corresponding labels generated by the generator network).

Li does not teach training a second artificial neural network structure (2) to generate artificial candidates (7) belonging to at least one of the unsupervised classes (6).

However, in the analogous art of training artificial neural network structures, Liu teaches training a second artificial neural network structure (2) to generate artificial candidates (7) belonging to at least one of the unsupervised classes (6) (claim 1 text, Abstract, page 2, paragraph 6, page 3, paragraph 3, page 10, paragraphs 4-6, wherein Liu teaches training first and second neural networks for generating multi-mode digital images in a supervised manner). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Li with Liu by incorporating Liu's method of training a second artificial neural network structure (2) to generate artificial candidates (7) belonging to at least one of the unsupervised classes (6) into Li's method of classifying data samples into different classes (4) by the first artificial neural network structure (1), whereby at least some of the classes (4) are unsupervised classes (6) generated and/or filled by unsupervised learning, for the purpose of generating advanced features of the digital images and other layers in the first and second neural networks, wherein the advanced features can be the type and configuration description of the object in the image, and the low-level characteristic can be the object edge determined based on the type and configuration of the object (Liu: page 3, paragraph 1).

Regarding claim 2, Li as modified by Liu teaches wherein the process is a process for image classification whereby the data samples are images, especially taken by at least one surveillance camera ([0143-0148], [0168], wherein Li's data samples are images captured by a camera).

Regarding claim 3, Li as modified by Liu teaches wherein the first artificial network structure (1) is trained with the labelled and/or annotated artificial candidates (7) in order to label and/or annotate the said supervised class (6) (FIGS. 3A-3B, 4, 6, [0056], [0067], [0070], [0078], [0103], [0107], wherein Li describes training a neural network with labelled updated images).

Regarding claim 4, Li as modified by Liu teaches wherein the first artificial neural network structure (1) is a convolutional artificial neural network and/or that the data samples are images ([0006-0008], [0067], [0072-0074], wherein Li's data samples are images).

Regarding claim 5, Li as modified by Liu teaches wherein a part of the classes (4) are supervised classes (5), which are generated and/or filled by supervised learning (FIGS. 3A-3B, 4, 6, [0056], [0067], [0070], [0078], [0103], [0107], wherein Li describes generating labels by supervised learning).

Regarding claim 6, Li as modified by Liu teaches wherein the second artificial neural network structure (2) is trained by improving a loss-function of the probability density function of the respective unsupervised class (6) (page 11, paragraphs 5-9, wherein Liu describes measuring the loss function between images and maximizing probability density).

Regarding claim 7, Li as modified by Liu teaches wherein the second artificial neural network structure (2) comprises a generative artificial neural network ([0006], [0067], [0072-0073], [0078], wherein Li describes generating a training dataset using a generative adversarial network (GAN) that generates synthetic images and an associated trained neural network that generates labels for the synthetic images generated by the GAN).

Regarding claim 8, Li as modified by Liu teaches wherein the second artificial neural network structure (2) comprises a discriminative artificial neural network, whereby the generative and the discriminative artificial neural networks form a generative adversarial network (GAN) ([0076]; see the GAN description quoted above for claim 1).

Regarding claim 9, Li as modified by Liu teaches wherein the first artificial neural network structure (1) is realized as a discriminative artificial neural network, whereby the generative and the discriminative artificial neural networks form a generative adversarial network (GAN) ([0076]; see the GAN description quoted above for claim 1).

Regarding claim 10, Li as modified by Liu teaches wherein the generative artificial neural network is a variational autoencoder (VAE) ([0073], wherein Li teaches that a generative model other than a GAN is used to generate a synthetic version of an input image and to generate one or more labels of objects in the synthetic version. In at least one embodiment, the generative network used is a normalizing flow. In at least one embodiment, the generative model used is a latent Dirichlet allocation, a naive Bayes network, a Gaussian mixture model, a restricted Boltzmann machine, or a variational autoencoder. In at least one embodiment, the generative network used is a Style Generative Adversarial Network (StyleGAN), an extension of the GAN architecture that gives control over disentangled style properties of generated images).

Regarding claim 11, Li as modified by Liu teaches wherein, in the unsupervised class (6), only the artificial candidates (7) are labelled and/or annotated ([0078], [0103], [0489], wherein Li incorporates an artificial network to generate labels and annotation classes for images; [0076]; see the GAN description quoted above for claim 1).

Regarding claim 12, Li teaches a computer system (Abstract). Claim 12 is similar in scope to claim 1; therefore the claim is rejected under a similar rationale.

Regarding claim 14, Li teaches a non-transitory, computer-readable medium having stored thereon instructions that when executed by a computer cause the computer ([0570]). Claim 14 is similar in scope to claim 1; therefore the claim is rejected under a similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASSAN MRABI, whose telephone number is (571) 272-8875. The examiner can normally be reached Monday-Friday, 7:30am-5pm, alternate Fridays, EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HASSAN MRABI/
Examiner, Art Unit 2144
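The GAN arrangement the rejection leans on (Li's paragraph [0076]: one generator emitting a synthetic image plus per-pixel labels, a first discriminator scoring the image alone, and a second scoring the image/label pair) can be sketched in a few lines. This is an illustrative toy, not code from Li or the prosecution record; all layer sizes and the sigmoid dense layers are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(inp, out):
    """Toy dense layer: fixed random weights followed by a sigmoid."""
    w = rng.normal(0, 0.1, size=(inp, out))
    return lambda x: 1.0 / (1.0 + np.exp(-(x @ w)))

LATENT, PIXELS, CLASSES = 8, 16, 3  # hypothetical toy sizes

gen_img = linear(LATENT, PIXELS)                   # generator head: latent -> synthetic image
gen_lbl = linear(LATENT, PIXELS * CLASSES)         # generator head: latent -> per-pixel labels
disc_img = linear(PIXELS, 1)                       # first discriminator: image -> score
disc_pair = linear(PIXELS + PIXELS * CLASSES, 1)   # second discriminator: (image, labels) -> score

def generate(z):
    image = gen_img(z)
    labels = gen_lbl(z).reshape(-1, PIXELS, CLASSES)  # one class vector per pixel
    return image, labels

z = rng.normal(size=(4, LATENT))                   # batch of 4 latent samples
image, labels = generate(z)
score_img = disc_img(image)                        # D1 evaluates synthetic images only
score_pair = disc_pair(np.concatenate([image, labels.reshape(4, -1)], axis=1))

print(image.shape, labels.shape, score_img.shape, score_pair.shape)
```

The point of the wiring is the one the examiner quotes: the label head makes the generator's output self-annotating, which is what lets Li's synthetic images arrive with classifications already attached.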

Prosecution Timeline

May 31, 2023: Application Filed
Mar 07, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579411: RESONATOR NETWORK BASED NEURAL NETWORK (2y 5m to grant; granted Mar 17, 2026)
Patent 12579710: Transforming Content Across Visual Mediums Using Artificial Intelligence and User Generated Media (2y 5m to grant; granted Mar 17, 2026)
Patent 12554924: Computer-Implemented Methods and Systems for Generative Text Painting (2y 5m to grant; granted Feb 17, 2026)
Patent 12547905: PROBABILISTIC ENTITY-CENTRIC KNOWLEDGE GRAPH COMPLETION (2y 5m to grant; granted Feb 10, 2026)
Patent 12536782: METHOD AND APPARATUS FOR TRAINING CLASSIFICATION TASK MODEL, DEVICE, AND STORAGE MEDIUM (2y 5m to grant; granted Jan 27, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview (+32.4%): 99%
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 363 resolved cases by this examiner. Grant probability derived from career allow rate.
