Prosecution Insights
Last updated: April 19, 2026
Application No. 18/008,351

System and Method for Processing Information Signals

Final Rejection — §101, §103
Filed: Dec 05, 2022
Examiner: HICKS, AUSTIN JAMES
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: BAYERISCHE MOTOREN WERKE AKTIENGESELLSCHAFT
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76%, above average (308 granted / 403 resolved; +21.4% vs TC avg)
Interview Lift: +25.1% among resolved cases with an interview
Avg Prosecution: 3y 4m (54 applications currently pending)
Career History: 457 total applications across all art units

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 403 resolved cases

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 2/4/2026 have been fully considered but they are not persuasive.

Applicant argues: Claim 18 recites "a shared unit of a neural network of the system, wherein the shared unit of the neural network is configured to perform a respective first signal processing step of the first information signal and the second information signal," and "at least two separate units of the neural network arranged in the signal flow downstream of the shared unit," and the human mind is not capable of allocating mental processes between shared and separate units in the mind, particularly to implement a neural network. Remarks 9. This is applicant's opinion, not fact. It is well known that different parts/units of the brain process different information.¹

Applicant argues: claims 19, 22 and 23 recite hardware-implemented units of a neural network, and it is axiomatic that such units are not merely generic computer components because they are constructed of hardware that is configured to do a particular signal processing task, as opposed to general processing of software. Remarks 10. Applicant does not claim a specific type of hardware, or even a specific chip. There is no claim to specific hardware, and a generic computer is capable of processing signals.²

Applicant argues: the specified mix of hardware- and software-implemented units provides distinct technical advantages. To this end, the specification notes: "The respective advantage, for example, of a software and hardware implementation can thereby be achieved simultaneously in the system. A hardware implementation can result in less flexibility in the signal processing, but can advantageously enable e.g. a higher processing speed and/or lower energy requirement in the processing. Conversely, a software implementation can enable greater flexibility, e.g. through reprogramming of the processing parameters (e.g. weightings of the neural network), even retrospectively, e.g. during a use of the system. The first separate unit can be used e.g. for predefined, unchanging functions, whereas the second separate unit can also be used if an adaptation to new functions is required." (Substitute Specification at p. 4, line 25 to p. 5, line 4.) Remarks 10.

The claimed advantage is most similar to "Examples that the courts have indicated may not be sufficient to show an improvement in computer-functionality: … Accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016)." MPEP 2106.05(a). Because Applicant's mental concept does not improve the functioning of a computer, and only offers an advantage to the mental concept when running on a generic computer, the abstract idea is not integrated into a practical application.

Applicant argues: while Chennupati does appear to disclose a shared encoder in Fig. 1, the encoder is only shared between two sensor signals from the same camera; the shared encoder then feeds all of these signals to further units for combined processing; and Chennupati does not teach separate units for separately processing the first processing signal and the second processing signal. Remarks 13.
Applicant claims a first information signal and a second information signal in claim 18; Applicant does not claim separate sensors in claim 18. Even if Applicant did claim multiple sensors, Urtasun teaches "multi-sensor fusion…" Non-Final 12, citing Urtasun para. 35.

Applicant argues: "as is evident in Fig. 1 set forth above, the decoders of Chennupati all process information from both information signals, not the individual signals as claimed." Remarks 14. Applicant does not claim that its first unit and second unit only process the first signal and the second signal, respectively. Therefore, when Chennupati teaches separate decoders that process all the incoming signals, as applicant admits, Chennupati teaches the broadest reasonable interpretation of Applicant's claims.

Applicant argues: the model training method 750 of Urtasun does not purport to be able to function as a decoder for a task in a CNN having a shared encoder and various decoders, as taught by Chennupati; moreover, no reason is given to modify Chennupati to incorporate the model training method 750 of Urtasun into one of the decoders; thus, no inference exists that it would have been obvious or beneficial to modify Chennupati with the teachings of Urtasun to arrive at the invention of claim 19. Remarks 15-16. Chennupati's decoders have to be trained. Urtasun gives a motivation for why one would train a decoder using its training method: to "identify an appropriate motion path through such surrounding environment." Urtasun para. 3. Urtasun uses a convolutional neural network (see fig. 2), so Urtasun uses decoders as well. The references are analogous, and the motivation is to apply these CNN models to an application like identifying a motion path for vehicles.

Applicant argues: "Claim 23 recites that the shared unit of claim 18 is hardware-implemented. Similar to the discussion provided above in connection with claim 19, neither Chennupati nor Urtasun provide any reason to implement a shared unit of a CNN such as the CNN of Chennupati as a hardware-implemented unit." Remarks 16. The advantages of implementing neural networks in hardware are "known." Applicant's spec. 45. Applicant's specification paragraph 5 admits this: "CNN hardware accelerators, for example, are known which, e.g. in contrast to software-based CNN systems, can have positive effects on processing speed." Further, Urtasun paragraph 3 supplies another motivation to implement a CNN in hardware: to "identify an appropriate motion path through such surrounding environment." Urtasun para. 3.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 18-31, 36 and 38-41 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a mental-concept abstract idea without significantly more. The claims recite signal processing using a neural network. This judicial exception is not integrated into a practical application because it does not improve a computer or technological field. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because data gathering and outputting processed information are insignificant extra-solution activity.
The hardware-implemented units of claims 19, 22 and 23 are generic computer components that do not amount to significantly more than the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 18-31, 36, 38 and 40-41 are rejected under 35 U.S.C. 103 as being unpatentable over "MultiNet++: Multi-Stream Feature Aggregation and Geometric Loss Strategy for Multi-Task Learning" by Chennupati et al. (Chen) and US20200160559A1 by Urtasun et al. Claim 39 is rejected under 35 U.S.C. 103 as being unpatentable over Chen, Urtasun and "EIE: Efficient Inference Engine on Compressed Deep Neural Network" by Han et al.

Chen teaches claim 18. A … (Chen fig. 1, see below.) [Image: Chen fig. 1]

a signal input configured to receive at least the first information signal and the second information signal; (Chen fig. 1, frames t and t-1.)

a shared unit of a neural network of the system, wherein the shared unit of the neural network is configured to perform a respective first signal processing step of the first information signal and the second information signal; (Chen fig. 1 aggregation layer; sec. 3.1: "Feature aggregation layers that concatenate the encoded feature vectors from multiple streams…" The aggregation layers are part of a convolutional neural network; Chen sec. 2.2: "Different outputs from initial or mid-level convolution layers from CNNs (referred to as extracted features) are forwarded to the next stage of processing using feature aggregation. Feature aggregation is a meaningful way to combine these extracted features. These features can be extracted from different CNNs operating on different input data [62, 37] or from a CNN operating on different resolutions of input [24].")

at least two separate units of the neural network arranged in the signal flow downstream of the shared unit, wherein a first unit of the at least two separate units is configured to perform a second signal processing step of the first information signal, and wherein a second unit of the at least two separate units is configured to perform a second signal processing step of the second information signal, to generate processed information signals; and (Chen fig. 1, decoder for task 1 and decoder for task 2; sec. 3.1: "feature maps extracted from different streams of inputs are concatenated and sent to task-specific decoders as shown in Figure 1.")

a signal output configured to output the processed information signals. (Chen fig. 2: segmentation, depth and motion are the output processed information signals, see below.) [Image: Chen fig. 2]

Chen doesn't teach a hardware system for sensor fusion. However, Urtasun teaches a system. (Urtasun fig. 1.) Urtasun, the claims and Chen are all directed to object detection. It would have been obvious to implement Chen in a hardware system of some sort because the CNN taught in Chen is used on images collected from a vehicle, and Urtasun teaches how to collect those images in real time, which allows an autonomous vehicle to "identify an appropriate motion path through such surrounding environment." Urtasun para. 3.

Chen teaches claim 19. The system as claimed in claim 18, wherein the at least two separate units of the neural network comprise at least one software-implemented unit of the neural network and at least one (Chen fig. 1, decoder for task 1 and decoder for task 2; sec. 3.1, quoted above.) Chen doesn't teach the hardware. However, Urtasun teaches a hardware-implemented unit of the neural network. (Urtasun para. 115: "one or more portion(s) of the model training method 750 can be implemented as an algorithm on the hardware components of the device(s) described herein to, for example, determine object intention and associated motion planning for an autonomous vehicle.")

Chen teaches claim 20. The system as claimed in claim 19, wherein the at least two separate units of the neural network are disposed in parallel in the signal flow. (Chen fig. 1's decoders are in parallel.)

Chen teaches claim 21. The system as claimed in claim 18, wherein the at least two separate units of the neural network are disposed in parallel in the signal flow. (Chen fig. 1's decoders are in parallel.)

Chen teaches claim 22. The system as claimed in claim 18, wherein the shared unit of the neural network is (Chen fig. 1, decoder for task 1 and decoder for task 2; sec. 3.1, quoted above.) Chen doesn't teach the hardware. However, Urtasun teaches a hardware-implemented unit of the neural network. (Urtasun para. 115, quoted above.)

Chen teaches claim 23. The system as claimed in claim 22, wherein the at least two separate units of the neural network comprise at least one software-implemented unit of the neural network and at least one (Chen fig. 1, decoder for task 1 and decoder for task 2; sec. 3.1, quoted above.) Chen doesn't teach the hardware. However, Urtasun teaches a hardware-implemented unit of the neural network. (Urtasun para. 115, quoted above.)

Chen teaches claim 24. The system as claimed in claim 23, wherein the at least two separate units of the neural network are disposed in parallel in the signal flow. (Chen fig. 1's decoders are in parallel.)
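For readers following the claim chart, a minimal PyTorch sketch of the MultiNet++-style topology the rejection relies on may help: one shared unit performs the first processing step on both information signals, and two separate task decoders sit downstream in the signal flow. This is an editorial illustration, not code from Chen, Urtasun, or the application; every class name, channel size, and the use of plain convolutions are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Shared unit: performs the first signal processing step on each signal."""
    def __init__(self, in_ch: int = 3, feat_ch: int = 16):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.conv(x))

class TaskDecoder(nn.Module):
    """Separate unit: performs the second signal processing step for one task."""
    def __init__(self, feat_ch: int = 32, out_ch: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(feat_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        return self.conv(f)

class MultiTaskNet(nn.Module):
    """Shared encoder feeding two parallel task-specific decoders."""
    def __init__(self):
        super().__init__()
        self.shared = SharedEncoder()    # the shared unit
        self.decoder1 = TaskDecoder()    # first separate unit (e.g., depth)
        self.decoder2 = TaskDecoder()    # second separate unit (e.g., motion)

    def forward(self, frame_t, frame_t_minus_1):
        # The shared unit processes both information signals; concatenation
        # plays the role of the "aggregation layer." Note that both decoders
        # then see the combined features, which is consistent with the
        # examiner's point that Chen's decoders process all incoming signals.
        feats = torch.cat([self.shared(frame_t),
                           self.shared(frame_t_minus_1)], dim=1)
        return self.decoder1(feats), self.decoder2(feats)

net = MultiTaskNet()
out1, out2 = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```

The sketch also makes the parallel arrangement of claims 20-21 concrete: decoder1 and decoder2 receive the same features and neither consumes the other's output.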
Chen teaches claim 25. The system as claimed in claim 18, wherein the shared unit of the neural network comprises a convolutional neural network. (The shared unit is the feature aggregation layer in Chen fig. 1; sec. 2.2, quoted above.)

Chen teaches claim 26. The system as claimed in claim 18, wherein the shared unit of the neural network comprises an autoencoder. (Chen fig. 2: "Illustration of the MultiNet++ network operating on consecutive frames of input video sequence. Consecutive frames are processed by a shared siamese-style encoder…")

Chen teaches claim 27. The system as claimed in claim 18, the system further comprising: a preprocessing unit; wherein the preprocessing unit is arranged in the signal flow between the signal input and the shared unit of the neural network, (Chen fig. 1: the feature extractor/encoder is the preprocessor.) wherein the preprocessing unit is configured to allocate a respective processing time to the first information signal and the second information signal for the first signal processing step by the shared unit of the neural network. (Chen fig. 1: frame t and frame t-1 are two adjacent-in-time video frames.)

Chen teaches claim 28. The system as claimed in claim 27, wherein the preprocessing unit is configured to convert the first information signal and the second information signal to a signal standard which is adapted to the shared unit of the neural network. (Chen fig. 1: feature maps are the signal standard.)

Urtasun teaches claim 29. The system as claimed in claim 28, wherein the first information signal and the second information signal are image signals and the preprocessing unit is configured to convert the image signals into the respective standard image signals with a predetermined frame rate and/or a predetermined image resolution. (Urtasun para. 35: "multi-sensor fusion can be implemented via point-wise feature fusion (e.g., at different levels of resolution) and/or ROI-wise feature fusion. Because LIDAR point cloud data can sometimes be sparse and continuous, while cameras capture dense features at discrete states, fusing such sensor data is non-trivial. The disclosed deep fusion offers a technical solution that reduces resolution loss relative to the original sensor data streams." Urtasun para. 87: "given the projected (into the image plane) sparse depth 210 from the LIDAR point cloud 206 and a camera image 208, the models 204 and 214 cooperate to output dense depth 222 at the same resolution as the input image 208.")

Chen teaches claim 30. The system as claimed in claim 29, wherein the preprocessing unit is further configured to make only a predetermined selection of information from the at least a first image signal of the image signals for the signal processing by the shared unit based on a function for which the first image signal is provided. (Examiner interprets this to mean grabbing features for later processing based on the type of later processing. Chen fig. 1: the feature map is a selection from the first image frame t based on the frame being an image for "motion," "depth" or "segmentation" analysis; see Chen fig. 2.)
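The preprocessing limitations of claims 27-30, and in particular claim 29's predetermined frame rate and image resolution, lend themselves to a short illustration. The sketch below is hypothetical: the function names, the bilinear resampling, and the stride-based frame-rate reduction are the editor's assumptions, not operations disclosed by Chen or Urtasun.

```python
import torch
import torch.nn.functional as F

def standardize_resolution(frames: torch.Tensor,
                           target_hw: tuple = (64, 64)) -> torch.Tensor:
    """Resample a batch of frames (N, C, H, W) to a predetermined resolution."""
    return F.interpolate(frames, size=target_hw, mode="bilinear",
                         align_corners=False)

def standardize_frame_rate(video: torch.Tensor, stride: int = 2) -> torch.Tensor:
    """Keep every `stride`-th frame of a clip (T, C, H, W), reducing the
    frame rate to a predetermined fraction of the input rate."""
    return video[::stride]

clip = torch.randn(8, 3, 120, 160)             # 8 frames at 120x160
clip = standardize_frame_rate(clip, stride=2)  # -> 4 frames
clip = standardize_resolution(clip)            # -> 4 frames at 64x64
```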
Chen teaches claim 31. The system as claimed in claim 27, wherein the preprocessing unit is configured to select whether the first unit or the second unit of the neural network is used for the second signal processing step depending on the first information signal or the second information signal present at the signal input. (Examiner interprets this to mean processing the signals if the signals are present. Chen fig. 1 teaches this.)

36. (Currently amended, withdrawn) A motor vehicle comprising: a system as claimed in claim 18, which is arranged in a control unit of the motor vehicle; and at least two sensors which are connected to the signal input of the system using an on-board power supply of the motor vehicle. (Urtasun fig. 1.)

38. (New) The system as claimed in claim 18, wherein the shared unit of the neural network is a non-trainable signal processor configured to implement at least one layer of the neural network.

39. (New) The system as claimed in claim 38, wherein the at least two separate units of the neural network comprise at least one software-implemented unit of the neural network and at least one (Chen fig. 1: the two separate decoders are the claimed separate units.) Chen doesn't teach a non-trainable signal processor. However, Han teaches a non-trainable signal processor. (Han abs.: "energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing." The input image is shown in fig. 1. Han fig. 4 shows the architecture, and there is only a forward pass. That means no backward pass, i.e., no training. The hardware is non-trainable and processes an image signal.) Chen, Han and the claims are all directed to signal processing with neural networks. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to use a non-trainable signal processor from Han for better "throughput, energy efficiency and area efficiency." Han abs.

40. (New) The system as claimed in claim 18, wherein the first information signal and the second information signal are voice signals or image signals, and wherein the system generates image recognition information or voice recognition information. (Urtasun abs.: "[m]ulti-sensor fusion (e.g., fusing features derived from image data, light detection and ranging (LIDAR) data, and/or other sensor modalities) at both the point-wise and region of interest (ROI)-wise level, resulting in fully fused feature representations." Applicant claims voice OR image, so Urtasun only has to teach one of the signals to teach the claim.)

41. (New) The system as claimed in claim 18, wherein: the first unit is configured to perform the second signal processing step of the first information signal independent of the second information signal, and wherein the second unit is configured to perform the second signal processing step of the second information signal independent of the first information signal. (Independent here includes any setup where the two units do not use each other (are not connected) in order to process the signal. That is exactly the arrangement of the decoders in Chen fig. 1: the decoder for task 1 is not connected to (is independent of) the decoder for task 2.)
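Claims 38-39 turn on a "non-trainable" unit, which the rejection reads onto Han's forward-pass-only inference engine. As a software analogy only (Han describes fixed-function hardware, not PyTorch code), one might freeze a module and run it forward-only; the snippet below reuses the hypothetical MultiTaskNet from the earlier sketch, and the commented reprogramming line merely illustrates the specification's "reprogramming of the processing parameters" for the software-implemented unit.

```python
import torch

net = MultiTaskNet()  # hypothetical network from the earlier sketch

# Freeze the shared unit: its weights can no longer be updated by training,
# approximating a fixed-function (non-trainable) signal processor in software.
for p in net.shared.parameters():
    p.requires_grad_(False)

# The decoders stay reprogrammable, e.g. by loading new weights later:
# net.decoder2.load_state_dict(new_weights)  # hypothetical retrofit step

# Forward-only inference (no backward pass), analogous to Han's
# inference-only engine.
with torch.no_grad():
    out1, out2 = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```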
Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks, whose telephone number is (571) 270-3377. The examiner can normally be reached Monday-Thursday, 8-4 PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUSTIN HICKS/
Primary Examiner, Art Unit 2142

¹ https://www.coursehero.com/study-guides/wmopen-psychology/outcome-parts-of-the-brain/ — "The frontal lobe is involved in reasoning, motor control, emotion, and language. It contains the motor cortex, which is involved in planning and coordinating movement; the prefrontal cortex, which is responsible for higher-level cognitive functioning; and Broca's area, which is essential for language production."

² Urtasun para. 47: "instructions that when executed by the one or more processors of the one or more remote computing devices 106 cause the one or more processors to perform operations and/or functions including operations and/or functions associated with the vehicle 102 including exchanging (e.g., sending and/or receiving) data or signals with the vehicle 102, monitoring the state of the vehicle 102, and/or controlling the vehicle 102."

Prosecution Timeline

Dec 05, 2022
Application Filed
Oct 31, 2025
Non-Final Rejection — §101, §103
Feb 04, 2026
Response Filed
Feb 26, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591767
NEURAL NETWORK ACCELERATION CIRCUIT AND METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12554795
REDUCING CLASS IMBALANCE IN MACHINE-LEARNING TRAINING DATASET
2y 5m to grant Granted Feb 17, 2026
Patent 12530630
Hierarchical Gradient Averaging For Enforcing Subject Level Privacy
2y 5m to grant Granted Jan 20, 2026
Patent 12524694
OPTIMIZING ROUTE MODIFICATION USING QUANTUM GENERATED ROUTE REPOSITORY
2y 5m to grant Granted Jan 13, 2026
Patent 12524646
VARIABLE CURVATURE BENDING ARC CONTROL METHOD FOR ROLL BENDING MACHINE
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 99% (+25.1%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
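How these headline figures combine is not documented on this page. A plausible reading, shown below purely as an assumption, is that the 76% grant probability is the career allow rate (308/403) and the with-interview figure is that base rate plus the +25.1% lift, capped at 99%.

```python
# Assumed derivation (not a documented formula): allow rate from resolved
# cases; with-interview probability = base rate + lift, capped at 99%.
granted, resolved = 308, 403
allow_rate = granted / resolved                  # 0.764 -> displayed as 76%
interview_lift = 0.251                           # +25.1 percentage points
with_interview = min(0.99, allow_rate + interview_lift)  # -> 0.99 (99%)
print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview")
```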

Free tier: 3 strategy analyses per month