Prosecution Insights
Last updated: April 19, 2026
Application No. 18/171,080

Computer-Implemented Method and System for Predicting Future Developments of a Traffic Scene

Non-Final OA: §103, §112
Filed
Feb 17, 2023
Examiner
JAYAKUMAR, CHAITANYA R
Art Unit
2128
Tech Center
2100 — Computer Architecture & Software
Assignee
Robert Bosch GmbH
OA Round
1 (Non-Final)
Grant Probability: 26% (At Risk)
Predicted OA Rounds: 1-2
Est. Time to Grant: 4y 6m
Grant Probability with Interview: 48%
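The headline figures imply a simple relationship between the baseline estimate and the interview-adjusted one. A minimal sketch, assuming a roughly additive interview lift (the tool's actual model is not disclosed here; the 26% and +22.5% figures are from the panels above):

```python
def with_interview(baseline: float, lift: float) -> float:
    """Apply an additive interview lift to a grant probability, capped at 1.0."""
    return min(baseline + lift, 1.0)

baseline = 0.26   # grant probability without an interview
lift = 0.225      # this examiner's observed interview lift (+22.5%)

# prints 48.5%, consistent with the ~48% shown above
print(f"{with_interview(baseline, lift):.1%}")
```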

Examiner Intelligence

Career Allow Rate: 26% (13 granted / 51 resolved; -29.5% vs TC avg)
Interview Lift: +22.5% for resolved cases with an interview
Avg Prosecution: 4y 6m (typical timeline); 18 applications currently pending
Total Applications: 69 across all art units (career history)
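The allow-rate figures can be sanity-checked directly from the stated counts. A quick sketch, where the implied Tech Center average is reconstructed from the stated -29.5% gap (an assumption of this sketch, since the TC baseline itself is not listed in the panel):

```python
granted, resolved = 13, 51          # "13 granted / 51 resolved"
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")   # ~25.5%, close to the 26% shown

tc_gap = -0.295                     # "-29.5% vs TC avg"
tc_avg = allow_rate - tc_gap        # implied Tech Center average
print(f"implied TC average: {tc_avg:.1%}")
```

The exact ratio is 25.5%, so the panel's 26% is likely rounded or weighted slightly differently.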

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 51 resolved cases
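The chart data reduces to four (rate, delta) pairs, and the Tech Center baseline for each statute can be recovered by subtracting the delta. A sketch of that arithmetic (the derived baselines are estimates, as the caption notes):

```python
# statute: (examiner's allowance rate, delta vs Tech Center average)
stats = {
    "§101": (0.291, -0.109),
    "§103": (0.456, +0.056),
    "§102": (0.087, -0.313),
    "§112": (0.138, -0.262),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta   # implied Tech Center average for this statute
    print(f"{statute}: examiner {rate:.1%} vs TC avg {tc_avg:.1%}")
```

Notably, all four deltas imply the same ~40% baseline, which suggests the chart compares each statute against a single overall Tech Center average rather than per-statute averages.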

Office Action

DETAILED ACTION

This action is in response to the submission filed 17 February 2023 for application 18/171,080. Currently claims 1-11 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. 37 CFR 41.154(b) and 41.202(e). Failure to provide a certified translation may result in no benefit being accorded for the non-English application.

Information Disclosure Statement

An information disclosure statement (IDS) was submitted on 29 March 2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The disclosure is objected to because of the following informalities: There is an extra space after the periods in the Abstract. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a perception plane configured to”, “a pre-trained encoder network configured to”, “a sampler configured to”, and “a pre-trained decoder network configured to” in claim 10. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The term “… raster-like manner …” in claims 1 and 10 (last line) is a relative term which renders the claim indefinite. The term “raster-like manner” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim 10 limitations “a perception plane configured to”, “a pre-trained encoder network configured to”, “a sampler configured to”, and “a pre-trained decoder network configured to”, invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The disclosure is devoid of any structure that performs the function in the claim. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.
For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 10 and 11 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The disclosure is devoid of any structure that performs the function in Claim 10.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5 and 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Ditzel et al (GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies, 2021) in view of Shamsolmoali et al (Road Segmentation for Remote Sensing Images using Adversarial Spatial Pyramid Networks, 2020).

Regarding claim 1: Ditzel teaches: A computer-implemented method for predicting future developments of a traffic scene, comprising ([Abstract] Autonomous systems require a continuous and dependable environment perception for navigation and decision-making. This work combines the complementary strengths of both sensor types in a unique self-learning fusion approach for a probabilistic scene reconstruction in adverse surrounding conditions. Then, at inference time, relying exclusively on radio frequencies, the model successively predicts camera constituents in an autoregressive and self-contained process. [Page 148998, Column 1, Section II. DATA COLLECTION AND SENSOR SETTINGS] The presented experiments were conducted on a custom dataset comprising roughly 50 000 samples of time synchronized radar and camera images.
The collection captures diverse real-world scenery around Ulm Germany, varying in terms of both weather and lighting conditions. It features all kinds of realistic traffic scenarios ranging from clusters of pedestrians over lost-cargo situations and oncoming vehicles to the passing of trams and buses.): aggregating scene-specific information about a traffic scene ([Page 148998, Column 1, Section II] It features all kinds of realistic traffic scenarios ranging from clusters of pedestrians over lost-cargo situations and oncoming vehicles to the passing of trams and buses); using a pre-trained encoder network to transform the aggregated scene-specific information into parameters of a multivariate probability distribution of latent features ([Page 149006, Column 1, Section 3] IMPLEMENTATION DETAILS OF THE CATEGORICAL AUTOENCODERS Even though a wide range of autoencoder architectures exist both in theory and code and despite the fact that weights of numerous well-known networks are readily available for download and deployment in frameworks like [54], the specific data used in this project necessitate custom training. Most backbones are typically pre-trained on purified and cleansed benchmarks. [Page 149010, Column 1, Paragraph 1] Optimized to predict the contained 1000 object classes to high accuracy, this model serves as a feature extractor, effectively transforming high-dimensional images into a lower dimensional latent space in which similar input should have a certain proximity. Tapping into its architecture after the last pooling layer allows to summarize its 2048 activations as multivariate Gaussians by fitting mean and covariance to the respective data distribution under consideration. [Page 149013, Column 2, Section 2)] modal-specific encoders. 
[Page 149023, Column 1, Last Paragraph] These continuous latents were then decoded into image space to fit multivariate Gaussians to the validation dataset as explained in section III-A5); selecting samples of the multivariate probability distribution of latent features determined by the parameters ([Page 149010, Column 2, Paragraph 3] To obtain a comprehensive notion of the models' versatility, their latent space utilization for a size of K D jCj D 256 is recorded separately for every latent variable over the validation dataset. To yield a reproducible result, the modes of the data-induced PMF, given by equation (23) are accumulated for every input sample and displayed in Figure 29 for both domains); wherein the samples are selected deterministically, such that each selected sample represents a separate region of the multivariate probability distribution of the latent features ([Page 149005, Column 1, Last Paragraph] In fact, it can be considered a variant of the reparameterization trick proposed in [36] which turns sampling of the latents [Page 149005, Column 2, Paragraph 1] into a deterministic function of the encoders logits and some independent additive noise from a predetermined distribution. [Page 149023, Column 1, Last Paragraph] These continuous latents were then decoded into image space to fit multivariate Gaussians to the validation dataset as explained in section III-A5.), and wherein the multivariate probability distribution of the latent features is sampled in a raster-like manner via a totality of the selected samples to form a raster ([Page 149008, Column 1, Paragraph 3] To include genuine discrete sampling in the validation probing, the encoders output is also used to define N true categoricals as a third alternative. Sampling these adds slight regularization and some degree of fuzziness to the selection of the probabilistic latents. 
Moreover, excluding the temperature influence allows to examine the impact of an increasingly confident encoder on the discretization and restoration capabilities of the model. The sample spaces for the last two index collections consist of finite integer sequences s E NN with N=256 representing the compressed input image of the respective modality in raster order. [Page 149013, Column 2, Last Paragraph] As camera and radar input dimensions are fixed [Page 149014, Column 1, Paragraph 1] a priori, so is the length N = hxw of both sequences created line by line using raster order for radar). However, Ditzel does not explicitly disclose: and using a pre-trained decoder network to transform each of the selected samples into an output set of a plurality of output sets. Shamsolmoali teaches, in an analogous system: and using a pre-trained decoder network to transform each of the selected samples into an output set of a plurality of output sets ([Page 8, Column 1, Algorithm 1] Dec is the pre-trained decoder which chooses the appropriate pixels from G(z) for image recovery). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the computer-implemented method for predicting future developments of a traffic scene of Ditzel to incorporate the teachings of Shamsolmoali to use a pre-trained decoder network to transform each of the selected samples into an output set of a plurality of output sets. One would have been motivated to do this modification because doing so would give the benefit of choosing the appropriate pixels from G(z) for image recovery as taught by Shamsolmoali [Page 8, Column 1, Algorithm 1]. Regarding claim 2: The combination of Ditzel and Shamsolmoali teaches: The method according to Claim 1 (as shown above). 
Ditzel further teaches: further comprising: adapting the raster formed by the selected samples to the multivariate probability distribution of the latent features using raster distances between the selected samples being selected based on a weight of individual selected samples in the multivariate probability distribution of the latent features ([Page 149011, Column 2, Paragraph 2] Figure 14 contrasts this variational entity of the camera with that of the radar models across consecutive validation runs performed after every training epoch. All measures are with regard to a single latent symbol facilitating the association with an actual number of bits required to transmit its state through the network. Figuratively speaking, perplexity measures the amount of randomness in the model and quantifies how well the associated process predicts samples. It calculates the weighted average number of choices each latent variable is offered. [Page 149013, Column 2, Last Paragraph] As camera and radar input dimensions are fixed [Page 149014, Column 1, Paragraph 1] a priori, so is the length N = hxw of both sequences created line by line using raster order for radar. [Page 149023, Column 1, Last Paragraph] These continuous latents were then decoded into image space to fit multivariate Gaussians to the validation dataset as explained in section III-A5). Regarding claim 3: The combination of Ditzel and Shamsolmoali teaches: The method according to Claim 2 (as shown above). 
Ditzel further teaches: wherein: at least a portion of the selected samples include noise ([Page 149005, Column 1, Last Paragraph] In fact, it can be considered a variant of the reparameterization trick proposed in [36] which turns sampling of the latents [Page 149005, Column 2, Paragraph 1] into a deterministic function of the encoders logits and some independent additive noise from a predetermined distribution), and the raster distances between the selected samples is maintained ([Page 149013, Column 2, Last Paragraph] As camera and radar input dimensions are fixed [Page 149014, Column 1, Paragraph 1] a priori, so is the length N = hxw of both sequences created line by line using raster order for radar). Regarding claim 4: The combination of Ditzel and Shamsolmoali teaches: The method according to Claim 1 (as shown above). Ditzel further teaches: wherein a predetermined number of the samples are selected ([Page 148998, Column 1, Section II, Paragraph 1] The presented experiments were conducted on a custom dataset comprising roughly 50,000 samples of time synchronized radar and camera images. Note: 50,000 corresponds to predetermined). Regarding claim 5: The combination of Ditzel and Shamsolmoali teaches: The method according to Claim 1 (as shown above). Ditzel further teaches: wherein a determination of a number of the samples to be selected and/or the selection of the samples is based on: a time available for generating the plurality of output sets ([Page 148998, Column 1, Section II, Paragraph 1] The presented experiments were conducted on a custom dataset comprising roughly 50,000 samples of time synchronized radar and camera images. 
[Page 149013, Column 2, Last Paragraph] Concretely, time-synchronized samples of both domains are encoded into their discrete counterparts by means of the pretrained modal-specific encoders, as explained in section III-A, with all of their weights frozen); a comparison of a totality of the generated plurality of output sets to a probability distribution of the plurality of output sets ([Page , Column 2, Last Paragraph] More precisely, shrinking the sample space to only a few categories KO K comprising the bulk of the probability mass increases both the sample quality and reliability by preventing low-probability outcomes); a similarity of the selected samples to training data of the pre-trained encoder network and the pre-trained decoder network; and/or if the totality of the generated plurality of output set provides a plurality of different, predetermined results ([Page 149009, Column 1, Paragraph 1] Table 2 to table 5 show the results for both modalities, different vocabulary sizes and sampling methods). Regarding claim 8: The combination of Ditzel and Shamsolmoali teaches: The method according to Claim 1 (as shown above). Ditzel further teaches: further comprising: generating a possible future trajectory for at least one participant in the traffic scene as one of the output sets of the generated plurality of output sets, and identifying different modes for a future development of the traffic scene based on a totality of the generated plurality of output sets ([Page 149039] Figure 49. Typical fail cases that occur during the probabilistic inference phase (red) below the actual camera ground truth (blue). Note: Top row 2nd block for example corresponds to a possible future trajectory for at least one participant in the traffic scene as one of the output sets. All the blocks correspond to a totality of the generated plurality of output sets. Red and blue correspond to the different modes). 
Regarding claim 9: The combination of Ditzel and Shamsolmoali teaches: The method according to Claim 8 (as shown above). Ditzel further teaches: further comprising: generating probabilities for a prespecified number of the different modes for the future developments of the traffic scene as one of the output sets of the generated plurality of output sets, wherein the totality of the generated plurality of output sets is taken as a basis for a further prediction step and/or planning step ([Page 149023, Column 2, Paragraph 1] Even the models with modest dictionary sizes of K =64 sample the largest-probability category only about 3 of 4 times, with minor differences between modalities. This does not necessarily harm the overall density estimation and camera sequence prediction goal since other categories might be almost equally suitable candidates, exhibiting probabilities similar to the modes of the distributions). Regarding claim 10: Claim 10 is substantially similar to claim 1 and therefore is rejected on similar grounds as claim 1. Regarding claim 11: The combination of Ditzel and Shamsolmoali teaches: The system according to Claim 10 (as shown above). Ditzel further teaches: wherein the encoder network and the decoder network are components of a variational autoencoder architecture or a conditional variational autoencoder architecture ([Page 149002, Column 2, Section A] Variational autoencoders [Page 149003, Column 1, Last Paragraph] Categorical variational autoencoders [39] are a special case of variational inference models described in the former section, most often used when a discrete probabilistic selection of features is desired, as in [Page 149003, Column 2, Paragraph 1] the present case. On an abstract level this architecture consists of an encoder and decoder part with a discrete stochastic bottleneck in between). Claim 6 is rejected under 35 U.S.C. 
103 as being unpatentable over Ditzel et al (GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies, 2021) in view of Shamsolmoali et al (Road Segmentation for Remote Sensing Images using Adversarial Spatial Pyramid Networks, 2020) and further in view of Peled et al (Online Predictive Optimization Framework for Stochastic Demand-Responsive Transit Services, 2019). Regarding claim 6: The combination of Ditzel and Shamsolmoali teaches: The method according to Claim 1 (as shown above). However, the combination of Ditzel and Shamsolmoali does not explicitly disclose: wherein the scene-specific information is transformed into an expected value vector and a covariance matrix of a multivariate normal distribution of the latent features. Peled teaches, in an analogous system: wherein the scene-specific information is transformed into an expected value vector and a covariance matrix of a multivariate normal distribution of the latent features ([Page 7, Paragraph 2] While the expected value formulation may be simple to obtain, its solution nonetheless lacks robustness. [Page 21, Section 4.1.1. Paragraph 4] multivariate normal distribution with covariance matrix. [Page 21, Section 4.1.2., Paragraph 1] In this study, we assume that only travel demands are stochastic, while other parameters, such as travel time between nodes, remain constant over time. To construct the copula, we assume for simplicity that the correlation structure of the data is time-invariant. As such, it suffices to compute the covariance matrix of the copula offline once, based on historical data). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ditzel and Shamsolmoali to incorporate the teachings of Peled wherein the scene-specific information is transformed into an expected value vector and a covariance matrix of a multivariate normal distribution of the latent features. 
One would have been motivated to do this modification because doing so would give the benefit of this computation being efficiently maintained online as new data becomes known, so that the copula retains the updated state of correlation as taught by Peled [Page 21, Section 4.1.2., Paragraph 1].

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ditzel et al (GenRadar: Self-Supervised Probabilistic Camera Synthesis Based on Radar Frequencies, 2021) in view of Shamsolmoali et al (Road Segmentation for Remote Sensing Images using Adversarial Spatial Pyramid Networks, 2020) and further in view of Novi et al (An integrated artificial neural network–unscented Kalman filter vehicle sideslip angle estimation based on inertial measurement unit measurements, 2018).

Regarding claim 7: The combination of Ditzel and Shamsolmoali teaches: The method according to Claim 1 (as shown above). However, the combination of Ditzel and Shamsolmoali does not explicitly disclose: wherein at least one of the following methods is used for selecting the samples: unscented Kalman filter sampling; Gauss-Hermite quadrature Kalman filter sampling; cubature Kalman filter sampling; randomized unscented Kalman filter sampling; and asymmetric or symmetric localized cumulative distribution sampling. Novi teaches, in an analogous system: wherein at least one of the following methods is used for selecting the samples: unscented Kalman filter sampling; Gauss-Hermite quadrature Kalman filter sampling; cubature Kalman filter sampling; randomized unscented Kalman filter sampling; and asymmetric or symmetric localized cumulative distribution sampling ([Page 1865, Column 2, Paragraph 1] Using the unscented Kalman filter (UKF)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ditzel and Shamsolmoali to incorporate the teachings of Novi wherein at least one of the following methods is used for selecting the samples: unscented Kalman filter sampling; Gauss-Hermite quadrature Kalman filter sampling; cubature Kalman filter sampling; randomized unscented Kalman filter sampling; and asymmetric or symmetric localized cumulative distribution sampling. One would have been motivated to do this modification because doing so would give the benefit of allowing calculation of the statistics of a random variable which is subject to a non-linear transformation as taught by Novi [Page 1865, Column 2, Paragraph 1].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Chen et al (US 20200114924 A1) discloses a system and method for utilizing a temporal recurrent network for online action detection that include receiving image data that is based on at least one image captured by a vehicle camera system. The system and method further include controlling a vehicle to be autonomously driven based on a naturalistic driving behavior data set that includes the at least one goal-oriented action.

Choi et al (US 20180124423 A1) discloses methods and systems for predicting a trajectory that include determining prediction samples for agents in a scene based on a past trajectory. The prediction samples are ranked according to a likelihood score that incorporates interactions between agents and semantic scene context. A response activity is triggered when the prediction samples satisfy a predetermined condition.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAITANYA RAMESH JAYAKUMAR whose telephone number is (571)272-3369. The examiner can normally be reached Mon-Fri 9am-1pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.R.J./ Examiner, Art Unit 2128
/OMAR F FERNANDEZ RIVAS/ Supervisory Patent Examiner, Art Unit 2128
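The claim 7 rejection turns on unscented Kalman filter sampling, i.e., propagating a distribution through a non-linear function by selecting deterministic sigma points. As a hypothetical illustration only (not taken from the application or any cited reference), a minimal Python sketch of unscented-transform sigma-point sampling, using the common (alpha, beta, kappa) scaling parameters:

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate the 2n+1 sigma points and their mean/covariance weights."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    # Matrix square root of the scaled covariance via Cholesky factorization
    S = np.linalg.cholesky((n + lam) * cov)
    # Central point, then mean +/- each column of the square-root matrix
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def unscented_transform(f, mean, cov):
    """Estimate the mean and covariance of f(x) for x ~ N(mean, cov)."""
    pts, wm, wc = sigma_points(mean, cov)
    y = np.array([f(p) for p in pts])
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear map the transform is exact, which gives a quick sanity check: pushing N([1, 2], I) through f(x) = 2x should return mean [2, 4] and covariance 4I.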

Prosecution Timeline

Feb 17, 2023
Application Filed
Jan 02, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12293260
GENERATING AND DEPLOYING PACKAGES FOR MACHINE LEARNING AT EDGE DEVICES
2y 5m to grant · Granted May 06, 2025
Patent 12147915
SYSTEMS AND METHODS FOR MODELLING PREDICTION ERRORS IN PATH-LEARNING OF AN AUTONOMOUS LEARNING AGENT
2y 5m to grant · Granted Nov 19, 2024
Patent 11770571
Matrix Completion and Recommendation Provision with Deep Learning
2y 5m to grant · Granted Sep 26, 2023
Patent 11769074
COLLECTING OBSERVATIONS FOR MACHINE LEARNING
2y 5m to grant · Granted Sep 26, 2023
Patent 11741693
SYSTEM AND METHOD FOR SEMI-SUPERVISED CONDITIONAL GENERATIVE MODELING USING ADVERSARIAL NETWORKS
2y 5m to grant · Granted Aug 29, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
26%
Grant Probability
48%
With Interview (+22.5%)
4y 6m
Median Time to Grant
Low
PTA Risk
Based on 51 resolved cases by this examiner. Grant probability derived from career allow rate.
