Prosecution Insights
Last updated: April 19, 2026
Application No. 18/701,575

Method and Apparatus

Non-Final OA §102
Filed
Apr 15, 2024
Examiner
BEKELE, MEKONEN T
Art Unit
2699
Tech Center
2600 — Communications
Assignee
Oxa Autonomy Ltd.
OA Round
1 (Non-Final)
79%
Grant Probability
Favorable
1-2
OA Rounds
2y 11m
To Grant
92%
With Interview

Examiner Intelligence

Grants 79% — above average
79%
Career Allow Rate
599 granted / 757 resolved
+17.1% vs TC avg
Moderate +13% lift
Without
With
+13.1%
Interview Lift
resolved cases with interview
Typical timeline
2y 11m
Avg Prosecution
23 currently pending
Career history
780
Total Applications
across all art units

Statute-Specific Performance

§101
12.8%
-27.2% vs TC avg
§103
42.2%
+2.2% vs TC avg
§102
27.5%
-12.5% vs TC avg
§112
9.8%
-30.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 757 resolved cases

Office Action

§102
Detailed Action

1. Claims 1-18 are pending in this application.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

3. Claims 1-6 and 8-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Horia et al. (hereafter Horia), "Don't Worry About the Weather: Unsupervised Condition-Dependent Domain Adaptation", 2019 IEEE Intelligent Transportation Systems Conference, pub. 2019.

As to claim 1, Horia teaches a computer-implemented method of an autonomous vehicle performing a task using a machine learning model (Abstract; Fig. 1; section I, par. 1 for autonomous transportation and segmentation and detection tasks; section II.A and Fig. 3, section III.B for machine learning models), the method comprising: obtaining an image of an environment of the autonomous vehicle (Fig. 1, top-left quadrant is the input night-time image; p. 34, left col., par. 3 for input images); applying the obtained image to a condition classifier (Fig. 4 for a domain classifier with output D and condition descriptor), wherein the condition classifier is configured to generate one or more values associated with a condition of the obtained image (section III.C and Fig. 4; p. 37, right col., lines 3-6, for a domain classifier with output D and condition descriptor: "For each frame in the buffer, we compute a length-128 condition descriptor using the penultimate layer of the classifier and average all the descriptors, yielding one single length-128 average descriptor"; p. 34, left col., par. 2 for a classifier-supervisor to determine a condition; section IV.A for example conditions); determining a parameterization of the machine learning model based on the one or more values (section III.E for online learning of GAN models and an input adapter based on the condition descriptor); and performing the task by applying the input image to the machine learning model with the determined parameterization (section III.E for using the trained input adapter in the domain adaptation pipeline of Fig. 1, yielding a segmentation).

As to claim 12, Horia teaches a computer-implemented method of training a machine learning model of an autonomous vehicle to perform a task using an input image (this limitation is discussed in claim 1 above), the method comprising: obtaining a plurality of images with an unknown condition (section III.C, par. 2, and section III.E for input images with unknown conditions not in the training set); generating a predicted semantic map by applying the plurality of obtained images with the unknown condition to a machine learning model (Fig. 1, bottom, for producing semantic maps also on unknown conditions; cf. section III.E, last bullet point, for running the whole pipeline); optimizing parameters of the machine learning model by minimizing an error between the predicted semantic map and a semantic map ground truth to generate a parameterization of the machine learning model for the unknown condition (Fig. 5: "performance of the tasks (…) is used as a corrective signal through (…) losses", equating to the claimed supervised learning of a semantic map, which is the task in Fig. 1; the differentiable paths in Fig. 5 include the adapters and tasks, which are the optimized machine learning models); and storing the generated parameterization of the machine learning model in a parameter database, the parameter database configured to store a plurality of machine learning models each having a different parameterization, each parameterization associated with a unique condition (section III.D for storing condition-specific input adapters with different parameters in a database).

As to claim 2, Horia teaches that the condition classifier comprises a neural network, wherein the one or more values comprise one or more predicted condition features and a prediction confidence (Fig. 4; section III.C; p. 34, section B, last par.: a semantic segmentation network is trained first with a day-time hand-labelled dataset and then used to predict labels on intermediary datasets recorded at incremental types of twilight, where the semantic segmentation network is a type of specialized deep neural network designed to classify every pixel in an image into a specific category; the prediction confidence is an integral part of the semantic segmentation network).

As to claim 3, Horia teaches that at least one of the one or more predicted condition features comprises an output of an activation function (Fig. 4; p. 34, section B, last par.; inherent, as it is known that semantic segmentation networks use specific activation functions to introduce non-linearity, enabling pixel-wise classification).
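The condition-descriptor scheme Horia is cited for above (a length-128 vector from the classifier's penultimate layer, averaged over a frame buffer) can be sketched roughly as follows. This is an illustrative reconstruction, not Horia's code; `penultimate_layer` is a hypothetical stand-in for the real network.

```python
import numpy as np

DESC_DIM = 128  # Horia's descriptors are length-128

def penultimate_layer(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the classifier's penultimate activations."""
    rng = np.random.default_rng(int(frame.sum()) % (2**32))
    return rng.standard_normal(DESC_DIM)

def average_condition_descriptor(buffer: list) -> np.ndarray:
    """Average per-frame descriptors into one length-128 condition descriptor."""
    descriptors = [penultimate_layer(f) for f in buffer]
    return np.mean(descriptors, axis=0)

# Toy FIFO buffer of T = 5 frames
frames = [np.full((8, 8), i, dtype=np.float32) for i in range(5)]
descriptor = average_condition_descriptor(frames)
assert descriptor.shape == (DESC_DIM,)
```

The single averaged descriptor is what the pipeline later compares against stored condition descriptors when choosing a parameterization.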
As to claim 4, Horia teaches that the prediction confidence comprises a probability from an output layer of the neural network that the condition of the obtained image is one of one or more known conditions of images (Fig. 4; p. 34, section B; p. 36, section C: semantic segmentation networks produce pixel-wise confidence, often represented as a probability map, where higher values indicate stronger model certainty in the assigned class label for each pixel; confidence measures include entropy and max-softmax probability).

As to claim 5, Horia teaches comparing the prediction confidence to a confidence threshold and determining a degree of similarity between the one or more predicted condition features and one or more respective condition features of a known condition (section III.E for comparing descriptors, equating to the step of "determining a degree of similarity"; as to the technical problem of providing a further test for the classified condition, a confidence threshold is a standard measure and an integral part of semantic segmentation networks).

As to claim 6, Horia teaches that, when the prediction confidence is above the confidence threshold and the degree of similarity of the one or more predicted condition features is greater than a matching threshold, the method further comprises: retrieving a machine learning model from a parameter database, the retrieved machine learning model having a parameterization resulting from training the machine learning model using images having the condition matching that of the obtained image, wherein the parameterization database includes a plurality of machine learning models each having a different parameterization derived from training the machine learning model using images having a different condition (section III.D for retrieving machine learning models based on descriptors (= condition features) in case of similarity, and section III.E for online learning in case of similarity below a threshold; specifically, section III.D describes: "The classifier described in Subsection III-C is used to select a set of optimal parameters to be used in the input adapter Fk. The memory can be queried in two ways: either by using an index k between 1 and N (the number of initial conditions) or by specifying a length-128 query descriptor and retrieving the set of parameters associated with the descriptor that is closest in the Euclidean space", to enable online learning of unseen domains; while section III.E describes that if this average descriptor differs (in Euclidean space) by more than a threshold from the descriptors of any conditions previously trained on (i.e. the parameter memory S is unable to reliably identify the condition), the training pipeline is triggered); and performing the task by applying the obtained image to the retrieved machine learning model.

As to claim 8, Horia teaches that, when the prediction confidence is below the confidence threshold and/or the degree of similarity of the one or more predicted condition features is less than a dissimilar threshold, the method further comprises storing the retrieved image as an image with an unknown condition (section III.E: "Given a continuous sequence of incoming images, we store the current frame and T-1 past frames in a buffer of length T that gets updated using a First-In-First-Out" scheme; thus storing retrieved input images in memory is implicit, irrespective of image or confidence conditions).
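The parameter-memory query quoted from section III.D, and the threshold test from section III.E that triggers online learning, can be sketched as below. This is a minimal reconstruction under stated assumptions, not Horia's implementation; the function and variable names are illustrative.

```python
import numpy as np

def query_parameter_memory(memory, query, threshold):
    """Return (params, known) for the stored descriptor closest to `query`.

    memory: list of (descriptor, params) pairs, one per trained condition.
    If even the best Euclidean distance exceeds `threshold`, the condition is
    treated as unseen and the caller should trigger the training pipeline.
    """
    distances = [np.linalg.norm(query - d) for d, _ in memory]
    best = int(np.argmin(distances))
    if distances[best] > threshold:
        return None, False  # unseen condition: online learning needed
    return memory[best][1], True

# Toy memory with three length-128 condition descriptors
rng = np.random.default_rng(0)
memory = [(rng.standard_normal(128), f"adapter_{k}") for k in range(3)]

# Querying with a known descriptor retrieves its parameter set
params, known = query_parameter_memory(memory, memory[1][0], threshold=1.0)
assert known and params == "adapter_1"
```

Indexing by `k` (the other query mode the examiner quotes) would simply be `memory[k][1]`.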
As to claim 9, Horia teaches controlling the autonomous vehicle to traverse a route based on an outcome of performing the task (p. 34, section I, left col., 3rd par.: "The final stage of our approach allows a robot or vehicle to incrementally adapt to a new, unseen domain: if the condition of the input images does not match one that the system has been previously trained on, the unsupervised style transfer pipeline will select a model that is closest to the current condition, clone it, and fine-tune this cloned model to be able to change the style of the reference sequence"; further, applying the segmentation results in an autonomous driving context, vehicle control to traverse a route is a standard measure in the art).

As to claim 10, Horia teaches that the task is selected from a list including at least one of semantic segmentation, object detection, and object recognition (Fig. 1; p. 36, section B, first par.).

As to claim 11, Horia teaches that the condition is selected from a list including at least one of a weather type, a grade of weather type, light, a grade of light, a time of day, and a season (Fig. 1; p. 33, right col.: "start by generating multimodal training data: from a database of image sequences categorized using the time of day and weather conditions at their moment of recording, we select a daytime, overcast, clear reference sequence").

As to claim 13, Horia teaches that generating the predicted semantic map by applying the plurality of obtained images with the unknown condition to the machine learning model comprises generating the predicted semantic map by applying the plurality of obtained images with the unknown condition to a machine learning model previously trained using images having a different condition to the unknown condition (Abstract; section III.E for "online learning" of an unknown condition, which involves applying machine learning models trained on different conditions to the input image until the models have been fine-tuned for the new condition, yielding large improvements in semantic segmentation and topological localization).

As to claim 14, Horia teaches that the unknown condition and at least one of the unique conditions are each selected from a list including at least one of a weather type, a grade of weather type, light, a grade of light, a time of day, and a season (Fig. 1; p. 33, right col., as cited for claim 11).

As to claim 15, Horia teaches that the task is selected from a list including at least one of semantic segmentation, object detection, and object recognition (Fig. 1; p. 36, section B, 1st par.).

As to claim 16, Horia teaches a non-transitory, computer-readable medium having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform the method (Figs. 1 and 6-9: the image segmentation algorithm that segments an image as shown in Figs. 1 and 6 is stored in a computer memory; when executed by a processor, it carries out the image segmentation process and displays the segmentation result as shown in Figs. 1 and 2). Regarding the remaining limitations, the limitations are similar to those of claim 1; thus the rejection applied to claim 1 also applies to claim 16.
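The online-adaptation step paraphrased for claims 9 and 13 (select the closest stored model, clone it, fine-tune the clone on the new condition, and keep the result) can be sketched as follows. This is a hedged illustration, assuming a toy `Model` class and a stand-in fine-tuning step; none of these names come from Horia.

```python
import copy

class Model:
    """Toy model: a condition label plus a flat list of weights."""
    def __init__(self, condition, weights):
        self.condition = condition
        self.weights = list(weights)

def adapt_to_unseen(models, closest_idx, new_condition, lr=0.1):
    """Clone the closest model, 'fine-tune' the clone, and store it."""
    clone = copy.deepcopy(models[closest_idx])          # clone, not mutate
    clone.condition = new_condition
    clone.weights = [w * (1 - lr) for w in clone.weights]  # stand-in update
    models.append(clone)                                # new parameterization
    return clone

models = [Model("day", [1.0, 2.0]), Model("dusk", [0.5, 1.5])]
new = adapt_to_unseen(models, closest_idx=1, new_condition="night")
assert new.condition == "night" and len(models) == 3
assert models[1].weights == [0.5, 1.5]  # the original model is untouched
```

The deep copy matters: fine-tuning the clone must not degrade the model already stored for the nearby condition.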
As to claim 17, Horia teaches an autonomous vehicle (p. 38, section B: "We observe improvements in segmentation across the board, with the most important classes (vehicles, pedestrians, bicyclists etc.) becoming distinguishable in even the most difficult conditions") including storage, one or more processors, one or more image sensors, and one or more actuators, wherein the storage includes a non-transitory, computer-readable medium having instructions stored thereon that, when executed by the one or more processors, cause the one or more processors to perform the method (Figs. 1 and 6-9, as cited for claim 16). Regarding the remaining limitations, the limitations are similar to those of claim 1; thus the rejection applied to claim 1 also applies to claim 17.

As to claim 18, Horia teaches that, when the prediction confidence is below the confidence threshold and/or the degree of similarity of the one or more predicted condition features is less than a dissimilar threshold (this limitation is discussed in claim 8 above), the method comprises storing the retrieved image as an image with an unknown condition, and performing, by the autonomous vehicle, a minimal risk maneuver (this limitation is discussed in claim 8 above).

Allowable Subject Matter

4. Claim 7 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

5. Regarding claim 7, no prior art is found to anticipate or render obvious the following limitation: "when the prediction confidence is above the confidence threshold, and when the degree of similarity of the one or more predicted confidence condition features is greater than a dissimilar threshold and below a matching threshold, the method further comprises: retrieving a machine learning model from a parameter database, the retrieved machine learning model having a parameterization resulting from training the machine learning model using images having a condition closest to the condition of the obtained image, wherein the parameterization database includes a plurality of machine learning models each having a different parameterization derived from training the machine learning model using images having a different condition; modifying the retrieved machine learning model by interpolating its parameterization using a difference between the predicted condition features and condition features of a condition associated with the retrieved machine learning model; and performing the task by applying the obtained image to the modified machine learning model having the interpolated parameterization."

Additional prior art, not applied in the rejection: "Semisupervised Semantic Segmentation by Improving Prediction Confidence", published 29 March 2021, IEEE Transactions on Neural Networks and Learning Systems, Vol. 33, to Huaian Chen et al., discloses: "Most of the recent image segmentation methods have tried to achieve the utmost segmentation results using large-scale pixel-level annotated data sets. However, obtaining these pixel-level annotated training data is usually tedious and expensive. In this work, we address the task of semisupervised semantic segmentation, which reduces the need for large numbers of pixel-level annotated images. We propose a method for semisupervised semantic segmentation by improving the confidence of the predicted class probability map via two parts. First, we build an adversarial framework that regards the segmentation network as the generator and uses a fully convolutional network as the discriminator. The adversarial learning makes the prediction class probability closer to 1. Second, the information entropy of the predicted class probability map is computed to represent the unpredictability of the segmentation prediction. Then, we infer the label-error map of the segmentation prediction and minimize the uncertainty on misclassified regions for unlabeled images. In contrast to existing semisupervised and weakly supervised semantic segmentation methods, the proposed method results in more confident predictions by focusing on the misclassified regions, especially the boundary regions. Our experimental results on the PASCAL VOC 2012 and PASCAL-CONTEXT data sets show that the proposed method achieves competitive segmentation performance." (see Abstract)

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mekonen Bekele, whose telephone number is (469) 295-9077. The examiner can normally be reached Monday through Friday, 9:00 AM to 6:50 PM Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, George Eng, can be reached at (571) 272-7495. The fax number for the organization where the application or proceeding is assigned is 571-237-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866.217-919 (toll-free).

/MEKONEN T BEKELE/
Primary Examiner, Art Unit 2699

Prosecution Timeline

Apr 15, 2024
Application Filed
Feb 03, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602744
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12602897
FACE DETECTION BASED FILTERING FOR IMAGE PROCESSING
2y 5m to grant Granted Apr 14, 2026
Patent 12586244
COMPOSITE IMAGE CAPTURE WITH TWO DEGREES OF FREEDOM CAMERA CAPTURING OVERLAPPING IMAGE FRAMES
2y 5m to grant Granted Mar 24, 2026
Patent 12561941
Video Shooting Method and Electronic Device
2y 5m to grant Granted Feb 24, 2026
Patent 12561761
PROGRESSIVE REFINEMENT VIDEO ENHANCEMENT
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
92%
With Interview (+13.1%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 757 resolved cases by this examiner. Grant probability derived from career allow rate.
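The headline figures above can be checked directly from the stated methodology: the 79% grant probability is the examiner's career allow rate (599 granted of 757 resolved), and the with-interview figure follows if the reported +13.1% interview lift is applied additively. This is a reconstruction of the dashboard's stated derivation, not its actual code.

```python
# Career allow rate from the examiner's resolved cases
granted, resolved = 599, 757
allow_rate = granted / resolved
assert round(allow_rate * 100) == 79          # matches the 79% shown

# With-interview projection, assuming the +13.1% lift is additive
interview_lift = 0.131
with_interview = min(allow_rate + interview_lift, 1.0)
assert round(with_interview * 100) == 92      # matches the 92% shown
```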
