Prosecution Insights
Last updated: April 19, 2026
Application No. 18/516,672

METHOD FOR OPERATING A MOTORIZED FLAP ARRANGEMENT OF A MOTOR VEHICLE

Final Rejection: §103, §112
Filed: Nov 21, 2023
Examiner: DIZON, EDWARD ANDREW IZON
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Brose Fahrzeugteile SE & Co. Kommanditgesellschaft Bamberg
OA Round: 2 (Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 1 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 43 across all art units (42 currently pending)

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 79.7% (+39.7% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Comparisons are against the Tech Center average estimate • Based on career data from 1 resolved case

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1-5 and 7-20 are currently pending. Claims 1, 11, and 18 are currently amended. Claim 6 has been cancelled.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 15 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 15 is an improper dependent claim because it depends on Claim 6, which has been cancelled. The Examiner will evaluate Claim 15 as if it depends directly from independent Claim 1, which incorporates the subject matter of cancelled Claim 6. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-13 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (US 20200208460 A1), hereinafter referred to as Ma, in view of Venetsky et al. (US 20080085048 A1), hereinafter referred to as Venetsky.

Regarding Claim 1, Ma teaches a method for operating a motorized flap arrangement of a motor vehicle, a drive arrangement comprising at least one drive assigned to a flap (a liftgate controlled by a hands-free gesture system; [0018]), a sensor arrangement and a control arrangement coupled to the drive arrangement and to the sensor arrangement (the sensor, control arrangement, and drive arrangement are coupled together; [0021], [0022]), sensor values relating to an operator action performed by an operator outside of the motor vehicle being sensed by the sensor arrangement (sensor values detect the movements of an object at the rear end of the vehicle; [0022]), the sensed sensor values being checked by the control arrangement in a check routine for the presence of a valid operator action (the controller analyzes the sensor values as an actuation or non-actuation gesture; [0018]), and the drive arrangement being caused by the control arrangement to effect a motorized adjustment if a valid operator action is present (the controller effects the motorized adjustment to actuate the liftgate to open or close; [0018]), wherein a predefined selection of characteristic values is determined from the sensor values in the check routine (the system samples the proximity sensor signals to generate proximity data points containing values derived from the sensors; [0072]), a recognition value of the operator action is calculated from the determined characteristic values using a predefined weighting function with respective weighting parameters assigned to the characteristic values (a probability value is computed from the data points using a logistic regression probability function, with regression coefficients serving as the weighting parameters; [0068]-[0069]), and the operator action is classified as a valid operator action if a recognition criterion is fulfilled by the recognition value (the object movement is classified as an actuation gesture when the calculated probability value exceeds the defined threshold; [0073]).

Ma does not explicitly teach that the characteristic values are defined as statistical characteristic values. However, Venetsky discloses a robotic gesture recognition system that computes statistical characteristics from sampled gesture data. Venetsky teaches extracting statistical features, such as mean and standard deviation values, from tracked gesture coordinates to improve the analysis and filtering of the gesture data ([0079]). This teaching is equivalent to the claimed limitation because computing the mean and standard deviation transforms the raw sample points into statistical summary metrics that characterize the distribution of the gesture data, i.e., the claimed "statistical characteristic values". Ma and Venetsky are analogous to the claimed invention because they are in the same field of gesture recognition for motorized control systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ma to incorporate the teaching of computing the mean and standard deviation from the raw gesture sample data, as taught by Venetsky, motivated by the desire to filter out data noise and errors before applying the data to the machine learning classifier.
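To make the claim mapping concrete, here is a minimal Python sketch of the check routine as the rejection reads it onto Ma: a predefined selection of characteristic values is extracted from the sensor samples, a logistic weighting function with fixed regression coefficients produces the recognition value, and a threshold serves as the recognition criterion. The feature choice, coefficient values, threshold, and all names are illustrative assumptions, not details from Ma or the application.

```python
import numpy as np

# Illustrative weighting parameters (regression coefficients) and threshold;
# in Ma these would come from training, not hand-tuning. All values assumed.
WEIGHTS = np.array([1.8, -0.9])  # one weighting parameter per characteristic value
BIAS = -0.4
THRESHOLD = 0.5                  # recognition criterion on the probability

def characteristic_values(samples: np.ndarray) -> np.ndarray:
    """Predefined selection of characteristic values from the sensor values.
    Here: mean and standard deviation, the statistical features the rejection
    draws from Venetsky [0079]."""
    return np.array([samples.mean(), samples.std()])

def recognition_value(features: np.ndarray) -> float:
    """Predefined weighting function: logistic function of the weighted sum
    of the characteristic values (cf. Ma [0068]-[0069])."""
    z = float(WEIGHTS @ features) + BIAS
    return 1.0 / (1.0 + np.exp(-z))

def is_valid_operator_action(samples: np.ndarray) -> bool:
    """Check routine: the action is valid when the recognition value
    fulfils the threshold recognition criterion (cf. Ma [0073])."""
    return recognition_value(characteristic_values(samples)) >= THRESHOLD
```

Note that Claim 2's linear weighting function corresponds to the weighted sum z inside recognition_value; because the logistic function is monotonic, thresholding the probability at 0.5 is equivalent to thresholding z at 0.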
This provides the benefit of ensuring that the gesture classification operates on high-quality, filtered data and prevents erroneous actuation of the system caused by environmental interference.

Regarding Claim 2, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that the weighting function is predefined as a linear function of the characteristic values with the weighting parameters as coefficients of the characteristic values (generating the classification function using a linear algorithm (hard- or soft-margin SVM) and equations in which coefficients (regression coefficients) act as the weighting parameters multiplied by the sampled characteristic values; [0065], [0068]-[0069]).

Regarding Claim 3, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that the predefined selection comprises characteristic values of distance values and/or velocity values from the sensor values, and/or characteristic values derived from distance values and velocity values (the sampled proximity signal data points are distance values measuring the physical distance between the proximity sensor and the operator; [0022], [0052]).

Regarding Claim 4, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that the predefined selection comprises characteristic values of intensity values (the signal level is analyzed, which is equivalent to using the intensity value of the signal; [0022], [0052]).

Regarding Claim 5, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that a check-time window is established from the sensed sensor values by the control arrangement on the basis of a trigger criterion (the trigger criterion is met when the signal properties exceed the threshold; [0053]), and that the sensor values sensed in the check-time window are subjected to the check routine (the time span over which the signals are analyzed after the trigger occurs is the check-time window; [0055]).
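Claim 5's check-time window can likewise be sketched: nothing is classified until a trigger criterion fires, after which a fixed-length window of sensor values is handed to the check routine. The trigger level, window size, and function names below are illustrative assumptions, not details from Ma.

```python
import numpy as np
from collections import deque

TRIGGER_LEVEL = 0.2  # assumed trigger criterion on the raw signal level
WINDOW_SIZE = 64     # assumed length of the check-time window, in samples

def stream_check(samples, check_routine):
    """Establish a check-time window once the trigger criterion fires, then
    subject exactly that window of sensor values to the check routine."""
    window = deque(maxlen=WINDOW_SIZE)
    triggered = False
    for s in samples:
        if not triggered and abs(s) > TRIGGER_LEVEL:
            triggered = True   # trigger criterion fulfilled: open the window
            window.clear()
        if triggered:
            window.append(s)
            if len(window) == WINDOW_SIZE:
                yield check_routine(np.asarray(window))
                triggered = False  # close the window, await the next trigger
```

Chained with the earlier sketch, stream_check(signal, is_valid_operator_action) would yield one classification per triggered window.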
Regarding Claim 7, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that, in the check routine, sensor values are fed as input values to a trained machine learning model (see at least Ma, [0063]-[0064]: "The learning module 116 may be configured to generate the classifier 164 by applying to the machine learning algorithm the following data: the proximity signals generated responsive to the detected object movement, or more particularly the training data points derived from the proximity signals… Each of the training data points may include a value sampled from the proximity signal generated by the proximity sensor 110A responsive to a given object movement during one of the learning modes and a value sampled from the proximity signal generated by the proximity sensor 110B responsive to the given object movement."; see at least Ma, [0072]: "The access module 158 may then apply the proximity data points to the active classifier 164 to determine whether the object movement was an actuation gesture or a non-actuation gesture."; the input values (proximity data points) are fed into the trained model (active classifier) during the check routine), and that the recognition criterion relates to an output value of the trained machine learning model in addition to the recognition value (see at least Ma, [0073]: "Referring to FIG. 7, for example, responsive to determining that at least a set threshold number or at least a set threshold percentage of the proximity data points are in actuation class based on the function f(x) (e.g., a given proximity data point (xa, xb) is in the actuation class if xa is greater than f(xb)), the access module 158 may be configured to determine that the object movement is an actuation gesture."; the recognition criterion of exceeding the threshold probability relates directly to the output value (the probability) of the model).

Regarding Claim 8, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that the weighting parameters are obtained by the control arrangement via a communication network of the motor vehicle (see at least Ma, [0041]: "For example, the external computing device 172 may be coupled to the proximity sensors 110 of the vehicle 102, such as via the controllers 106 and/or a controller area network (CAN) bus of the vehicle 102. The learning module 116 of the external computing device 172 may be configured to generate the classifier 164 based on training data 162 derived from proximity signal sets generated by the proximity sensors 110 of the vehicle 102, as described in additional detail below. After the classifier 164 is generated by the external computing device 172, the classifier 164 may be transferred to the vehicle 102 and/or other similar vehicles, such as the vehicle 170, for utilization by the access module 158 of the vehicle 102 and/or the other vehicles."; the weighting parameters (classifier) can be generated on an external computer and transferred to the vehicle over a communication network).

Regarding Claim 9, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that a plurality of sets of weighting parameters is provided, with an operating mode assigned to each of them, for use when one of the operating modes occurs (multiple classifiers containing weighting parameters assigned to user-specific operating modes; [0075]-[0076]).

Regarding Claim 10, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that the weighting parameters are determined in a parameterization routine based on predefined parameterization sensor values in accordance with a parameterization rule (determining regression coefficients (weighting parameters) via a learning module using sensor training data (parameterization sensor values) and an estimation algorithm (parameterization rule); [0070]).
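Since Claim 7 above (and Claim 16 below) turns on Ma's trained machine learning model, a minimal sketch of the cited arrangement may help, assuming scikit-learn as a stand-in for Ma's unspecified implementation: training data points (xa, xb) fit a linear support vector machine, and the check routine feeds new proximity data points to the trained model. The toy data and names are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Assumed toy training data: rows are proximity data points (xa, xb) sampled
# from two sensors; labels mark actuation (1) vs. non-actuation (0) gestures.
X_train = np.array([[0.9, 0.8], [0.7, 0.9], [0.1, 0.2], [0.2, 0.1]])
y_train = np.array([1, 1, 0, 0])

# A linear support vector machine stands in for Ma's classifier 164 ([0064]).
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

# Check routine: feed new proximity data points to the trained model; its
# output feeds the recognition criterion (cf. the mapping to [0072]-[0073]).
new_points = np.array([[0.8, 0.85]])
is_actuation = bool(clf.predict(new_points)[0] == 1)
```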
Regarding Claim 11, Ma teaches a control arrangement for a motorized flap arrangement of a motor vehicle, the control arrangement, when in the mounted state, being coupled to a sensor arrangement that senses sensor values relating to an operator action performed by an operator outside of the motor vehicle (the control arrangement (one or more controllers) is coupled to a sensor arrangement (proximity sensors) that senses an operator's action; [0022]), the control arrangement checking the sensed sensor values in a check routine for the presence of a valid operator action (the control arrangement, via the access module, performs a check routine by applying sensor data to its classifier and determining whether a valid operator action is an actuation or non-actuation gesture; [0034]), and the control arrangement causing a drive arrangement of the flap arrangement to effect a motorized adjustment if a valid operator action is present (the control arrangement is configured to cause the drive arrangement (motor) to perform a motorized adjustment if the action is valid; [0034]), wherein the control arrangement is configured to determine a predefined selection of characteristic values from the sensor values in the check routine (the system samples the proximity sensor signals to generate proximity data points containing values derived from the sensors; [0072]), to calculate a recognition value of the operator action from the determined characteristic values using a predefined weighting function with respective weighting parameters assigned to the characteristic values (a probability value is computed from the data points using a logistic regression probability function, with regression coefficients serving as the weighting parameters; [0068]-[0069]), and to classify the operator action as a valid operator action if a recognition criterion is fulfilled by the recognition value (the object movement is classified as an actuation gesture when the calculated probability value exceeds the defined threshold; [0073]).

Ma does not explicitly teach that the characteristic values are defined as statistical characteristic values. However, Venetsky discloses a robotic gesture recognition system that computes statistical characteristics from sampled gesture data. Venetsky teaches extracting statistical features, such as mean and standard deviation values, from tracked gesture coordinates to improve the analysis and filtering of the gesture data ([0079]). This teaching is equivalent to the claimed limitation because computing the mean and standard deviation transforms the raw sample points into statistical summary metrics that characterize the distribution of the gesture data, i.e., the claimed "statistical characteristic values". It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ma to incorporate the teaching of computing the mean and standard deviation from the raw gesture sample data, as taught by Venetsky, motivated by the desire to filter out data noise and errors before applying the data to the machine learning classifier. This provides the benefit of ensuring that the gesture classification operates on high-quality, filtered data and prevents erroneous actuation of the system caused by environmental interference.

Regarding Claim 12, Ma and Venetsky remain as applied above in Claim 1.
Ma further teaches a flap arrangement for a motor vehicle (see at least Ma, [0021]: "The system 100 may include a vehicle 102 with a hands-free liftgate 104. The liftgate 104 may be a powered liftgate. The liftgate 104 may be coupled to a motor, which may be coupled to one or more controllers 106 of the vehicle 102."). The flap arrangement is described as performing the functions of the method established in Claim 1 above.

Regarding Claim 13, Ma and Venetsky remain as applied above in Claim 12. Ma further teaches that the sensor arrangement has at least one radar sensor (see at least Ma, [0023]: "Alternatively, one or more of the proximity sensors 110 may be an inductive sensor, a magnetic sensor, a RADAR sensor, or a LIDAR sensor.").

Regarding Claim 15, Ma and Venetsky remain as applied above in Claim 1. Ma does not explicitly teach that the characteristic values of the selection comprise at least one of: maximum value, minimum value, time point of maximum value, time point of minimum value, mean value, variance, skewness, and/or kurtosis. However, Venetsky discloses a robotic gesture recognition system that computes statistical characteristics from sampled gesture data. Venetsky teaches extracting statistical features, such as mean and standard deviation values, from tracked gesture coordinates to improve the analysis and filtering of the gesture data ([0079]). This teaching is equivalent to the claimed limitation because the sample points are used to calculate the mean and standard deviation (variance). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ma to incorporate the teaching of selecting characteristic values comprising the mean value and standard deviation, as taught by Venetsky, motivated by the desire to improve the detection accuracy of the system and filter out erroneous sensor outliers. This provides the benefit of ensuring that the gesture classification model operates on quality data that has been filtered, preventing erroneous actuation of the system due to noisy or scattered sensor readings.
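The characteristic values recited in Claim 15 are all standard summary statistics; a sketch of computing them over one check-time window, assuming NumPy/SciPy and an illustrative function name:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def claim15_features(samples: np.ndarray) -> dict:
    """The statistical characteristic values recited in Claim 15, computed
    over one check-time window of sensor values. Function name assumed."""
    return {
        "max": float(samples.max()),
        "min": float(samples.min()),
        "t_max": int(samples.argmax()),  # time point (index) of the maximum
        "t_min": int(samples.argmin()),  # time point (index) of the minimum
        "mean": float(samples.mean()),
        "variance": float(samples.var()),
        "skewness": float(skew(samples)),
        "kurtosis": float(kurtosis(samples)),
    }
```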
Regarding Claim 16, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that the trained machine learning model is based on a support vector machine (see at least Ma, [0064]: "FIG. 7, for example, is a graph of exemplary training data 162 and of an exemplary classifier 164 generated by application of the training data 162 to a machine learning algorithm that is a support vector machine.").

Regarding Claim 17, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that the weighting parameters are obtained by the control arrangement via a communication network of the motor vehicle, transmitted from a central motor-vehicle control system (transferring the classifier containing the weighting parameters via the vehicle communication network (CAN bus) from a main computing device or centralized controller storage to the access module; [0041], [0042]).

Regarding Claim 18, Ma and Venetsky remain as applied above in Claim 7. Ma further teaches that the trained machine learning model is stored in the control arrangement (the classifier is the trained machine learning model and is part of the controller data residing in the non-volatile storage of the controller; [0035]).

Regarding Claim 19, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches that a plurality of sets of weighting parameters is provided, with an operating mode assigned to each of them (providing multiple user-specific classifiers containing unique weighting parameters assigned to specific operating modes for different authorized users; [0075]-[0076]), and that when one of the operating modes occurs, as a result of an operating-mode criterion being fulfilled by the sensor values and/or as a result of an operating-mode signal being obtained by the control arrangement, the set of weighting parameters assigned to the respective operating mode is used in the check routine (obtaining a signal ID from a device triggers the specific user mode, and the corresponding retrieved classifier (weighting parameters) is used in the gesture check routine; [0076]).

Regarding Claim 20, Ma and Venetsky remain as applied above in Claim 10. Ma further teaches that the recognition criterion is defined, at least partly, as a threshold value for the recognition value (evaluating the calculated probability (recognition value) against a predefined threshold probability (threshold value) to determine whether the gesture is valid; [0073]), and that the weighting parameters are determined using a parameterization rule based on linear regression (calculating regression coefficients using a linear algorithm and iteratively reweighted least squares, a parameterization rule based on solving linear regression models; [0065], [0070]).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Ma in view of Venetsky, as applied in Claim 1, and further in view of Johansson et al. (US 12264529 B2), hereinafter referred to as Johansson.

Regarding Claim 14, Ma and Venetsky remain as applied above in Claim 1. Ma further teaches using distance and velocity values from the sensors (see at least Ma, [0022]: "…each proximity signal may illustrate the movement of the object towards and then away from the proximity sensor 110 over time, such as by indicating the changing distance between the object and proximity sensor 110 over time."; see at least Ma, [0050]: "Exemplary actuation gestures performed by the user may include, without limitation, kicks towards and/or under the rear end 108 of the vehicle 102 that include one or more of the following characteristics: a relatively slow kick, a regular speed kick, a relatively fast kick…"; analyzing different kick speeds is equivalent to deriving velocity values from the sensor data). Ma does not explicitly teach using the statistical tool of cross-correlation on the distance values and velocity values. However, Johansson, in the same field of endeavor, teaches using cross-correlation analysis to provide more reliable and accurate estimations (see at least Johansson, Col. 10, lines 18-21: "…the control unit may analyze cross-correlation between various items in an effort for providing more reliable and accurate estimation of the user's preference."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the gesture recognition system taught by Ma to apply the statistical analysis tool of cross-correlation, as taught by Johansson, to the distance and velocity values from the sensor. This provides the benefit of capturing the relationship between the user's forward and retracting motions for the machine learning model, improving reliability.
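The cross-correlation the rejection maps to Johansson can be sketched as follows; deriving velocity from the sampled distance trace, the normalization scheme, and all names and values are illustrative assumptions, not details from Ma or Johansson.

```python
import numpy as np

def normalized_cross_correlation(distance: np.ndarray, velocity: np.ndarray) -> np.ndarray:
    """Cross-correlate the distance and velocity traces of one gesture, the
    statistical tool the rejection maps to Johansson's teaching."""
    d = (distance - distance.mean()) / (distance.std() or 1.0)
    v = (velocity - velocity.mean()) / (velocity.std() or 1.0)
    return np.correlate(d, v, mode="full") / len(d)

# Example: derive velocity from the sampled distance values, then correlate.
dt = 0.01  # assumed sampling interval in seconds
distance = np.array([1.0, 0.8, 0.5, 0.3, 0.4, 0.7, 1.0])
velocity = np.gradient(distance, dt)
xcorr = normalized_cross_correlation(distance, velocity)
```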
Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Wuest (US 20250198221 A1)
Pfister (US 20230266718 A1)
Wörner (DE 102015003666 A1)
Battlogg (US 11873671)

Response to Arguments

Applicant's arguments, see pages 6 and 7, filed 11/21/2025, with respect to the rejection of claims 1 and 11 under 35 U.S.C. § 103 have been fully considered. The Applicant argues that the machine learning model described in Ma does not correspond to the determination of characteristic values and the use of a predefined weighting function defined in Claim 1. The Examiner respectfully disagrees. Ma explicitly details the integration of a predefined weighting function via its logistic regression machine: Ma teaches sampling sensor signals to determine data points xa and xb (characteristic values) and calculating a probability recognition value using the logistic regression equation (a predefined weighting function) with regression coefficients (weighting parameters) ([0068]-[0073]).

Applicant further argues that Ma fails to teach that the characteristic values are defined as statistical characteristic values, and that the statistical characteristic values must be the direct inputs to the predefined weighting function used to classify the action, rather than the slope of the signal used as a trigger threshold as taught by Ma. The Applicant thereby narrows the interpretation, arguing that a statistical measure used to trigger the action does not correspond to the use of a predefined weighting function applied to statistical characteristic values. The Examiner's earlier interpretation read "statistical characteristic value" broadly on the signal rate of change (slope) used in the check routine. Upon further consideration, however, a new ground of rejection is made based on the combination of Ma et al. (US 20200208460 A1) in view of Venetsky et al. (US 20080085048 A1). Venetsky discloses a robotic gesture recognition system that uses a statistical method for filtering errors by converting data into statistical characteristic values (mean and standard deviation) and checking new data points against standard-deviation limits prior to classifying the motion ([0079]-[0080]). Accordingly, the claims remain rejected under the new ground of rejection necessitated by the amended claims.
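The standard-deviation filtering attributed to Venetsky above amounts to discarding outlier samples before classification; a minimal sketch, assuming a band of k standard deviations (the value of k and the names are illustrative, not from Venetsky):

```python
import numpy as np

def filter_by_std(samples: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Keep only samples within k standard deviations of the mean, discarding
    outliers before the data reach the classifier. Band width k is assumed."""
    mu, sigma = samples.mean(), samples.std()
    return samples[np.abs(samples - mu) <= k * sigma]
```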
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD ANDREW IZON DIZON, whose telephone number is (571) 272-4834. The examiner can normally be reached M-F, 9AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Angela Ortiz, can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWARD ANDREW IZON DIZON/
Examiner, Art Unit 3663

/ANGELA Y ORTIZ/
Supervisory Patent Examiner, Art Unit 3663

Prosecution Timeline

Nov 21, 2023
Application Filed
Jul 11, 2025
Non-Final Rejection — §103, §112
Nov 21, 2025
Response Filed
Feb 20, 2026
Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
