Prosecution Insights
Last updated: April 19, 2026
Application No. 18/456,019

Gesture-Based Systems For Human Following

Status: Final Rejection (§103)
Filed: Aug 25, 2023
Examiner: TRAN, LONG T
Art Unit: 3747
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Deere & Company
OA Round: 4 (Final)
Grant Probability: 83% (Favorable)
OA Rounds: 5-6
To Grant: 2y 2m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 83%, above average (1114 granted / 1343 resolved; +12.9% vs TC avg)
Interview Lift: +13.8% (moderate lift; measured over resolved cases with an interview)
Avg Prosecution: 2y 2m typical timeline; 28 applications currently pending
Total Applications: 1371 across all art units (career history)
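
The headline figures above are simple ratio arithmetic over the examiner's case counts. As a rough sketch of how they combine, assuming hypothetical per-group counts for the interview split (only the totals are shown on this page, so the placeholders will not reproduce the exact +13.8%):

```python
# Career allow rate from the counts reported above.
granted, resolved = 1114, 1343
allow_rate = granted / resolved                      # ~0.829, displayed as 83%

# Interview lift: allow rate among resolved cases with an interview minus the
# rate among those without one. The per-group counts below are hypothetical
# placeholders; only the resulting lift is reported on this page.
with_iv = {"granted": 300, "resolved": 320}
without_iv = {"granted": 814, "resolved": 1023}
lift = (with_iv["granted"] / with_iv["resolved"]
        - without_iv["granted"] / without_iv["resolved"])

print(f"allow rate {allow_rate:.1%}, interview lift {lift:+.1%}")
```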

Statute-Specific Performance

§101: 1.2% (-38.8% vs TC avg)
§103: 37.0% (-3.0% vs TC avg)
§102: 44.6% (+4.6% vs TC avg)
§112: 15.8% (-24.2% vs TC avg)
TC avg = Tech Center average estimate. Based on career data from 1343 resolved cases.

Office Action

Final Rejection — §103 (Feb 24, 2026)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The Remarks filed January 6, 2026 have been reviewed by the Examiner. An Applicant-initiated interview was conducted on December 22, 2025. Claims 1-20 remain pending in the application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 8-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (US 2023/0331259).

Regarding Claim 1: Wu et al. teaches a method comprising: detecting a human in a first image captured using an image sensor (step 401 uses camera 130 to identify a person within a range of vehicle 100) connected to a vehicle; inputting at least a portion of the first image to a first machine learning model (via the neural network described in paragraph 0175; the controller 112 uses data points to learn and store the images as described in paragraphs 0151-0153) to obtain a first pose of the human; comparing the first pose to pose parameters for an authentication gesture (paragraph 0125 describes matching obtained image data of a person with prestored data); authenticating the human based on a match between the first pose and the authentication gesture to enable gesture commands from the human (steps 402 and 403 along with paragraph 0125 describe the authentication process via image matching with prestored data); inputting at least a portion of a second image captured using the image sensor to the first machine learning model to obtain a second pose of the human (via controller 112, as data images are obtained and learned, they are stored as additional data sets and known postures/gestures; see paragraphs 0151-0153, 0175, and 0125-0128); comparing the second pose to pose parameters for a follow gesture (via steps 403 and 404, and as described in paragraphs 0124-0125, the gesture is obtained by the camera and used to determine a command); commencing a follow mode based on a match between the second pose and the follow gesture occurring after authentication of the human (the summoning of the vehicle as described in the abstract); and controlling the vehicle to follow the human responsive to being in the follow mode (Figs. 2-3 and 7-9 show various gestures that control the vehicle to move forward a certain distance, turn, park, and make other movements that would allow the vehicle to follow the person commanding it and thus "summon" the vehicle, which equates to a follow mode).

Wu et al. is silent as to wherein the first pose includes a position of a wrist, an elbow, a shoulder, a waist, a knee, an ankle, or a knuckle. However, Wu et al. in paragraph 0006 teaches "The identifier information of the person includes but is not limited to facial information of the person, figure information of the person, and fingerprint information of the person," with emphasis on "figure information of the person." The Examiner has broadly but reasonably interpreted this phrase to include human body parts and limbs, and thus it would encompass a person's elbow, shoulder, waist, knee, etc. in a pose as captured. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to provide the captured figure information as taught by Wu et al., since the series of body movements and captured poses are used to differentiate a target person from a non-targeted user in order to authenticate and allow a gesture-based movement to summon the vehicle (paragraph 0006 of Wu et al.).

Regarding Claim 2: Wu et al. teaches tracking the human in video captured using the image sensor after the authentication of the human (via 140, paragraph 0093).

Regarding Claim 3: Wu et al. teaches determining a distance of the human from the vehicle based on the video captured using the image sensor during the follow mode (step 401 and abstract); terminating the follow mode responsive to the distance of the human from the vehicle exceeding a threshold (paragraph 0010, when exceeding a range); and stopping the vehicle responsive to terminating the follow mode (also from paragraph 0010, an example of stopping the vehicle a certain distance based on the vehicle's range from the person and/or command).

Regarding Claim 4: Wu et al. teaches receiving a gesture configuration command; responsive to the gesture configuration command, iteratively inputting at least portions of a sequence of images captured using the image sensor to the first machine learning model to obtain a set of poses and comparing the poses in the set of poses using a distance metric for poses until an average distance between poses in the set of poses is below a threshold; determining a new set of pose parameters based on the set of poses; and storing the new set of pose parameters in a gesture record associated with a command for the vehicle (Figs. 2-3 and 7-9, paragraphs 0124-0126, and steps 400-405 describe the distances of the images obtained in relation to a range; since these images are taken continuously as the person/object moves and makes gestures, the learned movements and distance determine the new range, which in itself becomes the new average based on the newly defined range and threshold).

Regarding Claim 5: Wu et al. teaches that detecting the human in the first image comprises inputting the first image to a deep neural network to obtain a bounding box for the human in the first image (paragraph 0006 describes limiting the image to portions of a person, such as the face, which meets the conditions of a bounding box).

Regarding Claim 6: Wu et al. teaches that a portion of the first image specified by the bounding box is input to the first machine learning model to obtain the first pose (paragraph 0006 describes limiting the image to portions of a person, such as the face, which meets the conditions of a bounding box; and a neural network is used as described in paragraph 0175).

Regarding Claim 8: See the rejection of Claim 1 above.
Regarding Claim 9: See the rejection of Claim 2 above.
Regarding Claim 10: See the rejection of Claim 3 above.
Regarding Claim 11: See the rejection of Claim 4 above.
Regarding Claim 12: See the rejection of Claim 5 above.
Regarding Claim 13: See the rejection of Claim 6 above.

Regarding Claim 15: Wu et al. teaches actuators (via 102 and 106, Fig. 1) configured to control motion of the vehicle, and in which the processing apparatus is configured to control, using one or more of the actuators, the vehicle to follow the human (see abstract).

Regarding Claim 16: Wu et al. teaches that the processing apparatus is attached to the vehicle (Fig. 2).

Regarding Claim 17: See the rejection of Claim 1 above.
Regarding Claim 18: See the rejection of Claim 2 above.
Regarding Claim 19: See the rejection of Claim 4 above.
Regarding Claim 20: See the rejections of Claims 5 and 6 above.

Allowable Subject Matter

Claims 7 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments filed January 6, 2026 have been fully considered but they are not persuasive. The Applicant argues that Wu et al. does not teach the limitation "wherein the first pose includes a position of a wrist, an elbow, a shoulder, a waist, a knee, an ankle, or a knuckle." The Examiner disagrees and maintains the rejection because Wu et al. teaches in paragraph 0006 the identification of "figure information of the person." The Examiner understands the term "figure information" to equate to a person's body shape and parts, which therefore includes the limbs and parts of a human body that make up a pose. This broad but reasonable interpretation of "figure information" is therefore understood by the Examiner to show that Wu et al. teaches each and every limitation of the claim in a manner obvious to one of ordinary skill in the art at the time of the invention.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LONG T TRAN, whose telephone number is (571) 270-1899. The examiner can normally be reached Mon-Fri, 9:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Logan Kraft, can be reached at 571-270-5065. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LONG T TRAN/
Primary Examiner, Art Unit 3747
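
For readers parsing the claim language the Examiner maps above, claim 4 recites an iterative gesture-configuration loop: keep estimating poses from new frames until the poses agree (average pairwise distance below a threshold), then store their average as the new pose parameters for a vehicle command. The sketch below is only an illustration of that claim language, not the applicant's implementation or Wu's disclosure; capture_image, estimate_pose, the 0.05 threshold, and the 10-frame window are hypothetical placeholders.

```python
import numpy as np

def pose_distance(pose_a: np.ndarray, pose_b: np.ndarray) -> float:
    """Mean Euclidean distance between corresponding keypoints of two poses.

    Each pose is an (N, 2) array of keypoint coordinates (e.g. wrist, elbow,
    shoulder, waist, knee, ankle) normalized to the image frame.
    """
    return float(np.mean(np.linalg.norm(pose_a - pose_b, axis=1)))

def configure_gesture(capture_image, estimate_pose, command: str,
                      gesture_records: dict, threshold: float = 0.05,
                      window: int = 10) -> None:
    """Illustrative gesture-configuration loop in the spirit of claim 4.

    Samples frames and estimates poses until the last `window` poses agree
    (average pairwise distance below `threshold`), then stores their mean
    as the new pose parameters for the given vehicle command.
    """
    poses = []
    while True:
        frame = capture_image()             # image sensor connected to the vehicle
        poses.append(estimate_pose(frame))  # first machine learning model
        recent = poses[-window:]
        if len(recent) < window:
            continue
        # Average pairwise distance across the recent set of poses.
        dists = [pose_distance(a, b)
                 for i, a in enumerate(recent)
                 for b in recent[i + 1:]]
        if np.mean(dists) < threshold:
            break
    # New pose parameters = mean keypoint positions of the converged set,
    # stored in a gesture record associated with a vehicle command.
    gesture_records[command] = np.mean(np.stack(recent), axis=0)
```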

Prosecution Timeline

Aug 25, 2023: Application Filed
Feb 25, 2025: Non-Final Rejection — §103
May 20, 2025: Applicant Interview (Telephonic)
May 20, 2025: Examiner Interview Summary
May 28, 2025: Response Filed
Jun 30, 2025: Final Rejection — §103
Aug 27, 2025: Applicant Interview (Telephonic)
Aug 27, 2025: Examiner Interview Summary
Sep 30, 2025: Request for Continued Examination
Oct 02, 2025: Response after Non-Final Action
Oct 03, 2025: Non-Final Rejection — §103
Dec 22, 2025: Applicant Interview (Telephonic)
Dec 22, 2025: Examiner Interview Summary
Jan 06, 2026: Response Filed
Feb 24, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601315: CARBURETED ENGINE HAVING AN ADJUSTABLE FUEL TO AIR RATIO (granted Apr 14, 2026; 2y 5m to grant)
Patent 12600407: STEERING CONTROL DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12600405: VEHICLE CONTROL DEVICE, VEHICLE CONTROL PROGRAM, AND VEHICLE CONTROL METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12594989: COMPENSATION SYSTEM FOR COMPENSATING TRAILER SWAY OF A VEHICLE TRAILER OF A TOWING VEHICLE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594936: VEHICLE CONTROL APPARATUS (granted Apr 07, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 83%
With Interview: 97% (+13.8%)
Median Time to Grant: 2y 2m
PTA Risk: High
Based on 1343 resolved cases by this examiner. Grant probability is derived from the career allow rate.
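
The displayed with-interview figure is consistent with simply adding the interview lift to the base grant probability, though the page does not state the exact combination rule; a minimal sketch under that assumption:

```python
base_grant_probability = 0.83   # career allow rate, used as the base projection
interview_lift = 0.138          # percentage-point lift reported above

# Assumed additive combination, capped at 100%; this reproduces the
# displayed 97% but is an inference, not a documented formula.
with_interview = min(base_grant_probability + interview_lift, 1.0)
print(f"{with_interview:.0%}")  # -> 97%
```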
