Prosecution Insights
Last updated: April 19, 2026
Application No. 18/710,207

MANEUVERING ASSISTANCE SYSTEM AND WORK VEHICLE

Final Rejection §103

Filed: May 15, 2024
Examiner: AN, IG TAI
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Tadano Ltd.
OA Round: 2 (Final)

Grant Probability: 56% (Moderate)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 56% (292 granted / 523 resolved; +3.8% vs TC avg)
Interview Lift: +26.1% on resolved cases with interview (strong)
Typical Timeline: 3y 8m avg prosecution; 32 currently pending
Career History: 555 total applications across all art units

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 523 resolved cases.
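The per-statute deltas above imply a common Tech Center baseline. A quick sanity check of that arithmetic (figures copied from the table; the computation itself is illustrative, not how the page necessarily computes it):

```python
# Examiner allow rates and "vs TC avg" deltas, as shown in the table above.
examiner_rates = {"101": 19.3, "103": 49.8, "102": 19.0, "112": 10.2}
vs_tc_delta = {"101": -20.7, "103": 9.8, "102": -21.0, "112": -29.8}

# Implied Tech Center average per statute: examiner rate minus delta.
tc_average = {s: round(examiner_rates[s] - vs_tc_delta[s], 1) for s in examiner_rates}

print(tc_average)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

All four statutes back out to the same 40.0% figure, which suggests the deltas are taken against a single Tech Center average estimate rather than per-statute averages.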

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Summary

The Amendment filed on 23 February 2026 has been acknowledged. Claims 1-2 and 4-12 are amended. Claims 14-20 are newly presented. Currently, claims 1-20 are pending and considered as set forth.

Response to Amendment

The claim interpretation set forth in the previous Office action is withdrawn as a result of the applicant's amendments to the claims.

Response to Arguments

Applicant's arguments with respect to claims 1-13 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-9, 13 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gustafsson from Applicant-submitted IDS (WO 2019/066693 A1) in view of Friend (US 8918246 B2).

As per claim 1, Gustafsson teaches the limitations of: A maneuvering assistance system that assists maneuvering of a work vehicle including a boom (See at least abstract; An operator assistance system (2) for a vehicle (4) provided with a working equipment (6), the assistance system comprises an image capturing system (8) arranged at said vehicle and/or at said working equipment (6), and capable of capturing parameters related to images, of said working equipment (6) and of the environment outside said vehicle in a predetermined field of view (10). A processing unit (12) is provided configured to receive image related parameter signals (14) from said image capturing system (8) and to process said image related parameter signals (14), and a display unit (16) configured to present images to an operator.), comprising:

a display that is transparent and that is capable of displaying an image (See at least page 13 paragraph 3; The display unit may be a display arranged e.g. at a control unit or in the vehicle. As an alternative, the display unit 14 is a pair of glasses, for example of the type sold under the trademark Hololens. The pair of glasses is structured to present the 3D representation such that the 3D representation is overlaid on the transparent glasses through which a user observes the object.
Various additional information may also be presented as overlaid information and preferably presented such that the additional information is presented close to an illustrated part of the object.);

a processor configured to generate a virtual image indicating a target member including the boom and/or a member that moves together with the boom (See at least page 8 paragraph 4 and page 13 paragraph 3; In one variation, which is schematically illustrated in figure 4, the camera units 18, 20 are mounted at separate mounting positions at the working equipment, e.g. a boom of a crane, and move thereby together with the crane. The definition that the camera units are mounted at different sides of the working equipment should be interpreted broadly, and not being limited to different sides in a horizontal plane. The important aspect is that the view of sights obtained by the camera units mounted at the different sides cover parts that potentially may be occluded by the working equipment during normal use. Thus, the camera units may be mounted at different heights or at other positions where a full overview may be obtained. In one exemplary variation two camera units are mounted at different sides of the working equipment, e.g. at the roof of the operator's cabin, and a third camera unit is mounted at an opposite end of the vehicle.); and

a display controller configured to display the virtual image on the display in a superimposed manner on a landscape visible through the display in a mode in which a user of the display is capable of recognizing a position of the target member (See at least page 13 paragraph 3 and figures 2 and 5).
Gustafsson does not explicitly teach, but Friend teaches, the limitation of: wherein the processor is configured to calculate a three-dimensional coordinate of the target member based on information about a position of the work vehicle, information about a posture of the work vehicle, and information about a shape of the work vehicle, and to computationally generate the virtual image based on the calculated three-dimensional coordinate (See at least figures 6 and 7 and column 8 lines 4-44; Referring to FIG. 6, there is illustrated a computer executable routine 260 in the form of a flow chart that can be performed to generate augmentation contention for display to an operator. The routine 260 can be performed in addition to or instead of the control system 200 described in FIG. 5 and can be performed by an onboard controller or, in some embodiments, by an off-board computer system and the results can be transmitted to the operator display device. In a sensing step 262, the sensors disposed about machine determine the position of a movable work implement with respect to the rest of the machine. That information can be translated into implement position data 264 that is communicated to the controller for further processing. In addition to the implement position data 264, the controller may also receive implement dimensional data 266 that reflects the spatial dimensions of the work implement, for example, in Cartesian coordinates. In a calculating step 268, the implement position data 264 and the implement dimensional data 266 can be combined to determine the three-dimensional spatial volume of the work implement with respect to the machine. A result of the calculating step 268 is that both the position or orientation of the implement and its three-dimensional spatial extensions are known.
In a generation step 270, the results of the calculating step 268 and possibly other information can be used to generate an augmentation overlay. The augmentation overlay may include a visual representation 272 of the work implement in, for example, the form of a wireframe model or shading. The visual representation 272 can further correspond in spatial shape and size to the actual physical work implement when the representation is displayed on the display. The augmentation overlay including the visual representation 272 are communicated to the operator display device and displayed thereon in a display step 274 in such a manner that the visual representation can be superimposed over the operator's view of work implement. Hence, the visual representation 272 augments the operator's perception of the worksite so that the perceived position of the work implement is readily discernable even if the view of the actual work implement is obstructed.)

As per claim 2, Gustafsson teaches the limitations of: wherein the target member is the boom and/or a hook that moves together with the boom, and the display controller configured to display the virtual image on the display at a portion coinciding with the position of the target member (See at least figure 4).

As per claim 3, Gustafsson teaches the limitations of: wherein the virtual image is an image representing a shape of the target member (See at least figure 4).
As per claim 5, Gustafsson teaches the limitations of: wherein the display controller configured to display the virtual image at a position on the display, the position being determined based on information about a position of the user of the display and information about the position of the target member, the position of the target member being calculated based on a posture of the work vehicle and a position of the work vehicle (See at least page 7 paragraph 1 - page 8 paragraph 1).

As per claim 6, Gustafsson teaches the limitations of: wherein the processor is configured to determine the position on the display and to transmit information including the determined position on the display to the display controller (See at least page 8 paragraph 2).

As per claim 7, Gustafsson teaches the limitations of: wherein the target member that the virtual image indicates is a hook that moves together with the boom, the virtual image is an image that has been captured by a camera that captures the hook from a distal end portion of the boom, and the display controller configured to display the virtual image on the display in a superimposed manner on a reference plane set on the landscape (See at least page 10 paragraph 2).

As per claim 8, Gustafsson teaches the limitations of: wherein the processor is configured to set, as the reference plane, a ground plane on the landscape or a horizontal plane including a position of a distal end of the boom (See at least page 8 paragraph 3).

As per claim 9, Gustafsson teaches the limitations of: wherein the processor is configured to set the reference plane based on a direction of visual recognition of the user of the display (See at least figure 4).

As per claim 13, Gustafsson teaches the limitations of: A work vehicle comprising the maneuvering assistance system according to claim 1 (See at least abstract).
As per claim 19, the combination of Gustafsson and Friend teaches the limitations of: wherein the processor is configured to calculate a current position including a position variation due to swinging of a reference member that moves together with the boom based on images captured at unit time intervals by a camera provided at a tip portion of the boom, and to generate the virtual image based on the current position (Friend, see at least figure 7).

As per claim 20, the combination of Gustafsson and Friend teaches the limitations of: wherein the processor is configured to acquire an image including a reference member that moves together with the boom captured by a camera provided at a tip portion of the boom, and to calculate a current position of the reference member based on the image (Friend, see at least figure 6).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Gustafsson and Friend in view of Izumikawa from Applicant-submitted IDS (JP2013113044A).

As per claim 4, the combination of Gustafsson and Friend does not teach, but Izumikawa teaches, the limitations of: wherein the display controller configured to display the virtual image on the display so as to, when the target member is hidden behind an obstacle and is not visible, be superimposed on the obstacle (See at least paragraph 60). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the display of Gustafsson, which displays a virtual target object by superimposing an image on the display, to include that the target member that is hidden behind an obstacle and is not visible be superimposed on the obstacle, as taught by Izumikawa, in order to improve workability (See at least paragraph 60).

Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Gustafsson and Friend in view of Kanda from Applicant-submitted IDS (JP2018095369A).
As per claim 10, the combination of Gustafsson and Friend does not explicitly teach, but Kanda teaches, the limitations of: wherein the virtual image includes a virtual image indicating a trajectory of a distal end portion of the boom, the trajectory corresponding to a posture of the boom, and the display controller configured to display the virtual image indicating the trajectory in a superimposed manner on the reference plane set on the landscape (See at least paragraph 82). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include wherein the virtual image includes a virtual image indicating a trajectory of a distal end portion of the boom, the trajectory corresponding to a posture of the boom, and the control unit displays the virtual image indicating the trajectory in a superimposed manner on the reference plane set on the landscape, as taught by Kanda, in the system of the combination of Gustafsson and Friend, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

As per claim 11, the combination of Gustafsson, Friend and Kanda teaches the limitations of: wherein the display controller configured to display the virtual image on the display at a portion coinciding with a target position for the target member (Kanda, see paragraphs 77-79 and 93).
As per claim 12, the combination of Gustafsson, Friend and Kanda teaches the limitations of: wherein the virtual image includes a virtual image indicating a perpendicular line passing through a distal end portion of the boom, and the display controller configured to display the virtual image indicating the perpendicular line on the display at a portion corresponding to the distal end portion of the boom (Kanda, see at least figures 3 and 5).

Claims 14-15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Gustafsson and Friend in view of Osterhout et al. (hereinafter Osterhout) (US 10180572 B2).

As per claim 14, the combination of Gustafsson and Friend does not explicitly teach, but Osterhout teaches, the limitations of: wherein the display controller is configured to calculate information about a position of the user by recognizing a specific marker included in an image captured by a camera provided on the display (See at least column 93 line 54 - column 94 line 4). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include wherein the display controller is configured to calculate information about a position of the user by recognizing a specific marker included in an image captured by a camera provided on the display, as taught by Osterhout, in the system of the combination of Gustafsson and Friend, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

As per claim 15, the combination of Gustafsson, Friend and Osterhout teaches the limitation of: wherein the display is worn by an operator who remotely operates the work vehicle from outside of the work vehicle, or by a slinger (Osterhout, see at least column 78 line 36 - column 79 line 2).
As per claim 17, the combination of Gustafsson, Friend and Osterhout teaches the limitation of: wherein the display controller is configured to display the virtual image as a front-back inverted image when the reference plane is higher than a position of a horizontal line of sight of the user (Osterhout, see at least column 75 lines 18-37).

Claims 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gustafsson and Friend in view of Motoki et al. (hereinafter Motoki) (WO 2023/100889 A1).

As per claim 18, the combination of Gustafsson and Friend does not teach the limitation of: wherein the display controller is configured to calculate information about a position of the user by Visual SLAM that performs map generation of a surrounding environment and self-position estimation based on image data.

Motoki teaches the limitation of: wherein the display controller is configured to calculate information about a position of the user by Visual SLAM that performs map generation of a surrounding environment and self-position estimation based on image data (See at least page 21, paragraph 3). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include wherein the display controller is configured to calculate information about a position of the user by Visual SLAM that performs map generation of a surrounding environment and self-position estimation based on image data, as taught by Motoki, in the system of the combination of Gustafsson and Friend, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 16, the combination of Gustafsson, Friend and Motoki teaches the limitations of: wherein the virtual image includes a virtual image representing a wire rope suspended from the boom (Motoki, see at least figures 7-8).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IG TAI AN whose telephone number is (571) 270-5110. The examiner can normally be reached M-F: 10:00 AM - 4:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aniss Chad, can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IG TAI AN/
Primary Examiner, Art Unit 3662
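The reply-period rules in the conclusion above reduce to simple calendar arithmetic from the action's mailing date (Apr 01, 2026 per this record). A minimal sketch; the `add_months` helper and the variable names are mine, and real USPTO practice also rolls deadlines landing on weekends or federal holidays forward, which this ignores:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to the target month's length."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

mailed = date(2026, 4, 1)                     # final rejection mailing date
two_month_window = add_months(mailed, 2)      # file by here to get the advisory-action benefit
shortened_statutory = add_months(mailed, 3)   # reply due without extension fees
statutory_maximum = add_months(mailed, 6)     # absolute cutoff, with maximum extensions

print(two_month_window, shortened_statutory, statutory_maximum)
# 2026-06-01 2026-07-01 2026-10-01
```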

Prosecution Timeline

May 15, 2024: Application Filed
Nov 15, 2025: Non-Final Rejection (§103)
Feb 25, 2026: Response Filed
Apr 01, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594902: VEHICLE WITH CONTROLLED HOOD MOVEMENT
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12592171: VEHICULAR DRIVING ASSIST SYSTEM WITH HEAD UP DISPLAY
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12592067: EARLY WARNING METHOD FOR ANTI-COLLISION, VEHICLE MOUNTED DEVICE AND STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12584745: DYNAMIC EASYROUTING UTILIZING ONBOARD SENSORS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12572144: GENERATING ENVIRONMENTAL PARAMETERS BASED ON SENSOR DATA USING MACHINE LEARNING
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 56%
With Interview: 82% (+26.1%)
Median Time to Grant: 3y 8m
PTA Risk: Moderate
Based on 523 resolved cases by this examiner. Grant probability derived from career allow rate.
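The headline figures can be reproduced from the raw career counts shown on this page. A sketch of the (assumed) derivation; treating the interview lift as simply additive is my reading of "derived from career allow rate", not a documented formula:

```python
# Raw career counts from the examiner intelligence section above.
granted, resolved = 292, 523

base = granted / resolved        # career allow rate, 292/523 ≈ 0.558
interview_lift = 0.261           # reported lift on cases with an interview
with_interview = base + interview_lift

print(f"{base:.0%}")             # 56%  (matches "Grant Probability")
print(f"{with_interview:.0%}")   # 82%  (matches "With Interview")
```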
