Prosecution Insights
Last updated: April 19, 2026
Application No. 17/715,397

CONTROL SYSTEM OF CONSTRUCTION MACHINERY AND METHOD OF PROVIDING WORKING GUIDE LINE

Final Rejection — §103
Filed: Apr 07, 2022
Examiner: KINGSLAND, KYLE J
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: HD Hyundai Infracore Co., Ltd.
OA Round: 6 (Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 2y 10m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 77% — above average (164 granted / 212 resolved; +25.4% vs TC avg)
Interview Lift: +6.5% across resolved cases with interview (moderate)
Typical Timeline: 2y 10m average prosecution; 38 applications currently pending
Career History: 250 total applications across all art units

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 24.5% (-15.5% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 212 resolved cases.
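As a quick consistency check on the figures above: if each "vs TC avg" delta is simply the examiner's rate minus the Tech Center average (an assumption; the dashboard does not state how the deltas are computed), every statute implies the same 40.0% baseline. A sketch of that arithmetic:

```python
# Hypothetical check: recover the implied Tech Center average per statute
# from the examiner's rate and the reported delta (TC avg = rate - delta).
stats = {
    "§101": (7.5, -32.5),
    "§103": (45.0, +5.0),
    "§102": (24.5, -15.5),
    "§112": (18.3, -21.7),
}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
# Every statute implies the same 40.0% baseline under this model
```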

Office Action — §103 (Final)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

This Office Action is in response to the amendments and/or arguments filed on November 21, 2025. Claims 1-4, 7-13, and 16-19 are presently pending and are presented for examination.

Response to Arguments

Applicant's arguments, see Pages 8-9, filed November 21, 2025, with respect to the rejection of claims 1-4, 7-13, and 16-19 under 35 U.S.C. 102 and/or 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Morita et al. (JP 2017155563; hereinafter Morita; see attached English translation for citations).

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-4, 7-13, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Sugawara et al. (US 20210209799; already of record) in view of Do et al. (KR 20190060127; hereinafter Do; see attached English translation for citations; already of record) further in view of Morita.
In regards to claim 1, Sugawara discloses of a control system for construction machinery (“the work machine 1 will be suitably referred to as an excavator 1 (Para 0039, Fig 1), the control system comprising: a 3D camera configured to recognize an object located around the construction machinery and capture an image of the object (“The photographing subject SB is three-dimensionally measured by using stereo image data by a pair of cameras 30” (Para 0054), See also Para 0084, 0051); a position information receiving device configured to obtain a reference coordinate corresponding to a position of the construction machinery (“the position detector 23 includes a GPS receiver. The position detector 23 is provided at the swing body 3. The position detector 23 detects an absolute position that is a position of the swing body 3 defined in the global coordinate system. The absolute position of the swing body 3 includes coordinate data in the Xg axis direction, coordinate data in the Yg axis direction, and coordinate data in the Zg axis direction” (Para 0060)); a data processing device configured to recognize at least one relative coordinate of the object relative to the construction machinery from the image, and convert at least one relative coordinates based on at least one reference coordinate to obtain a three-dimensional coordinate (“the stereo measurement unit 102 applies coordinate conversion to the three-dimensional data DG of the photographing subject SB in the camera coordinate system, and calculates three-dimensional data DG of the photographing subject SB in the vehicle body coordinate system. 
Additionally, the stereo measurement unit 102 applies coordinate conversion to the three-dimensional data DG of the photographing subject SB in the vehicle body coordinate system, and calculates three-dimensional data DG of the photographing subject SB in the global coordinate system” (Para 0083), see also 0084, 0082, 0033); and a control device configured to display the image having the three-dimensional coordinates on a screen (“the display control unit 108 causes the display device 26 to display the three-dimensional data DG created by the map data creation unit 107.” (Para 0095) wherein “The three-dimensional data DG includes … three-dimensional data of the vehicle body coordinate system and the like” (Para 0094), See also Para 0199-0200; wherein, it is noted that the claim as written does not require that both the image and the three-dimensional coordinates are displayed on the screen, but merely that the image that is displayed has three-dimensional coordinates according to the limitations recited previously. Regardless, in the interest of compact prosecution, the examiner presented a reference that included displaying an image that included three-dimensional coordinates. Secondly, it is noted that “image” can be given the broadest reasonable interpretation, where it is noted that a reasonable interpretation of displaying an image would include, but is not limited to, displaying map or pixel data that was acquired by a camera or other sensor. The examiner states that “a control device configured to display the image having the three-dimensional coordinates on a screen” is fully disclosed within Sugawara, where “The display control unit 108 causes the display device 26 to display the first image data MR1, second image data ML1, third image data MR2, and fourth image data ML2 acquired by the image data acquisition unit 101. 
Additionally, the display control unit 108 causes the display device 26 to display the disparity image data SG generated by the stereo measurement unit 102. Furthermore, the display control unit 108 causes the display device 26 to display the three-dimensional data DG created by the map data creation unit 107.” (Para 0095), where it is noted that “The map data creation unit 107 creates three-dimensional data DG on the basis of disparity image data SG. The three-dimensional data DG includes disparity image data SG, three-dimensional data of the vehicle body coordinate system and the like, and three-dimensional map data (elevation map data) described later.” (Para 0094). It is noted that a combination of images are collected and displayed (ML1, ML2, MR1, MR2), and that three-dimensional coordinates are also displayed (a portion of the data DG that is displayed). This is additionally supported by Para 0097-0098, which, in part, recites “image data MR photographed by the first camera 30R (30A, 30C) will be suitably referred to as first image data MR (MR1, MR2), and the image data ML photographed by the second camera 30L (30B, 30D) will be suitably referred to as second image data (ML1, ML2). A method of calculating three-dimensional data DG by the first camera 30A and the second camera 30B of the first stereo camera 301 is similar to a method of calculating three-dimensional data DG by the third camera 30C and the fourth camera 30D of the second stereo camera 302.” Therefore image data is displayed that was captured by the cameras. It can be additionally noted that images are displayed of this data within at least Figs 16, 7-8, and 18. Therefore the claim limitation is fully taught )… and wherein the control device is further configured to control at least one movement of the construction machinery. (“As illustrated in FIG. 2, an operator's seat 4S and an operating device 35 are arranged in the operating room 4. 
The operating device 35 is operated by an operator in order to operate the work unit 2 and the swing body 3. The operating device 35 includes a right operating lever 35R and a left operating lever 35L. An operator boarding the operating room 4 operates the operating device 35 to drive the work unit 2 and swing the swing body 3.” (Para 0057) and “The work unit 2 includes: a boom 6 connected to the swing body 3; an arm 7 connected to the boom 6; a bucket 8 connected to the arm 7; a boom cylinder 10 that drives the boom 6; an arm cylinder 11 that drives the arm 7; and a bucket cylinder 12 that drives the bucket 8. Each of the boom cylinder 10, arm cylinder 11, and bucket cylinder 12 is a hydraulic cylinder driven by hydraulic pressure.” (Para 0044). However, Sugawara does not specifically disclose of wherein the image includes a photographic image; and wherein the data processing device adds distance data for a maximum working radius of the construction machinery to the three-dimensional coordinates, and the control device displays the distance data on the screen, wherein the distance data is derived from data provided from the 3D camera; and wherein the control device displays design drawing data of a construction site in association with the three-dimensional coordinate on the screen, wherein the design drawing data comprises necessary materials, and wherein the design drawing data is input from outside the control device. 
Do, in the same field of endeavor, teaches of wherein the image includes a photographic image (“The operation radius of the excavator 100 is provided in the interior of the excavator 100 with the help of the machine guidance and outputted on the monitor on which the image photographed by the camera 200 is outputted.” (Page 3 Para 0004) and “In the mapping step S600, each point is mapped onto the image photographed by the camera 200, and then the maximum working radius P1 and P2, two-thirds of the maximum working radius P3 and P4, (P5, P6) are displayed in a straight line, and the colors of the lines connecting the points are displayed differently according to the sections, so that the operator can more easily confirm the working radius.” (Page 5 Para 0008), see also Fig 5), and wherein the data processing device adds distance data for a maximum working radius of the construction machinery to the three-dimensional coordinates, and the control device displays the distance data on the screen, wherein the distance data is derived from data provided from the 3D camera (“In the mapping step S600, each point is mapped onto the image photographed by the camera 200, and then the maximum working radius P1 and P2, two-thirds of the maximum working radius P3 and P4, (P5, P6) are displayed in a straight line, and the colors of the lines connecting the points are displayed differently according to the sections, so that the operator can more easily confirm the working radius.” (Page 5 Para 0008), “4 and 5, the process of displaying the working radius of the excavator 100 includes a position receiving step S100 for receiving coordinates of the maximum working stage of the excavator 100 and the center position of the excavator 100 from the machine guidance A point for calculating the position coordinates of a point representing the maximum working radius P1, P2 of the excavator 100, 2/3 (P3, P4) of the maximum working radius and 1/3 (P5, P6) A camera coordinate conversion step S400 
for converting the position coordinates of each point on the basis of the position calculation step S200 of the excavator 100 and the camera 200 installed on the excavator 100, (S500) for converting the coordinates of the two-dimensional image taken by the camera 200 to the coordinates of the pixel on the two-dimensional image, and each point P1, P2, P3, P4, P5, And a mapping step (S600) for mapping the image to one image.” (Page 3 Para 0005), “Accordingly, in the present invention, each point on the three-dimensional world coordinate system is converted into coordinates on the pixel of the image photographed by the two-dimensional camera 200, and the two-dimensional camera 200 in the mapping step S600 It is possible to project each point to the exact position of one image.”: (Page 5 Para 0007), see also Fig 5 and Page 4 Para 0006-0007). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the displayed image having three-dimensional coordinates, as taught by Sugawara, to include the image being a photographic image and displaying distance data according to a maximum working radius, as taught by Do, with a reasonable expectation of success in order to allow the operator to more easily confirm the working radius (Do Page 5 Para 0008). However Sugawara in view of Do does not specifically teach of wherein the control device displays design drawing data of a construction site in association with the three-dimensional coordinate on the screen, wherein the design drawing data comprises necessary materials, and wherein the design drawing data is input from outside the control device. 
Morita, in the same field of endeavor, teaches of wherein the control device displays design drawing data of a construction site in association with the three-dimensional coordinate on the screen, wherein the design drawing data comprises necessary materials, and wherein the design drawing data is input from outside the control device (“Note that the three-dimensional design data Dd of the present embodiment includes data of a plan view (for example, see the plan view 400 of FIG. 7) included in the basic design data created in advance and an electronic reference point included in the basic design data. Reference point data indicating reference points such as triangle points and benchmarks, change point data indicating the slope change points of the longitudinal and cross sections of the target design shape included in the basic design data, preliminary surveys and past construction materials This is data created based on the data of the protection target obtained from the above (attribute data such as three-dimensional coordinate data and type). Specifically, the three-dimensional design data Dd according to the present embodiment is data including three-dimensional data of the target design surface and the protection target, and attribute data such as the type of protection target and constituent materials.” (Page 5 Para 0007), see also Page 8 Para 0002). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the control device displaying three-dimensional coordinates on the screen, as taught by Sugawara in view of Do, to include the displaying design drawing data associated with necessary materials of the construction machinery and being inputted from an outside device, as taught by Morita, with a reasonable expectation of success in order to prevent the work machine from damaging the target objects (Morita Page 5 Para 0006, Page 9 Para 0005). 
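Do's camera coordinate conversion and image coordinate conversion steps (S400-S500) amount to projecting each 3D working-radius point onto a pixel of the 2D camera image. A minimal pinhole-projection sketch of that idea; the intrinsics fx, fy, cx, cy and the sample point are invented for illustration and are not values from Do:

```python
def project_point(point_cam, fx, fy, cx, cy):
    """Pinhole projection: map a 3D point in camera coordinates
    (X right, Y down, Z forward, meters) to pixel coordinates (u, v)."""
    X, Y, Z = point_cam
    if Z <= 0:
        raise ValueError("point is behind the camera")
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# Hypothetical: a point on the maximum working radius, 6 m in front of
# the camera, with made-up intrinsics for a 1280x720 image
u, v = project_point((1.0, -0.5, 6.0), fx=800.0, fy=800.0, cx=640.0, cy=360.0)
```

Mapping each radius point this way, then drawing lines between the projected pixels, reproduces the overlay Do describes in the mapping step S600.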
In regards to claim 2, Sugawara in view of Do further in view of Morita teaches of the control system of claim 1, wherein the data processing device includes: an image recognition module configured to extract the relative coordinate of the object from the image (“The stereo measurement unit 102 applies stereoscopic image processing to the first image data MR1 and second image data ML1 acquired by the image data acquisition unit 101, and calculates three-dimensional data DG of a photographing subject SB in the camera coordinate system” (Sugawara Para 0082), see also Sugawara 0083-0084, 0033); and a coordinate conversion module configured to coordinate-transform the relative coordinate of the object based on the reference coordinate to obtain the three-dimensional coordinate (“the stereo measurement unit 102 applies coordinate conversion to the three-dimensional data DG of the photographing subject SB in the camera coordinate system, and calculates three-dimensional data DG of the photographing subject SB in the vehicle body coordinate system. Additionally, the stereo measurement unit 102 applies coordinate conversion to the three-dimensional data DG of the photographing subject SB in the vehicle body coordinate system, and calculates three-dimensional data DG of the photographing subject SB in the global coordinate system” (Sugawara Para 0083), see also Sugawara 0084, 0082, 0033). 
In regards to claim 3, Sugawara in view of Do further in view of Morita teaches of the control system of claim 2, wherein the coordinate conversion module obtains a corrected relative coordinate through axis transformation of the relative coordinate, the axis transformation reflecting rotation angles for X-axis, Y-axis, and Z-axis of the construction machinery, and converts the corrected relative coordinates into an absolute coordinate to obtain the three-dimensional coordinate (“the stereo measurement unit 102 applies coordinate conversion to the three-dimensional data DG of the photographing subject SB in the camera coordinate system, and calculates three-dimensional data DG of the photographing subject SB in the vehicle body coordinate system. Additionally, the stereo measurement unit 102 applies coordinate conversion to the three-dimensional data DG of the photographing subject SB in the vehicle body coordinate system, and calculates three-dimensional data DG of the photographing subject SB in the global coordinate system” (Sugawara Para 0083), see also Sugawara 0084, 0082, 0033; wherein the corrected relative coordinate is disclosed as the vehicle body coordinate system and the absolute coordinate is the global coordinate system). In regards to claim 4, Sugawara in view of Do further in view of Morita teaches of the control system of claim 1, wherein the construction machinery is an excavator (“the work machine 1 will be suitably referred to as an excavator 1 (Sugawara Para 0039, Fig 1)). 
In regards to claim 7, Sugawara in view of Do further in view of Morita teaches of the control system of claim 6, wherein the data processing device extracts a three-dimensional structure or a guide line installed for manual surveying from the image, the control device interlocks the extracted three-dimensional structure or the guide line installed for the manual surveying with the design drawing data, wherein interlocking comprises linking the extracted three-dimensional structure or the guide line installed for the manual surveying with the design drawing data for display (“Furthermore, the guidance image display processing unit 173 is based on the bucket toe coordinates (Xt, Yt, Zt) and the three-dimensional coordinates (Xp, Yp, Zp) of the protection target included in the three-dimensional design data Dd. A second distance Lp, which is a distance between the object and the protection target, is calculated. Specifically, as in the case of the first distance Lf, Lp = ((Xp−Xt)² + (Yp−Yt)² + (Zp−Zt)²)^(1/2) is obtained from the formula for the distance between two points.” (Page 7 Para 0005), “In step S110, in the guidance image display processing unit 173, the first distance Lf and the second distance Lp, the bucket toe coordinates (Xt, Yt, Zt), the body orientation D and the body coordinates (Xm, Ym, Zm), Guidance image data is generated based on the three-dimensional design data Dd. Then, an image display signal for displaying the guidance image is output to the display input device 70d based on the generated guidance image data, and the process proceeds to step S112.” (Page 10 Para 0010), see also Morita Page 10 Para 0008-0009 and Fig 10 and Sugawara Para 0084, 0203, 0095, and Figs 7-8 and 18). The motivation for combining Sugawara, Do, and Morita is the same as that recited for claim 1 above. 
In regards to claim 8, Sugawara in view of Do further in view of Morita teaches of the control system of claim 2, wherein the image recognition module includes an image determiner configured to classify the image and store the image as a data set and recognize the object in the image using an algorithm previously learned from the data set (“The stereo correspondence search represents processing to search the first image data MR and the second image data ML respectively for a pixel PXr and a pixel PXl where the same measurement point P is projected“ (Sugawara Para 0122) and “Additionally, in the following description, a state in which search for the corresponding pixel PXl has been successfully performed as a result of the search for the corresponding pixel PXl with respect to the focused pixel PXr will be suitably referred to as successful search. Additionally, a state in which the search for the corresponding pixel PXl has failed as a result of the search for the corresponding pixel PXl with respect to the focused pixel PXr will be suitably referred to as failed search” (Sugawara Para 0126), where the search being successful or failed is the classifying of the image, see also Sugawara Para 0127-0128, 0123-0125, and Figs 7-8; wherein Sugawara discloses of “The stereo measurement unit 102 applies stereoscopic image processing to the first image data MR and the second image data ML to generate disparity image data SG of the photographing subject SB. Specifically, the stereo measurement unit 102 executes stereo correspondence search on the first image data MR and the second image data ML. The stereo correspondence search represents processing to search the first image data MR and the second image data ML respectively for a pixel PXr and a pixel PXl where the same measurement point P is projected.” (Sugawara Para 0122), where it is noted that the measurement point P is a part of the photographing subject SB, as disclosed in Para 0118 and 0106. 
Para 0123 further discloses that “The stereo measurement unit 102 uses the first image data MR as base image data and the second image data ML as referential image data. As illustrated in FIG. 8, the stereo measurement unit 102 searches the second image data ML for a pixel PXl including a same projection point of the measurement point P with respect to the pixel PXr including a projection point of the measurement point P in the first image data MR. In the present embodiment, the stereo measurement unit 102 searches a plurality of pixels PXl existing on the epipolar line of the second image data ML for the pixel PMl including the projection point of the measurement point P.”, where this discloses that the first image is a base image that is used to make comparisons to the second image. The second image is searched for pixel PXl, which corresponds to the already determined point PXr in the first image data. Therefore, the search is completed by an algorithm that is affected by the first image before searching for the point within the second image, therefore the algorithm had learned from the first data set what point to search for in the second image, therefore the algorithm involved previously learning from a data set. Para 0125-0128 additionally support this). 
In regards to claim 9, Sugawara in view of Do further in view of Morita teaches of the control system of claim 8, wherein the image determiner extracts a three-dimensional structure ((“The stereo measurement unit 102 generates disparity image data SG by applying the stereoscopic image processing to two pieces of image data MR (MR1, MR2) and ML (ML1, ML2) of the photographing subject SB photographed by the two different cameras 30, and obtains three-dimensional data DG by arithmetic processing” (Para 0084), See also Fig 9 and Para 0203) or a guide line installed for manual surveying as a feature point from the data set (“The display control unit 108 causes the display device 26 to display the first image data MR1, second image data ML1, …. Additionally, the display control unit 108 causes the display device 26 to display the disparity image data SG generated by the stereo measurement unit 102. Furthermore, the display control unit 108 causes the display device 26 to display the three-dimensional data DG created by the map data creation unit 107” (Sugawara Para 0095), See also Sugawara Figs 7-8 and 18). In regards to claim 10, the claim recites analogous limitations to the combination of claims 1-2 and 8-9, and is therefore rejected on the same premise. In regards to claims 11-13 and 16-17, the claims recite analogous limitations to claims 1, 3-4, and 8-9, respectively, and are therefore rejected on the same premise. Claim(s) 18-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sugawara in view of Do further in view of Morita, as applied to claim 1 above, and further in view of Gong (US 20210027071). In regards to claim 18, Sugawara in view of Do further in view of Morita teaches of the control system of claim 8. However, Sugawara in view of Do further in view of Morita do not specifically teach of wherein the algorithm includes a deep learning-based semantic segmentation technique. 
Gong, in the same field of endeavor, teaches of wherein the algorithm includes a deep learning-based semantic segmentation technique (“Moreover, supervised deep learning operations may be performed for semantic segmentation by inputting RGB image data into a convolutional encoder/decoder that may include multiple stages of convolution, batch normalization (which does not only apply to segmentation but applies to other networks as well), ReLUs and pooling, followed multiple phases of convolution, batch normalization and ReLUs with upsampling. The resulting data may then be processed by application of a softmax function to provide output data with segmentation labelling for each pixel. Thus, disclosed systems and methodologies for transforming image data may preserve real-world information for successful application of the above-outlined techniques.” (Para 0054)). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the algorithm for recognizing an object in an image, as taught by Sugawara in view of Do in view of Morita, to include a deep-learning semantic segmentation technique, as taught by Gong, with a reasonable expectation of success in order to allow proper segmentation labeling for each pixel and preserve real-world information (Gong Para 0054).

In regards to claim 19, the claim recites analogous limitations to claim 18, and is therefore rejected on the same premise.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lee et al. (US 20210019563) discloses design data of a vehicle, including material information of a vehicle, and overlaying the design data with the vehicle data.

Applicant's amendment necessitated the new ground of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
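The final step in the passage Gong quotes (a softmax over per-pixel class scores, yielding a segmentation label for each pixel) reduces to an argmax once the network has produced logits. A toy sketch of just that labelling stage; the encoder/decoder itself is omitted and the logit values are invented:

```python
import math

def softmax(logits):
    """Numerically stable softmax over one pixel's class scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def label_pixels(logit_map):
    """Per-pixel semantic labels: the class with the highest softmax
    probability at each pixel (input shape: rows x cols x classes)."""
    labels = []
    for row in logit_map:
        out = []
        for px in row:
            probs = softmax(px)
            out.append(probs.index(max(probs)))
        labels.append(out)
    return labels

# Hypothetical 1x2 "image" with 3-class logits per pixel
labels = label_pixels([[[0.1, 2.0, -1.0], [3.0, 0.2, 0.5]]])
```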
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kyle J Kingsland whose telephone number is (571) 272-3268. The examiner can normally be reached Mon-Fri 8:00-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Flynn, can be reached at (571) 272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE J KINGSLAND/
Examiner, Art Unit 3663

Prosecution Timeline

Apr 07, 2022
Application Filed
Mar 08, 2024
Non-Final Rejection — §103
Jun 11, 2024
Response Filed
Jul 02, 2024
Final Rejection — §103
Sep 05, 2024
Interview Requested
Sep 23, 2024
Examiner Interview Summary
Sep 23, 2024
Applicant Interview (Telephonic)
Sep 27, 2024
Request for Continued Examination
Oct 06, 2024
Response after Non-Final Action
Dec 10, 2024
Non-Final Rejection — §103
Feb 27, 2025
Response Filed
Mar 13, 2025
Final Rejection — §103
May 19, 2025
Response after Non-Final Action
Jun 24, 2025
Request for Continued Examination
Jun 30, 2025
Response after Non-Final Action
Aug 26, 2025
Non-Final Rejection — §103
Nov 17, 2025
Applicant Interview (Telephonic)
Nov 17, 2025
Examiner Interview Summary
Nov 21, 2025
Response Filed
Dec 15, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600240
METHOD FOR OPERATING A BRAKE CONTROL SYSTEM, BRAKE CONTROL SYSTEM, COMPUTER PROGRAM, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12595699
VEHICLE INCLUDING A CAP THAT IS AUTOMATICALLY SEPARATED FROM A VEHICLE BODY
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12589784
SYSTEM AND METHOD FOR A VIRTUAL APPROACH SIGNAL
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12576727
DIFFERENTIAL ELECTRICAL DRIVE ARRANGEMENT FOR HEAVY DUTY VEHICLES
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12570246
MULTI-STANCE AERIAL DEVICE CONTROL AND DISPLAY
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 77% (84% with interview, +6.5%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 212 resolved cases by this examiner. Grant probability derived from career allow rate.
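The headline numbers follow from the examiner's career record. Assuming the interview lift is an additive percentage-point adjustment (the dashboard does not state its model), the arithmetic is:

```python
# Career allow rate and interview-adjusted grant probability, recomputed
# from the figures shown above (additive-lift model is an assumption).
granted, resolved = 164, 212
career_allow_rate = granted / resolved              # ~0.774, shown as 77%
interview_lift = 0.065                              # +6.5 percentage points
with_interview = career_allow_rate + interview_lift # ~0.839, shown as 84%
```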

Free tier: 3 strategy analyses per month