Prosecution Insights
Last updated: April 19, 2026
Application No. 18/565,467

SENSING DEVICE AND VEHICLE CONTROL DEVICE

Status: Final Rejection (§103)
Filed: Nov 29, 2023
Examiner: LEWANDROSKI, SARA J
Art Unit: 3661
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Hitachi Astemo, Ltd.
OA Round: 2 (Final)

Grant Probability: 81% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 81% (470 granted / 582 resolved; +28.8% vs TC avg, above average)
Interview Lift: +9.9% for resolved cases with an interview (moderate, roughly +10%)
Typical Timeline: 2y 10m average prosecution; 40 applications currently pending
Career History: 622 total applications across all art units
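
The headline figures above are simple ratios of the career counts. Here is a minimal sketch of the arithmetic, assuming the dashboard's unpublished model reduces to these ratios (variable names are ours, not the vendor's):

```python
# Reproducing the headline examiner statistics shown above. Values come from
# this page; the dashboard's exact methodology is not published, so treat this
# as an illustration of the arithmetic, not the vendor's model.

granted = 470
resolved = 582

allow_rate = granted / resolved                 # 0.8076... -> shown as "81%"
interview_lift = 0.099                          # +9.9 points, per the panel
with_interview = allow_rate + interview_lift    # 0.9066... -> shown as "91%"
tc_avg = allow_rate - 0.288                     # panel reports +28.8% vs TC avg

print(f"Career allow rate:  {allow_rate:.1%}")
print(f"With interview:     {with_interview:.1%}")
print(f"Implied TC average: {tc_avg:.1%}")
```

470/582 rounds to the 81% shown, and adding the reported 9.9-point lift reproduces the 91% with-interview figure.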

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 20.7% (-19.3% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Comparisons are against a Tech Center average estimate; based on career data from 582 resolved cases.
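
Each delta above is the examiner's rate minus the Tech Center average, so the TC estimate can be recovered by subtraction. A small sketch, assuming that reading of the panel:

```python
# Implied Tech Center averages from the statute-specific panel above.
# Each entry is (examiner rate, delta vs TC avg); since delta = examiner - avg,
# subtracting the delta recovers the TC average estimate.

rates = {
    "§101": (0.057, -0.343),
    "§103": (0.515, +0.115),
    "§102": (0.207, -0.193),
    "§112": (0.195, -0.205),
}

for statute, (examiner, delta) in rates.items():
    print(f"{statute}: examiner {examiner:.1%}, implied TC avg {examiner - delta:.1%}")
```

Here every statute's implied TC average works out to 40.0%, suggesting the panel compares against a single pooled estimate.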

Office Action

§103
DETAILED ACTION

This Final Office Action is in response to amendments filed 12/15/2025. Claims 1-11 have been amended. Claims 1-11 are pending.

Response to Arguments

Claim Interpretation
The limitations interpreted as 35 U.S.C. 112(f) limitations have been removed in the amendments filed 12/15/2025.

Claim Objections
Due to the amendments filed 12/15/2025, the objection to claim 10 has been withdrawn.

Rejections under 35 U.S.C. 101
Due to the amendments filed 12/15/2025, the rejections of claims 1-11 under 35 U.S.C. 101 have been withdrawn.

Rejections under 35 U.S.C. 112
Due to the amendments filed 12/15/2025, the rejections of claims 1-11 under 35 U.S.C. 112(b) have been withdrawn.

Rejections under 35 U.S.C. 102 and 103
Applicant's arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Specifically, upon further search following the amendments filed 12/15/2025, new references are applied in the rejections below.

Key to Interpreting this Office Action
For readability, all claim language has been underlined. Citations from prior art are provided at the end of each limitation in parentheses. Any further explanations that were deemed necessary by the Examiner are provided at the end of each claim limitation.

Claim Objections

Claims 3, 4, and 11 are objected to because of the following informalities: Claim 3 recites the limitation of "the point group information of the road surface"; however, claim 1, from which claim 3 depends, simply recites "point group information." It is unclear if the "point group information of the road surface" of claim 3 is intended to be distinct from the "point group information" of claim 1. Claim 4 is objected to for similar reasons. Claim 11 recites the limitation of "a vehicle is controlled by using based on the estimated relative posture" (emphasis added) in the last line of claim 11. In light of the amendments, the limitation of "by using" should be removed. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 5, 6, and 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Torikura et al. (US 2019/0362160 A1), hereinafter Torikura, in view of Adachi et al. (US 5,058,017), hereinafter Adachi.

Claim 1

Torikura discloses the claimed system (see Figure 1) comprising: a sensing device comprising: a first set of sensors configured to obtain first data corresponding to a first region in a periphery of a host vehicle from information of a first common image pickup region (see ¶0019, with respect to Figure 2, regarding the imaging areas of cameras 20a to 20d, which are used for capturing images around vehicle 2, indicated by dotted lines in Figure 2, with respective imaging areas that include overlap areas 3, 4, 5, and 6); a second set of sensors configured to obtain second data corresponding to a second region different from the first region from information of a second common image pickup region (see ¶0019, with respect to Figure 2, as discussed above).

Various combinations of "sets of sensors" may be reasonably taught by Torikura. Specifically, Torikura may be applied to the different "set of sensors" depending on the estimation of pitch or roll angle, as described in ¶0035, with respect to the example in Figure 5. For example, in estimating a pitch angle, the "first region of a first common pickup region" may be taught by overlap area 3 or 4, such that the "first set of sensors" are taught by sensors 20a and 20c or 20a and 20d in Figure 2, and the "second region of a second common pickup region" may be taught by overlap area 5 or 6, such that the "second set of sensors" are taught by sensors 20b and 20c or 20b and 20d in Figure 2.

Torikura further discloses that the sensing device is configured to: integrate a geometric relationship representing a predefined positional and postural relationship between sensors in the first set of sensors and the second set of sensors with coordinates of the first data obtained in the first region and the second data obtained in the second region to generate a set of integrated coordinates (see ¶0024, regarding that images captured by cameras 20a to 20d are converted into viewpoints using a conversion data set based on camera parameters obtained by digitalizing the mounting positions of cameras 20a to 20d on vehicle 2 and the mounting angles of cameras 20a to 20d in triaxial directions, i.e., longitudinal, lateral, and vertical directions of vehicle 2, where the mounting positions and attitudes of cameras 20a to 20d are preset, as described in ¶0020); and estimate a relative posture between each sensor in the first set of sensors and the second set of sensors and a road surface including a pitch angle and a roll angle of the host vehicle based on point group information calculated from the set of integrated coordinates (see ¶0030-0031, with respect to step S112 of Figure 3, regarding estimating a vehicle turning angle, defined as an attitude of vehicle 2, including a pitch angle and roll angle, based on positions of the feature points extracted in steps S104 to S110, described in ¶0026-0029 as at least one feature point from each bird's-eye-view image obtained by converting the plurality of images into viewpoints; see also ¶0032-0035, with respect to the example in Figure 5, regarding the estimation of pitch angle and roll angle based on the amounts of offset between feature points in different overlap areas).

While Torikura discloses that the inventive turning angle estimation device is an alternative to the gyro sensor for detecting an attitude of a vehicle (see ¶0049), Torikura does not disclose the particular use of the estimated vehicle attitude, such that the system of Torikura is for controlling a vehicle according to road surface gradient and includes a vehicle control device configured to control at least one of a steering device, a driving device, a braking device, and an active suspension of the host vehicle based on the relative posture between each sensor in the first set of sensors and the second set of sensors and the road surface.

However, vehicle control commonly uses estimated vehicle parameters such as "relative posture," defined as a "pitch angle and roll angle of the host vehicle," in light of Adachi. Specifically, Adachi teaches vehicle 1 (similar to the host vehicle taught by Torikura) as provided with a pitch angle sensor 2 and roll angle sensor 3, defined as a gyroscope in col. 3, lines 1-6 (similar to gyro sensor 21, defined as the alternative embodiment of the first set of sensors and second set of sensors of Torikura in ¶0051). Adachi further teaches a vehicle control device configured to control an active suspension of vehicle 1 based on pitching and rolling angles of the vehicle (similar to the relative posture between each sensor in the first set of sensors and the second set of sensors and the road surface of Torikura, defined as "including a pitch angle and a roll angle of the host vehicle" in claim 1) (see col. 1, line 46-col. 2, line 7); therefore, the invention of Adachi is for controlling a vehicle according to a road surface gradient, as depicted in Figures 3A and 3B, with respective control operations described in col. 4, line 38-col. 5, line 55.

While the "relative posture" of Torikura is estimated using images from external cameras of the vehicle, Torikura further discloses that a gyro sensor may alternatively output the "relative posture" in at least ¶0051; therefore, it would be reasonable to use the "relative posture" of Torikura in the active suspension control of Adachi, in light of the known impairment of gyro sensors under particular conditions described in ¶0045 of Torikura.

Since the systems of Torikura and Adachi are directed to the same purpose, i.e., determining pitch and roll angles of a vehicle, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Torikura to be for controlling a vehicle according to road surface gradient, so as to include a vehicle control device configured to control at least one of a steering device, a driving device, a braking device, and an active suspension of the host vehicle based on the relative posture between each sensor in the first set of sensors and the second set of sensors and the road surface, in light of Adachi, with the predictable result of improving the feeling of the ride and handling of the vehicle efficiently (col. 1, lines 46-51 of Adachi).
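To make the Torikura mechanism relied on above more concrete: in a bird's-eye-view projection, an unmodeled pitch shifts ground features by an amount that grows with distance, so feature-point offsets in the overlap areas encode the angle. The sketch below is the editor's simplified single-camera model (flat road, invented numbers), not Torikura's actual algorithm, which compares offsets across opposed overlap areas to separate pitch from roll:

```python
# Editor's toy model of the estimation cited from Torikura ¶0032-0035: a pitch
# error displaces the bird's-eye-view position of a ground feature, and the
# displacement can be inverted to recover the angle. All names and values are
# assumptions for illustration only.

import math

def birdseye_offset(theta, d, h):
    """Longitudinal shift of a ground point at distance d (camera height h)
    when the camera pitches by theta but the projection assumes zero pitch."""
    alpha = math.atan2(h, d)               # depression angle of the viewing ray
    return d - h / math.tan(alpha + theta)

def pitch_from_offset(offset, d, h):
    """Invert the projection: the pitch angle that explains an observed shift."""
    return math.atan2(h, d - offset) - math.atan2(h, d)

h = 0.8                                    # assumed camera mounting height, m
true_pitch = math.radians(1.5)
for d in (2.0, 3.0, 5.0):                  # feature points at several distances
    shift = birdseye_offset(true_pitch, d, h)
    est = pitch_from_offset(shift, d, h)
    print(f"d={d} m: shift={shift:.3f} m -> pitch={math.degrees(est):.2f} deg")
```

Note how the shift grows rapidly with distance, which is why the offsets in the overlap areas are a usable observable for attitude estimation.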
Claim 5

Torikura further discloses that the sensing device is further configured to measure a distance while referring to a corrected value calculated from the estimated relative posture (see ¶0035, regarding that the turning angle estimation device 1 compares the amount of offset between feature points in the overlap areas for estimating the pitch and roll angles, where the process is iterated at a predetermined control cycle, as described in ¶0025, and the positional offset between feature points is corrected in step S100, attributed to the offset of the camera parameters from the actual mounting positions and attitudes of cameras 20a and 20b and the imbalance of the center of gravity of vehicle 2, as described in ¶0027). The limitation of "distance" is not defined with respect to a particular dimension or surface; therefore, the offset between detected feature points of Torikura may reasonably be applied to teach a "distance." The measured "distance" is not used in any claimed operations.

Claim 6

Torikura further discloses that the sensing device is further configured to separate road surface point group information and object point group information representing an object present on a road surface (see ¶0032-0035, with respect to Figure 5, regarding that the division lines of the road drawn along the advancing direction are detected as feature points of the overlap areas).

Claim 8

Torikura further discloses that the sensing device is further configured to integrate, in time series, outputs associated with the first region, and integrate, in time series, outputs associated with the second region (see ¶0036-0038, with respect to step S114 of Figure 3, regarding that an amount of time-series information regarding the vehicle turning angle is accumulated, where the vehicle turning angle is estimated based on overlap areas, as described in ¶0026-0031, with respect to Figure 3). The limitations of "outputs associated with the first region" and "outputs associated with the second region" may be broadly interpreted as values estimated based on the regions, such as the estimated turning angle based on the overlap areas of Torikura. The integrated "outputs" are not used in any claimed operations.

Claim 9

Torikura further discloses that the first set of sensors is configured to obtain information on the first region comprising a left side in a traveling direction of the host vehicle, and the second set of sensors is configured to obtain information on the second region comprising a right side in the traveling direction of the host vehicle (see ¶0019, with respect to Figure 2, regarding the imaging areas of cameras 20a to 20d, which include overlap areas 3, 4, 5, and 6). As discussed in the rejection of claim 1, Torikura may be applied to the different "set of sensors" depending on the estimation of pitch or roll angle, as described in ¶0035, with respect to the example in Figure 5. In this case, the estimation of a roll angle of Torikura is applied to teach the "first set of sensors" as sensors that correspond to overlap areas 3 or 5, depicted as a "left side in a traveling direction of the host vehicle" in Figure 2, and the "second set of sensors" as sensors that correspond to overlap areas 4 or 6, depicted as a "right side in the traveling direction of the host vehicle" in Figure 2.

Claim 10

Torikura further discloses that the sensing device is further configured to: receive pieces of information acquired by six sensors (see ¶0019-0021, with respect to Figures 1 and 2, regarding that cameras 20a to 20d capture images around the vehicle, gyro sensor 21 detects a turning angular velocity of vehicle 2, and temperature sensor 22 measures an ambient temperature of vehicle 2), and estimate a relative posture between each sensor and a road surface including a pitch angle of the host vehicle and a roll angle of the host vehicle based on point group information calculated by integrating coordinates of information observed in a common image pickup region of a combination of two sensors of the six sensors (see ¶0030-0031, with respect to step S112 of Figure 3, regarding estimating a vehicle turning angle, defined as an attitude of vehicle 2, including a pitch angle and roll angle, based on positions of the feature points extracted in steps S104 to S110, described in ¶0026-0029 as at least one feature point from each bird's-eye-view image obtained by converting the plurality of images into viewpoints; the images captured by cameras 20a to 20d are converted into viewpoints using a conversion data set based on camera parameters obtained by digitalizing the mounting positions of cameras 20a to 20d on vehicle 2 and the mounting angles of cameras 20a to 20d in triaxial directions, i.e., longitudinal, lateral, and vertical directions of vehicle 2, as described in ¶0024, and the mounting positions and attitudes of cameras 20a to 20d are preset, as described in ¶0020). As discussed in ¶0028-0030 of Torikura, a particular overlap area (i.e., "common image pickup region") requires two cameras, e.g., overlap area 3 is defined by images captured from cameras 20a and 20c.

Claim 11

The combination of Torikura and Adachi teaches the claimed vehicle control device configured to perform the steps discussed in the rejection of claim 1.
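The "viewpoint conversion" cited in the claim 1 and claim 10 rejections (Torikura ¶0024) amounts to projecting each camera's viewing rays onto the road plane using the preset mounting position and triaxial mounting angles. A minimal sketch under the usual flat-road assumption; the frame conventions and all numbers are the editor's, not Torikura's:

```python
# Editor's sketch of the viewpoint conversion in Torikura ¶0024: a camera-frame
# viewing ray is rotated into the vehicle frame using the camera's preset
# triaxial mounting angles, then intersected with the road plane z = 0.
# Conventions assumed here: vehicle frame x forward, y left, z up; the camera
# boresight lies along +x when all mounting angles are zero.

import numpy as np

def mounting_rotation(yaw, pitch, roll):
    """Vehicle-from-camera rotation built from triaxial mounting angles (rad)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])  # yaw about z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch about y (+ tilts boresight down)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll about x
    return Rz @ Ry @ Rx

def to_ground(ray_cam, R, t):
    """Map a camera-frame ray to (x, y) road coordinates in the vehicle frame.

    R, t: preset mounting rotation and mounting position of the camera.
    """
    ray = R @ ray_cam                   # ray direction in the vehicle frame
    s = -t[2] / ray[2]                  # scale at which the ray reaches z = 0
    return (t + s * ray)[:2]

# Hypothetical front camera: 0.8 m up, 2.0 m forward, tilted 30 deg downward.
R = mounting_rotation(0.0, np.radians(30.0), 0.0)
t = np.array([2.0, 0.0, 0.8])
print(to_ground(np.array([1.0, 0.0, 0.0]), R, t))   # boresight hits ~(3.39, 0)
```

Running the same mapping for every camera places all four images in one vehicle-centered ground frame, which is what makes the overlap-area comparisons in the rejections above possible.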
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Torikura in view of Adachi, and in further view of Mueller (US 2014/0300738 A1), hereinafter Mueller.

Claim 2

While Torikura further discloses that the imaging areas of cameras 20a to 20d include overlap areas 3, 4, 5, and 6 (see ¶0019, with respect to Figure 2), the positions of the optical axes of cameras 20a to 20d are not disclosed; therefore, Torikura does not further disclose that the first common image pickup region is disposed at a position where optical axes of the first set of sensors intersect with each other and the second common image pickup region is disposed at a position where the optical axes of the second set of sensors intersect with each other.

However, Torikura further discloses that modifications may be applied, as long as the overlap areas are formed (see ¶0054); therefore, it would be reasonable to modify the arrangement of cameras of Torikura, such that the cameras used to image overlapping areas are positioned with optical axes that intersect one another, in light of Mueller. Specifically, Mueller teaches a known arrangement of cameras (similar to the first set of sensors or second set of sensors taught by Torikura) for imaging an outside surrounding area (similar to the first common image pickup region or second common image pickup region taught by Torikura) of the vehicle where optical axes of the cameras intersect with each other (see ¶0011-0013). While the particular overlap regions imaged by the sets of sensors taught by Torikura are not taught by Mueller, it is the known arrangement of similar vehicle sensors with optical axes that intersect to image an overlapping region that is supplied by Mueller; therefore, the particular quantity of sensors or the regions imaged by the sensors do not influence this combination.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the configuration of sensors of Torikura, such that the first common image pickup region is disposed at a position where optical axes of the first set of sensors intersect with each other and the second common image pickup region is disposed at a position where the optical axes of the second set of sensors intersect with each other, in light of Mueller, because the modification would have been "obvious to try": choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success. Specifically, given that only the overlap areas are used to estimate vehicle posture in Torikura (see ¶0035) and Torikura supports modifications of the system as long as the overlap areas are formed (see ¶0054), one of ordinary skill in the art would have reasonably selected an orientation of the "set of sensors" in which their optical axes intersect, which would generate the same results as an intersection of their fields of view, as depicted in Figure 2 of Torikura.
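Whether two optical axes "intersect with each other," as claim 2 requires, is a well-defined 3D test: skew lines have a nonzero closest-approach distance. Neither reference gives coordinates, so the placements below are invented purely to illustrate the check:

```python
# Editor's illustration of the claim 2 geometry: model each optical axis as a
# 3D line p + t*d and test whether the two lines actually meet. The camera
# positions and directions are hypothetical.

import numpy as np

def closest_approach(p1, d1, p2, d2):
    """Minimum distance between the lines p1 + t*d1 and p2 + t*d2."""
    n = np.cross(d1, d2)
    if np.allclose(n, 0.0):            # parallel axes can never intersect
        return float(np.linalg.norm(np.cross(p2 - p1, d1)) / np.linalg.norm(d1))
    return float(abs(np.dot(p2 - p1, n)) / np.linalg.norm(n))

# Two cameras angled so their axes converge at a point ahead of the vehicle:
p1, d1 = np.array([0.5, 1.0, 0.8]), np.array([1.0, -1.0, -0.3])
p2, d2 = np.array([0.5, -1.0, 0.8]), np.array([1.0, 1.0, -0.3])
print(closest_approach(p1, d1, p2, d2))   # 0.0 -> the axes intersect
```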
Claims 3, 4, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Torikura in view of Adachi, and in further view of Tsunashima (US 2020/0320728 A1), hereinafter Tsunashima.

Claim 3

Torikura relies on a bird's-eye-view conversion and the detection of lines on a road surface (see ¶0026) and thus does not further disclose that the sensing device is further configured to collate the point group information of the road surface with a road surface model. However, the collated "point group information" is not used in any claimed operations; therefore, it would be obvious to incorporate the known technique of collating similar point group information with a road surface model, in light of Tsunashima.

Specifically, Tsunashima teaches imaging control system 501, with cameras 521 arranged on the front, rear, and left and right sides of vehicle 511, described in ¶0237 with respect to Figure 25 (similar to the sensing device taught by Torikura), which collates observation points (similar to the point group information taught by Torikura) with a road surface model (see ¶0218, regarding that the road surface 551 on which the vehicle travels is used as a basis plane, where the distances to observation points are calculated with respect to the road surface 551). In Torikura, cameras installed on an exterior surface of a vehicle acquire images of respective common regions of a road surface in order to estimate vehicle attitude. In Tsunashima, cameras installed on an exterior surface of a vehicle acquire images of respective common regions of a road surface in order to measure distance. However, it is the technique of collating points obtained from cameras that image a common region of a road surface with a road surface model that is supplied by Tsunashima; therefore, the subsequent image processing does not influence this combination.

Since the systems of Torikura and Tsunashima are directed to the same purpose, i.e., providing multiple cameras on a vehicle with overlapping views, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensing device of Torikura to further collate the point group information of the road surface with a road surface model, in the same manner that the observation points of Tsunashima are referenced to a basis plane defined as the road surface, with the predictable result of applying a coordinate system unaffected by the angle of the road surface on which the vehicle travels (¶0218 of Tsunashima).

Claim 4

Tsunashima further teaches that imaging control system 501 (similar to the sensing device taught by Torikura) is further configured to fit observation points (similar to the point group information taught by Torikura) to the road surface model such that an error between the observation points and the road surface model decreases (see ¶0194-0196, with respect to Figure 19, regarding that the coordinate system of the stereo system is defined with respect to the road surface 551, such that target points to be captured are provided on road surface 551). The basis plane (or road surface 551) acts as a geometric constraint for the observation points captured by the cameras of Tsunashima and thus inherently decreases an error between the observation points and the basis plane.
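Claims 3 and 4 turn on collating a road point group with a road surface model so that the fit error decreases. Tsunashima's basis plane is fixed in advance; a least-squares plane fit is one conventional way to realize the same collation, sketched here with synthetic data (the model form, names, and numbers are the editor's assumptions):

```python
# Editor's sketch of the collation in claims 3-4: fit road-surface points to a
# planar road model z = a*x + b*y + c by least squares, so the residual error
# decreases, then read the road's pitch/roll back out of the fitted slopes.
# Neither Torikura nor Tsunashima discloses this code.

import numpy as np

def fit_road_plane(points):
    """Least-squares plane z = a*x + b*y + c through an (N, 3) point group."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    rmse = float(np.sqrt(np.mean((points[:, 2] - A @ coeffs) ** 2)))
    return coeffs, rmse

# Noisy synthetic points on a 2-degree longitudinal grade:
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(200, 2))
z = np.tan(np.radians(2.0)) * xy[:, 0] + rng.normal(0.0, 0.01, 200)
(a, b, c), rmse = fit_road_plane(np.column_stack([xy, z]))
print(f"pitch ~ {np.degrees(np.arctan(a)):.2f} deg, "
      f"roll ~ {np.degrees(np.arctan(b)):.2f} deg, rmse {rmse:.4f} m")
```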
Claim 7

Torikura relies on a bird's-eye-view conversion and the detection of lines on a road surface (see ¶0026) and thus does not further disclose that the sensing device is further configured to perform collation with a road surface model such that an error between the road surface point group information and a road surface decreases, and the object point group information is vertically arranged. However, the "collation" does not influence any claimed operations; therefore, it would be obvious to incorporate the known technique of performing collation with a road surface model such that an error between similar road point group information and a road surface decreases, where similar object point group information is vertically arranged, in light of Tsunashima.

Specifically, Tsunashima teaches that imaging control system 501 (similar to the sensing device taught by Torikura) is configured to perform collation with a road surface model such that an error between target points, defined as a white line on road surface 551 in ¶0230 (similar to the road surface point group information taught by Torikura), and a road surface decreases (see ¶0194-0196, with respect to Figure 19, regarding that the coordinate system of the stereo system is defined with respect to the road surface 551, such that target points to be captured are provided on road surface 551). The basis plane (or road surface 551) acts as a geometric constraint for the observation points captured by the cameras of Tsunashima and thus inherently decreases an error between the observation points and the basis plane.

Torikura further discloses that the object point group information is vertically arranged (see Figure 5, depicting the distinction of the "object point group information" from the white division lines of the road in a "vertical" arrangement in the bird's-eye-view images). The limitation of "vertically arranged" is not defined with respect to a particular feature and may be broadly interpreted as a vertical arrangement in captured images. The "object point group information" is not used in any claimed operations.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensing device of Torikura to be further configured to perform collation with a road surface model such that an error between the road surface point group information and a road surface decreases, in the same manner that the observation points of Tsunashima are referenced to a basis plane defined as the road surface, with the predictable result of applying a coordinate system unaffected by the angle of the road surface on which the vehicle travels (¶0218 of Tsunashima).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Specifically, Aharony et al. (US 2016/0046290 A1) teaches estimating pitch and roll rates of a vehicle by tracking a set of feature points in one or more images acquired from a plurality of image capturing devices (see ¶0112); Iida (US 2017/0206425 A1) teaches the estimation of a road surface inclination based on a temporal change in a vehicle peripheral image captured by a camera (see ¶0090-0091); Inoue et al. (US 2018/0165833 A1) teaches mapping lines extracted from images captured by a stereo camera to a three-dimensional coordinate space (see abstract); Soda et al. (translation of JP 2008-309519 A) teaches detecting pitch and roll angles of a camera while a vehicle travels based on captured images of an object (see ¶0022-0023); Rathi et al. (US 2014/0247352 A1) teaches the determination of a change in distance to an object exterior to a vehicle which is indicative of a pitch or roll of the vehicle (see claim 9) using a plurality of cameras installed around the periphery of the vehicle (see ¶0024); and Wang et al. (US 2015/0332098 A1) teaches estimating a vehicle dynamics parameter that includes a pitch and roll of the vehicle using matching feature points in images from cameras (see ¶0032).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sara J Lewandroski, whose telephone number is (571) 270-7766. The examiner can normally be reached Monday-Friday, 9 am-5 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramya P Burgess, can be reached at (571) 272-6011. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SARA J LEWANDROSKI/
Examiner, Art Unit 3661

/RAMYA P BURGESS/
Supervisory Patent Examiner, Art Unit 3661

Prosecution Timeline

Nov 29, 2023: Application Filed
Sep 11, 2025: Non-Final Rejection (§103)
Dec 15, 2025: Response Filed
Feb 28, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600245: POWER CONTROL APPARATUS FOR VEHICLE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12600371: CONTROL DEVICE, CONTROL METHOD AND NON-TRANSITORY STORAGE MEDIUM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596519: AUTONOMOUS MOBILE BODY, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12576987: COMPUTER-BASED SYSTEMS AND METHODS FOR FACILITATING AIRCRAFT APPROACH (granted Mar 17, 2026; 2y 5m to grant)
Patent 12571180: CONTROLLING AN EXCAVATION OPERATION BASED ON LOAD SENSING (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 91% (+9.9%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate

Based on 582 resolved cases by this examiner. Grant probability is derived from the career allow rate.
