Prosecution Insights
Last updated: April 19, 2026
Application No. 18/552,722

OBJECT DISTANCE DETECTING DEVICE

Status: Final Rejection — §102, §103
Filed: Sep 27, 2023
Examiner: LEMIEUX, IAN L
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Hitachi Astemo, Ltd.
OA Round: 2 (Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 4m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 87% (496 granted / 569 resolved) — above average, +25.2% vs TC avg
Interview Lift: +9.6% across resolved cases with interview (moderate, ~+10% lift)
Typical Timeline: 2y 4m average prosecution; 34 applications currently pending
Career History: 603 total applications across all art units
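
The headline percentages follow from simple arithmetic on the career data (a sketch of the apparent derivation; reading the interview lift as additive percentage points is an assumption on our part):

$$ \frac{496}{569} \approx 87.2\% \,, \qquad 87.2\% + 9.6\% \approx 96.8\% \approx 97\% $$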

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 39.6% (-0.4% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 19.4% (-20.6% vs TC avg)
Comparison baseline: Tech Center average (estimate). Based on career data from 569 resolved cases.

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed 10/31/2025 in response to the Non-Final Office Action mailed 09/24/2025 has been entered. Claims 1-3 and 5-7 are currently pending in U.S. Patent Application No. 18/552,722, and an Office action on the merits follows.

Claim Interpretation and 35 USC § 112(f) Invocation

Applicant's amendment striking the instances of the nonce terminology/'means' alternative '___ section,' considered at prong A of the three-prong test set forth in MPEP 2181, in order to more clearly avoid any invocation of 35 USC 112(f), is acknowledged. Claim interpretation follows the guidance presented in MPEP 2173.01 and MPEP 2111.01 (see the flow chart therein), which may still warrant an interpretation derived from Applicant's Specification for any instances in which the terminology relied upon does not have a known/recognized 'ordinary and customary' meaning in the art.

Response to 35 USC § 101 Rejections

Applicant's remarks (filed 10/31/2025) at page 7 concerning the subject matter eligibility analysis at Prong Two of Step 2A are determined persuasive, and the corresponding rejections of the claims are withdrawn accordingly. More specifically, Applicant identifies the baseline direction selection limitation as an 'additional element' that realizes Applicant's improvement (MPEP 2106.05(a)) to the 'technical field' identified in Applicant's remarks at page 8 (i.e., "the field of object distance detecting devices"), as distinguished from, as previously asserted by the Examiner, an 'additional element' that at most generally links the exception to an environment/field of use in which two or more stereo camera configurations/baselines are utilized.

Examiner respectfully disagrees with Applicant's Prong One analysis, as the mere presence of 'additional elements' does not prevent a Prong One determination/finding that the claim 'recites' an exception, as the most recent Examples from the 2024 PEG make clear. Examiner also disagrees with any assertion that the exception, which is a distance determination/calculation, is 'used in some other meaningful way' as recited in the claim(s): the representative/independent claim does not recite using the calculated distance in any manner whatsoever, and instead rests with calculating/determining said distance.

Response to Arguments/Remarks

Applicant's arguments/remarks regarding Urushido as applied have been fully considered but they are not persuasive. Applicant's remarks assert that Urushido fails to fairly disclose the 'region specifying' and subsequent limitations based thereon, since Urushido's Figure 17 features an edge detection that occurs after a depth estimation. Applicant's remarks, at least implicitly, further appear to assert that an initial depth estimation at Pr2 is the only distance detection equivalent applicable in Urushido. Examiner disagrees, and notes Urushido features additional disclosure making clear that the edge detection of Pr3, even for instances involving an initial depth estimation (Pr2; Fig. 9 suggests depth information from Pr2 need not be a basis for Pr3 edge detection, and that the two operations are optionally performed in parallel), still occurs prior to (as it is required for) the map voting probability calculation of Pr4 and the subsequent object detection operations of 23 as performed in/after Pr4/Pr5.
Furthermore, that distance detection equivalent of Urushido need not be drawn to any initial depth estimation that may occur prior to or in conjunction with an initial specifying at Pr2/Pr3, but may instead be, for the case of Urushido as applied, the distance determination performed in the calculation of distance to an object estimated on the basis of a parallax of the object (Urushido [0047-0049]). In other words, Urushido as previously applied did not draw that final distance calculation step to Pr2, but instead to the disclosure of [0047-0049] (operations of object detection unit 23 are distinct from those of 22), which features a distance determination that is based on a parallax of the object as seen from those imaging units selected for the more accurate/ideal baseline (Urushido [0048]). As understood by the Examiner, Urushido evidences the obvious nature of a baseline selection so as to ultimately derive a more accurate object detection/distance determination, and more specifically one that is on the basis of a parallax image.

Even if Urushido failed to perform a region specifying equivalent as a preprocessing step, modification in that respect would be obvious to POSITA, as it may serve to reduce processing otherwise applied to additional image portions not concerning a select/prioritized object of interest; similar rationale was previously presented in the combination relied upon in the rejection of claim 6 (see page 18 of the Non-Final). For at least these reasons, Examiner does not find the claims as amended distinguished over the prior art; as evidenced at least by Urushido, POSITA would recognize that in certain circumstances select baselines may serve in both a more accurate parallax/disparity image determination and resultant depth/distance determination(s), given the known relationship(s) therebetween.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

1. Claims 1, 3, and 5 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Urushido et al. (US 2022/031400 A1).

As to claim 1, Urushido discloses an object distance detecting device which detects a distance to a target object (Abs: "an object detection unit that detects the object based on the depth estimated by the depth estimation unit and reliability of the depth") around a vehicle ([0052] "Note that, in the present embodiment, a description will be given of an example where the information processing apparatus 1 is mounted on the unmanned moving body.
However, besides this, the information processing apparatus 1 may be mounted on an autonomous mobile robot, a vehicle, a portable terminal, or the like"), the object distance detecting device comprising at least three imaging cameras which image a same target object (Figures 6-8, apparatus 1 comprises 10, further comprising three imaging sections 10a, 10b, and 10c; Fig. 18; [0052] "As illustrated in FIG. 6, the information processing apparatus 1 includes three imaging units (cameras) 10a, 10b, and 10c, a control unit 20, and a storage unit 30"; [0054] "As illustrated in FIG. 7 for example, the stereo camera system 10 is a trinocular camera system including three imaging units 10a, 10b, and 10c. The stereo camera system 10 is attached to, for example, a lower portion of the unmanned moving body with a support member 12 interposed therebetween. The imaging units 10a, 10b, and 10c are arranged in a V shape. That is, the first and second stereo cameras 11a and 11b are arranged so that a direction of a baseline length of the first stereo camera 11a and a direction of a baseline length of the second stereo camera 11b are perpendicular to each other", etc.), wherein the object detecting device is configured to:

specify a region, in which the target object exists, on a basis of an image acquired from at least one of the imaging cameras (detection section 22, Fig. 12, processing steps Pr2-Pr3, [0059] "The edge detection unit 22 detects the edge of the object from a monocular image (RGB image) captured by any of the imaging units 10a, 10b and 10c, and generates an edge image (see FIG. 10 to be described later)"; Examiner identifies edge regions corresponding to one or more objects as detected by 22 to be those most equivalent to the specified regions, since an edge is detected prior to and for the processing of Pr4 and Pr5; see also the Response to Arguments/Remarks above);

select one base line direction among a plurality of base line directions defined by any two imaging sections among the at least three imaging cameras on a basis of an image of the region specified, and select an image of the region acquired from each of the two imaging cameras defining the selected base line direction ([0080] "For example, a case is considered where edges in the horizontal direction and edges in the vertical direction are detected by the edge detection unit 22, for example, as illustrated in FIG. 12. In this case, a second probability distribution corresponding to the first stereo camera 11a in which the direction of the baseline length is the horizontal direction as illustrated in FIG. 13 becomes such a distribution in which the highest probability overlaps the edges in the vertical direction perpendicular to the baseline length as illustrated in FIG. 14. Meanwhile, a second probability distribution corresponding to the second stereo camera 11b in which the direction of the baseline length is the vertical direction as illustrated in FIG. 15 becomes such a distribution in which the highest probability overlaps the edges in the horizontal direction perpendicular to the baseline length as illustrated in FIG. 16", [0081-0082]; Examiner notes the 'selected' images are those corresponding to the baseline, i.e. that of either 11a or 11b, associated with the highest reliability (based on the object(s), edge lines, and corresponding edge line angle of directions); [0008] "the reliability being determined in accordance with an angle of a direction of an edge line of the object with respect to the directions of the baseline lengths of the plurality of stereo cameras"; [0006] "In the object detection using the stereo camera, for example, on the basis of a parallax of an object seen from right and left cameras, a distance between the camera and the object is measured. However, when the object as a measuring target extends in a direction of a baseline length of the stereo camera, there is a problem that it is difficult to measure the distance");

detect a distance to the target object existing in the region on a basis of the image selected by the image selection section ([0047] "In object detection using such a stereo camera system 110 as described above, by using a method such as triangulation for example, a distance to an object (hereinafter, the distance will be referred to as a "depth") is estimated on the basis of a parallax of the object seen from the left and right imaging units 110a and 110b", [0048-0049], etc.);

generate a parallax image of the region from the image selected (Urushido determining depth reliability/probability distributions (parallax image equivalents) for both 11a (horizontal baseline, Fig. 13, cameras 10a and 10b) and 11b (vertical baseline, Fig. 15, cameras 10b and 10c); see e.g. Fig. 16 and voting for the 'second stereo camera' (11b) of Fig. 15; [0079-0080]); and

detect the distance to the target object on a basis of the parallax image ([0047] as quoted above, [0048-0049], [0058] "From the captured images captured by the first and second stereo cameras 11a and 11b, the depth estimation unit 21 estimates a depth of the object included in the captured images. On the basis of a parallax of the object seen from the imaging units 10a and 10b and a parallax of the object seen from the imaging units 10b and 10c, the depth estimation unit 21 estimates the depth by using, for example, a known method such as triangulation", etc.).
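[Editorial note, not part of the record: the "known method such as triangulation" that both the claims and Urushido invoke is the standard pinhole-stereo relation; the symbols here are generic, not Urushido's:

$$ Z = \frac{f \, B}{d} $$

where $Z$ is the estimated depth, $f$ the focal length, $B$ the baseline length between the two selected cameras, and $d$ the measured disparity (parallax). Because $d$ is measured along the baseline direction, edges parallel to the baseline yield ambiguous correspondences, which is the geometric reason a baseline orthogonal to an object's extending direction gives a higher "reliability of the depth."]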
As to claim 3, Urushido discloses the device of claim 1. Urushido further discloses the device configured to: generate a plurality of parallax images from the image acquired from each of the at least three imaging cameras; select the parallax image generated from the image acquired from each of the two imaging cameras defining the selected base line direction (Urushido determining depth reliability/probability distributions (parallax image equivalents) for both 11a (horizontal baseline, Fig. 13, cameras 10a and 10b) and 11b (vertical baseline, Fig. 15, cameras 10b and 10c); see e.g. Fig. 16 and voting for the 'second stereo camera' (11b) of Fig. 15; [0079-0080]); and detect the distance to the target object on a basis of the parallax image selected by the image selection section ([0047-0049], [0058] "From the captured images captured by the first and second stereo cameras 11a and 11b, the depth estimation unit 21 estimates a depth of the object included in the captured images. On the basis of a parallax of the object seen from the imaging units 10a and 10b and a parallax of the object seen from the imaging units 10b and 10c, the depth estimation unit 21 estimates the depth by using, for example, a known method such as triangulation", etc.).

As to claim 5, Urushido discloses the device of claim 1. Urushido further discloses the device configured to: acquire vehicle information of at least one of motion state information or position information of the vehicle (Fig. 18, control 20A receives information from e.g. IMU 40; [0093] "Moreover, a control unit 20A of the information processing apparatus 1A includes a position/attitude estimation unit 24 in addition to the respective constituents of the above-described control unit 20"; Fig. 22 S11; [0094] "The inertial measurement unit 40 is composed of an inertial measurement unit (IMU) including, for example, a three-axis acceleration sensor, a three-axis gyro sensor, and the like, and outputs acquired sensor information to the position/attitude estimation unit 24 of the control unit 20A. The position/attitude estimation unit 24 detects a position and attitude (for example, an orientation, an inclination, and the like) of an unmanned moving body, on which the information processing apparatus 1A is mounted, on the basis of the captured images captured by the imaging units 10a, 10b, and 10c and the sensor information input from the inertial measurement unit 40. Note that a method for detecting the position and attitude of the unmanned moving body is not limited to a method using the above-described IMU"; [0100] "First, the position/attitude estimation unit 24 of the control unit 20A estimates the position and attitude of the subject machine (Step S11)"); obtain a spatial frequency component in a vertical direction and a spatial frequency component in a horizontal direction of image data of the region by weighting based on the vehicle information (see the claims above; Urushido Fig. 21, [0095-0096], [0097] "Referring to FIG. 21, a description will be given below of an example of the deformation of the second probability distribution, in which the changes of the position and attitude of the subject machine are considered. When the second probability distribution is approximated by a two-dimensional normal distribution of a periphery of the edge as illustrated in FIG. 21 for example, this normal distribution can be represented by values of an x average, a y average, an edge horizontal dispersion, an edge vertical dispersion, an inclination, a size of the entire distribution, and the like. These values are changed in accordance with an angle of the edge and the variations of the position and attitude of the subject machine", [0101]); and select the base line direction on a basis of the spatial frequency component in the vertical direction and the spatial frequency component in the horizontal direction obtained by weighting (Fig. 22 S12-S13, [0101], etc.).
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claims 2 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Urushido et al. (US 2022/031400 A1) in view of Ha (US 2006/0177124 A1).

As to claim 2, Urushido discloses the device of claim 1. Urushido further discloses the device as configured to: obtain a spatial frequency component (Urushido's "extending direction of the object") in a vertical direction (Fig. 2, wherein the extending direction of the object exists in the vertical direction, e.g. for an object such as that of Fig. 5, wherein the tall building is characterized by a primarily vertical direction of spatial components/edges/features) and a spatial frequency component ("extending direction of the object") in a horizontal direction of the image of the region (Fig. 3, e.g. that instance of Fig. 4 wherein the electrical wires have an extending direction/a higher frequency/count of components in the horizontal direction; these interpretations are not inconsistent with Applicant's disclosure, e.g. pgpub [0038-0039]. It should be noted that the "extending direction of the object" is, e.g., that axis in which the object has the longest/most edges, which corresponds to a greater number of high frequency components in the direction that is perpendicular/orthogonal to such an edge/line. To illustrate, Urushido Fig. 16 is understood to illustrate an extending direction that is primarily horizontal (and thus a greater number of high frequency components in the vertical direction), resulting in selection of a vertical baseline (11b, Fig. 15) accordingly); and select the base line direction on a basis of the obtained spatial frequency component in the vertical direction and the obtained spatial frequency component in the horizontal direction (Urushido selects for a baseline that is most orthogonal relative to the extending direction of a target object, since such a baseline enables more accurate disparity (and, given their relationship, depth) determination(s) having a higher "reliability of the depth"; [0080], [0006], [0047] "In this depth estimation, when a direction of a baseline length that indicates a distance between the center of the imaging unit 110a and the center of the imaging unit 110b and an extending direction of the object as a measuring target are not parallel to each other but intersect each other as illustrated in FIG. 2 for example, the depth can be estimated appropriately since it is easy to grasp a correlation between such an object reflected in the video of the right camera and such an object reflected in the video of the left camera", [0048], etc.).

Ha further evidences the obvious nature of determining line/edge directions on the basis of measuring directivity (horizontal vs. vertical) of high frequency components (Fig. 6, 620, 630; Fig. 9, S920; [0011] "The measuring the directivity of the high frequency components comprises measuring high frequency components of the selected image to determine if the selected image has more high frequency components in the horizontal direction or the vertical direction"; Fig. 5A, high frequency in the horizontal direction indicative of vertically oriented lines/edges; Fig. 5B, high frequency in the vertical direction corresponding to horizontally oriented lines. While Ha utilizes measuring directivity of frequency components to determine which components are more prevalent and thereby minimize loss of image quality for any subsequent compression/decompression, Ha evidences the manner in which POSITA would look to such a measuring, with a reasonable expectation of success, when attempting to solve that same problem of determining a dominant orientation for edges of one or more objects, and Ha is analogous art accordingly (see MPEP 2141.01(a) and at least a reasonable-pertinence theory)).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Urushido to further comprise measuring a directivity of high frequency components when determining that 'extending direction of the object' as taught/suggested by Ha, the motivation, as similarly taught/suggested therein, being that such a means for determining the extending direction of the object would constitute readily implemented filtering operations, robust to noise and efficiently performed in a frequency domain, while characterized by a reasonable expectation of success.

As to claim 7, Urushido in view of Ha teaches/suggests the device of claim 2. Urushido in view of Ha further suggests the device configured to select the vertical direction as the base line direction when a number of the spatial frequency components in the vertical direction obtained by the target object region specifying section is large, and select the horizontal direction as the base line direction when a number of the spatial frequency components in the horizontal direction is large (Urushido as modified selects the same by consequence of the selection identified above for the case of claim 2. In other words, sharp edges are characterized by high frequency components (the higher the frequency, the sharper the edge/transition), and high 'horizontal frequency' corresponds to sharp vertical edges (a rapid transition when traversing horizontally signals a strong vertical edge); e.g., Fig. 16 has its longest/most edges/lines oriented horizontally, and so a higher/'large' number of 'vertical' high frequency components, and the vertical baseline is selected. While 'large' is arguably Relative Terminology as identified in MPEP 2173.05(b), the instant claims are not understood to be indefinite because POSITA would understand the abovementioned relationships, Clearone, Inc. v. Shure Acquisition Holdings, Inc., 35 F.4th 1345, 1349, 2022 USPQ2d 509 (Fed. Cir. 2022) (similar to the manner in which how 202 specifically operates need not be disclosed in the Specification); see also Ha, Figures 5A and 5B, and the modification/motivation as proposed for the case of claim 2 above).
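[Editorial aside: the directivity measurement the rejection describes can be sketched in a few lines — compare high-frequency (gradient) energy along the horizontal and vertical traversal directions and pick the baseline parallel to the stronger one, per the claim-7 mapping discussed above. A minimal sketch assuming a grayscale image as a NumPy array; the function name and the threshold-free comparison are illustrative, not taken from Urushido or Ha:]

```python
import numpy as np

def select_baseline(gray: np.ndarray) -> str:
    """Pick a stereo baseline direction from edge directivity.

    High-frequency energy measured while traversing horizontally
    (column-to-column differences) indicates vertical edges, which a
    horizontal baseline resolves well; the converse holds vertically.
    """
    img = gray.astype(float)
    # Energy of horizontal-direction frequency components:
    # rapid changes between adjacent columns => vertical edges.
    horiz_energy = float(np.sum(np.diff(img, axis=1) ** 2))
    # Energy of vertical-direction frequency components:
    # rapid changes between adjacent rows => horizontal edges.
    vert_energy = float(np.sum(np.diff(img, axis=0) ** 2))
    # Claim-7-style rule: many vertical-direction components
    # (i.e., horizontal edges) => choose the vertical baseline.
    return "vertical" if vert_energy > horiz_energy else "horizontal"

# Example: an image of horizontal stripes (horizontal edges) should
# steer selection toward the vertical baseline.
stripes = np.tile(np.repeat([0, 255], 8).reshape(-1, 1), (1, 64))
print(select_baseline(stripes))  # -> "vertical"
```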
2. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Urushido et al. (US 2022/031400 A1) in view of Fujiwara (US 2014/0247344 A1).

As to claim 6, Urushido discloses the device of claim 1. Urushido suggests the device as configured to, in a case where there are a plurality of regions where a target object exists, select the base line direction for one or more subsets of the regions (Urushido at least suggests selecting a baseline on the basis of target object portions that may be characterized by a different spatial frequency component/extending direction relative to other object portions; e.g., while the building as a whole may be characterized by a vertical extending direction (and a large number of high frequency components in a horizontal direction), if the vehicle is primarily concerned with just that 'top portion of a building' B of Fig. 5 ([0048]), the target object portion may be characterized by an extending direction that is horizontal (as the vehicle is clearing the building and a corresponding FOV concerns primarily region B, but does not image/capture the lower half of the building as the vehicle/UAV approaches and is close to the building top portion)).

Urushido, however, fails to disclose selecting multiple and different baselines for different portions of any same object. Stated differently, Urushido appears to only explicitly disclose one target object at a time, and, if considering an object portion as the target object, does not then additionally explicitly determine a baseline best suited for object portions that are not the target object portion. Urushido, however, appears modifiable in this respect, in view of a motivation to consider a plurality of objects in a scene at a time while allowing for a subsequent prioritization of an object of interest; the teachings of Urushido as applied to target object portions are readily extended to instances involving a plurality of objects and respective baseline selections accordingly.

Fujiwara further evidences the obvious nature of dividing an image into a plurality of grid/block portions which are then individually analyzed, reading on a case where there are a plurality of regions where a target object exists, and performing analysis for each region (Fig. 3, regions A1-A9, [0066-0067], etc., calculating evaluation values related to a focus state (characterized by edge sharpness/degree of high frequency components)). POSITA would further recognize such a partitioning may minimize the impact of any outlying information if present in only a few/minority of the corresponding regions.

It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Urushido to further comprise partitioning images into distinct sub-regions and analyzing each individually as taught/suggested by Fujiwara and Ha, and/or, in addition to determining an optimum baseline for one target object, to repeat such a determination for a plurality of objects and/or object portions as suggested by Urushido, the motivation, as similarly taught/suggested therein, being that such an analysis of a plurality of regions would enable handling multiple target objects and/or target object portions of higher priority.
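[Again an editorial illustration only: the Fujiwara-style partitioning the rejection invokes amounts to tiling the frame into a grid (A1-A9 in Fujiwara's Fig. 3) and running the directivity comparison per tile, so each region can carry its own baseline choice. A hypothetical sketch; the 3x3 grid, names, and per-tile rule are assumptions, not the reference's API:]

```python
import numpy as np

def per_region_baselines(gray: np.ndarray, rows: int = 3, cols: int = 3):
    """Split the image into a rows x cols grid (cf. Fujiwara's A1-A9)
    and select a baseline direction independently for each tile."""
    choices = {}
    tile_h, tile_w = gray.shape[0] // rows, gray.shape[1] // cols
    for r in range(rows):
        for c in range(cols):
            tile = gray[r * tile_h:(r + 1) * tile_h,
                        c * tile_w:(c + 1) * tile_w].astype(float)
            # Per-tile directivity: compare gradient energy along each axis.
            horiz = np.sum(np.diff(tile, axis=1) ** 2)  # vertical edges
            vert = np.sum(np.diff(tile, axis=0) ** 2)   # horizontal edges
            choices[(r, c)] = "vertical" if vert > horiz else "horizontal"
    return choices
```

[One design note: analyzing tiles independently also limits the influence of outlier regions, which is the rationale the rejection attributes to the partitioning.]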
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IAN L LEMIEUX, whose telephone number is (571) 270-5796. The examiner can normally be reached Mon-Fri, 9:00-6:00 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IAN L LEMIEUX/
Primary Examiner, Art Unit 2669

Prosecution Timeline

Sep 27, 2023
Application Filed
Sep 19, 2025
Non-Final Rejection — §102, §103
Oct 31, 2025
Response Filed
Jan 21, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602825
Human body positioning method based on multi-perspectives and lighting system
2y 5m to grant — granted Apr 14, 2026
Patent 12592086
POSE DETERMINING METHOD AND RELATED DEVICE
2y 5m to grant — granted Mar 31, 2026
Patent 12586397
METHOD AND APPARATUS EMPLOYING FONT SIZE DETERMINATION FOR RESOLUTION-INDEPENDENT RENDERED TEXT FOR ELECTRONIC DOCUMENTS
2y 5m to grant — granted Mar 24, 2026
Patent 12579840
BEHAVIOR ESTIMATION DEVICE, BEHAVIOR ESTIMATION METHOD, AND RECORDING MEDIUM
2y 5m to grant — granted Mar 17, 2026
Patent 12573086
CONTROL METHOD, RECORDING MEDIUM, METHOD FOR MANUFACTURING PRODUCT, AND SYSTEM
2y 5m to grant — granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 97% (+9.6%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate

Based on 569 resolved cases by this examiner; grant probability is derived from the career allow rate.
