Prosecution Insights
Last updated: April 19, 2026
Application No. 18/384,770

Dimensional Measurement Method Based on Deep Learning

Final Rejection — §102, §103, §DP

Filed: Oct 27, 2023
Examiner: SUMMERS, GEOFFREY E
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: UNITX, INC.
OA Round: 2 (Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72%, above average (249 granted / 348 resolved; +9.6% vs TC avg)
Interview Lift: strong, +35.4% among resolved cases with an interview
Typical Timeline: 2y 5m average prosecution (27 applications currently pending)
Career History: 375 total applications across all art units

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 28.6% (-11.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 348 resolved cases.

Office Action

Grounds: §102, §103, §DP
DETAILED ACTION

Response to Amendment

Claims 1-20 were previously pending. Applicant's amendment filed January 26, 2026, has been entered in full. Claims 1, 4-7, 9-13, and 17-20 are amended. No claims are added or cancelled. Accordingly, claims 1-20 are now pending.

Response to Arguments

Applicant has filed a terminal disclaimer (Remarks filed January 26, 2026, hereinafter Remarks: Page 11). Accordingly, the previous nonstatutory double patenting rejection is withdrawn.

Applicant has amended the claims to correct informalities (Remarks: Page 11). Accordingly, the previous objections to the claims are withdrawn.

Applicant has amended claims 12 and 18 and argues that they should not be interpreted under 35 U.S.C. 112(f) (Remarks: Page 11). Examiner agrees. The claims now recite a "camera," which is a structural term. The previous interpretation under 35 U.S.C. 112(f) is withdrawn.

Applicant argues that the amendments to the claims have overcome the previous rejections under 35 U.S.C. 112 (Remarks: Pages 11-12). Examiner agrees. The previous rejections under 35 U.S.C. 112 are withdrawn.

Applicant traverses the rejections under 35 U.S.C. 102 over the Kounosu reference (Remarks: Pages 12-13). In particular, Applicant acknowledges that "Kounosu teaches identifying a 'position' or a 'location' of each hole with respect to the workpiece," but argues that this is not a "measurement" (Remarks: Page 13). Examiner respectfully disagrees. Claims are given their broadest reasonable interpretation (BRI) during examination. MPEP 2111. Under BRI, the words of a claim are given their plain meaning, unless such meaning is inconsistent with the specification. MPEP 2111.01, Subsection I. The plain meaning of a term is the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time. Id. The plain meaning of "measurement" clearly includes determining a position or location of something, i.e., measuring the position or location. In Kounosu, the position or location of a hole is being measured with respect to the workpiece. Consider, for example, the following U.S. patents that refer to determining the position or location of a hole as measurement:

'Hull' (US 11,017,559 B2) – throughout, including at Col. 1
'Yamaoka' (US 5,771,309) – throughout, including at the Abstract
'Beeson' (US 4,754,417) – throughout, including Col. 5, line 62 et seq.

This plain meaning is not inconsistent with the specification, and Applicant has not presented any specific argument that it is. For at least these reasons, Applicant's arguments are respectfully non-persuasive.

Applicant traverses the rejections under 35 U.S.C. 102 over the Shin reference (Remarks: Pages 14-16). In particular, Applicant argues that "Shin teaches measuring the 'shortest distance between the two points of the six axes for grasping orientation' and not measuring 'the target object'" (Remarks, emphasis in original). Examiner respectfully disagrees. Shin draws "straight lines" (also referred to as axes) out from the center point of a target object. Section III, 2nd-to-last paragraph. "[T]hese lines find two points that meet the edge of the object" (emphasis added). Id. This shows that the two points lie not only on the straight-line axis, but also on the target object. Therefore, determination of "the axis having the shortest distance between the two points of the six axes" includes measuring the length of the target object along each of the six axes, because the "distance" is the distance between two points on the edge of the object. Shin is measuring a distance between points on the edge of the object, with the axes simply defining the lines along which the target object is being measured. This is analogous to measuring an object's length with a tape measure: the tape measure defines an axis of measurement, and the length is measured as the difference between two edge points of the object along that tape-measure axis.

Applicant traverses the previous rejections under 35 U.S.C. 103 for substantially the same reasons discussed above with respect to the rejections under 35 U.S.C. 102 (Remarks: Pages 16-17). Examiner respectfully disagrees, for substantially the same reasons presented above.
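To visualize the Shin procedure the traversal turns on, here is a minimal sketch of axis-based edge-to-edge measurement, assuming a binary object mask of the kind Mask R-CNN outputs. The function name, the pixel-walking search, and the NumPy usage are illustrative assumptions, not code from Shin.

```python
import numpy as np

def measure_axes(mask: np.ndarray, n_axes: int = 6):
    """From the mask's center of gravity, draw n_axes straight lines,
    find the two points where each line meets the object's edge, and
    measure the distance between them (one length per axis)."""
    ys, xs = np.nonzero(mask)                 # pixels belonging to the object
    cy, cx = ys.mean(), xs.mean()             # center of gravity
    results = []
    for k in range(n_axes):
        theta = np.pi * k / n_axes            # axes spaced over 180 degrees
        dx, dy = np.cos(theta), np.sin(theta)
        pts = []
        for sign in (1, -1):                  # walk out both ways from the center
            last = (int(round(cx)), int(round(cy)))
            t = 0.0
            while True:
                x = int(round(cx + sign * t * dx))
                y = int(round(cy + sign * t * dy))
                if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]):
                    break                     # stepped off the object: previous pixel was the edge
                last = (x, y)
                t += 1.0
            pts.append(last)                  # edge point on this side of the axis
        length = float(np.hypot(pts[0][0] - pts[1][0], pts[0][1] - pts[1][1]))
        results.append((theta, pts, length))
    return results

# Shin selects the axis with the shortest edge-to-edge distance for grasping;
# that distance is itself a length measurement of the object along that axis:
# shortest_axis = min(measure_axes(mask), key=lambda r: r[2])
```

The two edge points per axis are what the Examiner identifies as location points; the per-axis length is the tape-measure-style measurement described above.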
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 3-7 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by 'Kounosu' (WO 2021/177245 A1; cited in parent application no. 17/411,339).

Regarding claim 1, Kounosu discloses a dimensional measurement method based on deep learning (e.g., Figure 7; see further mapping below), the method comprising:

capturing, by a processor (e.g., [0013], Fig. 1, image processing device 10, which includes a CPU), an image of a target object according to a preset location precision to obtain an image (e.g., [0015], [0017], [0035], Fig. 7, step S1, a wide-angle image of the work target is captured), wherein the location precision indicates an imaging resolution of one or more location points used to measure a specified portion of the target object (e.g., [0019], the location points used to measure the target object are positions of holes; [0017]-[0018], Fig. 3A, the preset wide-angle configuration indicates a relatively lower imaging resolution of the holes compared to the narrow-angle configuration shown in Fig. 3B; i.e., each hole is covered by less image area in the wide-angle view than in the narrow-angle view);

determining, by the processor (see above), at least one target region from the image (e.g., [0018], Fig. 3A, area 201 is determined as a target region), with each target region including at least one of the one or more location points (Fig. 3A, the target region 201 includes the holes);

processing, by the processor (see above), at least one target region using a pre-trained neural network to obtain first position information of each location point (e.g., [0019], Fig. 3B, machine learning is used to obtain positions of each hole in image 102 of the target region; Fig. 6, [0021] et seq., the machine learning is a pre-trained neural network); and

determining, by the processor (see above), a measurement of the specified portion of the target object ([0032], the position of each hole with respect to the workpiece is determined) according to the location precision ([0032], hole positions are determined based on pan, tilt, and zoom of the camera, which is the mechanism for capturing wide- or narrow-angle images – [0017], [0019]) and the first position information of each location point ([0032], hole positions are determined based on their positions in the image detected by the machine learning).
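A quick hypothetical makes the wide-versus-narrow resolution mapping concrete. The sensor size, field widths, and hole diameter below are invented for illustration and do not come from Kounosu:

```python
# Same sensor, two fields of view: a wide view covering the whole workpiece
# (Fig. 3A style) versus a zoomed view of the target region (Fig. 3B style).
sensor_px = 1920                              # hypothetical sensor width in pixels
for field_mm, label in [(400, "wide angle"), (100, "narrow angle")]:
    px_per_mm = sensor_px / field_mm          # imaging resolution at this zoom
    hole_px = 5 * px_per_mm                   # pixels spanned by a 5 mm hole
    print(f"{label}: {px_per_mm:.1f} px/mm, a 5 mm hole spans {hole_px:.0f} px")
# wide angle: 4.8 px/mm, a 5 mm hole spans 24 px
# narrow angle: 19.2 px/mm, a 5 mm hole spans 96 px
```

Under these assumed numbers, each hole is covered by a quarter of the pixels in the wide-angle view, which is the sense in which the Examiner equates the preset view configuration with a "location precision."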
Regarding claim 3, Kounosu discloses the method according to claim 1, wherein when the range of positions of the location points is known (e.g., Figs. 3A-B, the holes are known to be within the bounds of the workpiece W), and the number of location points used in measuring the target object is one (while the specific example shown in Figs. 3A-B has multiple holes, the target region is defined the same way [i.e., as the region of the whole workpiece] regardless of the number of holes, including if there were only one hole), the processor identifies a region corresponding to a range of positions of the location point in the image as the target region ([0018], Fig. 3A, range 201), and wherein the range of positions of the location point indicates a smallest area where the location points appear in a field of view of the image ([0018], Fig. 3A, target range 201 is a bounding box, i.e., the smallest rectangular area including the workpiece where the holes appear in the field of view of the image), and the field of view indicates the target region captured when the image capture component captures an image of the target object (e.g., Fig. 3A shows the target region within the field of view; Fig. 3B likewise shows a narrower field of view of an image captured to show only the target region).

Regarding claim 4, Kounosu discloses the method according to claim 1, wherein determining, by the processor, the at least one target region from the said image to be processed, further comprises: when a range of positions of the location points is known (e.g., Figs. 3A-B, the holes are known to be within the bounds of the workpiece W, and within image 101), and a number of location points used in measuring the target object is greater than one (e.g., Figs. 3A-B show examples where the number of holes – i.e., the number of location points – is greater than one): determining, by the processor, multiple selections that choose target regions (e.g., [0017], Fig. 3A, image 101 is taken to capture the workpiece target region; the region covered by image 101 is a first selection of a target region; [0018], Fig. 3A, machine learning is applied to select target region 201 within the image; the bounding-box region 201 is a second, more refined selection of a target region), based on the range of positions of each location point (both selections are determined to find the workpiece, which is the range of positions in which each location point occurs).

Regarding claim 5, Kounosu discloses the method according to claim 4, wherein determining, by the processor, the at least one target region from the said image to be processed, further comprises: when a range of positions of the location points is known (e.g., Figs. 3A-B, the holes are known to be within the bounds of the workpiece W, and within image 101), and a number of location points used in measuring the target object is greater than one (e.g., Figs. 3A-B show examples where the number of holes – i.e., the number of location points – is greater than one): determining, by the processor, a total area of the target regions under the respective selections (e.g., Fig. 3A, both image 101 and bounding box 201 define total areas of the target regions for each respective selection).

Regarding claim 6, Kounosu discloses the method according to claim 4, wherein determining, by the processor, the at least one target region from the said image to be processed, further comprises: when a range of positions of the location points is known (e.g., Figs. 3A-B, the holes are known to be within the bounds of the workpiece W, and within image 101), and a number of location points used in measuring the target object is greater than one (e.g., Figs. 3A-B show examples where the number of holes – i.e., the number of location points – is greater than one): setting, by the processor, a selection that has a smallest total area as the target selection (e.g., [0018], Fig. 3A, bounding-box region 201 is selected, and it has a smaller total area than the image region 101 within which it is placed).
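A toy numeric illustration of the selection logic read onto claims 4-6 (the coordinates and window size here are hypothetical, chosen only to show the comparison):

```python
import numpy as np

# Hypothetical hole locations (x, y) known to lie on the workpiece.
points = np.array([[120, 80], [340, 95], [310, 260]])

# One candidate selection: a single bounding box, the smallest rectangle
# in which all of the location points appear (claim 3's "smallest area").
x0, y0 = points.min(axis=0)
x1, y1 = points.max(axis=0)
bbox_area = (x1 - x0) * (y1 - y0)            # 220 * 180 = 39600 px^2

# Another candidate selection: one small window per location point.
window = 64
per_point_area = len(points) * window ** 2   # 3 * 4096 = 12288 px^2

# Claim 5 totals the area under each selection; claim 6 keeps the
# selection with the smallest total area as the target selection.
target_selection = min([("bounding box", bbox_area),
                        ("per-point windows", per_point_area)],
                       key=lambda s: s[1])
```

With these numbers the per-point windows win; in Kounosu's Fig. 3A the analogous comparison is between the full image 101 and the smaller bounding box 201.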
Regarding claim 7, Kounosu discloses the method according to claim 4, wherein determining, by the processor, the at least one target region from the said image to be processed, further comprises: when a range of positions of the location points is known (e.g., Figs. 3A-B, the holes are known to be within the bounds of the workpiece W, and within image 101), and a number of location points used in measuring the target object is greater than one (e.g., Figs. 3A-B show examples where the number of holes – i.e., the number of location points – is greater than one): determining, by the processor, at least one target region from the image by a target selection (e.g., [0018], Fig. 3A, target region 201).

Claims 12-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by 'Shin' ("Integration of deep learning-based object recognition and robot manipulator for grasping objects," 2019).

Regarding claim 12, Shin discloses a dimensional measurement method based on deep learning (e.g., Fig. 2), comprising: capturing, by a camera (e.g., Fig. 2, RGB-D Camera; Section V, 1st paragraph), an image of a target object (e.g., Figs. 3 and 4, images of various target objects); identifying, by a processor (see, e.g., Fig. 2, PC; Sec. II.B, last par.; Sec. V, 1st par.), one or more target regions within the image of the target object (e.g., Fig. 2, Mask R-CNN; Sec. III, Mask R-CNN is applied on each frame, which provides masks identifying target regions within the image; Fig. 4(c) shows an example of the mask output [best seen in color]) that include one or more location points (e.g., Sec. III, 2nd-to-last par., points along the edges of the regions are location points used for measuring); identifying, by the processor using deep learning, the one or more location points within the one or more target regions (e.g., Sec. III, 2nd-to-last par., results of Mask R-CNN deep learning are used to set a center of gravity, and straight lines are drawn within the region to find points that meet the edge of the object; the points that meet the edge of the object are the location points, and the line-drawing process is the identifying of the location points); determining, by the processor, a measurement of at least a portion of the target object identified by the one or more target regions within the image of the target object (e.g., Sec. III, 2nd-to-last par., "the axis having the shortest distance between the two points of the six axes is selected for grasping orientation"; the determination of the distances between the points along the axis is a measurement of at least a portion of the target object at least because, as noted above, the points lie on the object's edges; i.e., the distances between the points are measurements of the length of the target object along each of the axes).
Regarding claim 13, Shin discloses the dimensional measurement method based on deep learning of claim 12, further comprising: identifying, by the processor, a second set of one or more location points in the one or more target regions within the image of the target object (e.g., Sec. III, 2nd-to-last par., six axes are drawn, and two edge points are found for each axis; the edge points are location points, and the edge points found for each axis are a different set; i.e., the six axes produce six sets of edge/location points, one of which is a "second" set); and determining, by the processor, a second dimensional measurement of the target object based on the target object (e.g., Sec. III, 2nd-to-last par., dimensional measurements of the target object are determined for each of the six axes/sets of location points in order to select a grasping orientation).

Regarding claim 14, Shin discloses the dimensional measurement method based on deep learning of claim 12, further comprising: adjusting an orientation of the target object, by robotic manipulation, based on the determined dimensional measurement (e.g., Sec. III, 2nd-to-last par., the dimensional measurement is used to determine grasping orientation; e.g., Figs. 6-7, Sec. V, a target object such as the teddy bear is picked up and placed in a bin by robotic grasping manipulation, its orientation being adjusted during this process).

Regarding claim 15, Shin discloses the dimensional measurement method based on deep learning of claim 12, further comprising: placing the target object, by robotic manipulation, in another part based on the determined dimensional measurement (e.g., Sec. III, 2nd-to-last par., the dimensional measurement is used to determine grasping orientation; e.g., Figs. 6-7, Sec. V, a target object such as the teddy bear is picked up and placed in a bin by robotic grasping manipulation, the bin being the "another part").

Regarding claim 16, Shin discloses the dimensional measurement method based on deep learning of claim 12, further comprising: providing, by the processor, dimensional data of the target object (e.g., Sec. III, 2nd-to-last par., dimensional data is provided for selection of grasping orientation).

Regarding claim 17, Shin discloses the dimensional measurement method based on deep learning of claim 12, wherein the identifying the one or more location points within the one or more target regions further includes identifying first position information of each of the one or more location points by deep learning in a neural network (e.g., Fig. 2, Mask R-CNN is a deep learning neural network; e.g., Sec. III, 2nd-to-last par., Mask R-CNN deep learning is used to define the positions of the center of gravity and the axes on which the object edge location points are identified; for at least this reason, first position information of each of the points is identified by deep learning in a neural network).
Regarding claim 18, Examiner notes that the claim recites a device comprising: a camera which captures an image of a target object; a memory device; and a processor which implements a method that is substantially the same as the method of claim 12. Shin discloses the method of claim 12 (see above). Shin further discloses implementing its method as a device, comprising a camera which captures an image of a target object (e.g., Fig. 2, RGB-D Camera; Section V, 1st paragraph); a memory device (e.g., Sec. V, 1st par., main PC and/or GTX 1080Ti GPUs); and a processor (e.g., Sec. V, 1st par., main PC and/or GTX 1080Ti GPUs). Accordingly, Shin also discloses the invention of claim 18 for substantially the same reasons as claim 12.

Regarding claim 19, Examiner notes that the claim recites limitations that are substantially the same as limitations recited in claim 13. Shin discloses the invention of claim 13. Accordingly, claim 19 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shin, for substantially the same reasons as claim 13.

Regarding claim 20, Examiner notes that the claim recites limitations that are substantially the same as limitations recited in claim 17. Shin discloses the invention of claim 17. Accordingly, claim 20 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shin, for substantially the same reasons as claim 17.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Kounosu in view of 'Carion' ("End-to-End Object Detection with Transformers," 2020; cited in parent application no. 17/411,339).

Regarding claim 2, Kounosu teaches the method of claim 1 (see above). Kounosu obtains position information of location points using an object detection neural network (e.g., [0019], [0021]). Kounosu teaches several examples of object detection neural networks ([0027]) and teaches that various object detections based on machine learning can be used, without being limited to the given examples ([0027]). Nevertheless, Kounosu does not explicitly teach an example where the neural network includes at least one sub-network that corresponds to the target region, and the said sub-network includes an encoder and a decoder, and the pre-trained neural network processes at least one target region to obtain position information of each location point, and wherein the processor performs feature extraction on the at least one target region by obtaining a feature map of the target region by the encoder in the sub-network corresponding to the target region; and wherein the processor processes the feature map to obtain position information of each location point in the target region by the decoder in the sub-network.

However, Carion does teach an approach for object detection based on machine learning (e.g., Fig. 2, DETR), where a neural network (Fig. 2, DETR network) includes at least one sub-network that corresponds to the target region (Fig. 2, the encoder-decoder sub-network that processes the input image, which is the target region in Kounosu), and the said sub-network includes an encoder and a decoder (Fig. 2, encoder and decoder), and the pre-trained neural network processes at least one target region to obtain position information of each location point (Fig. 2, right, object detections), and wherein the processor performs feature extraction on the at least one target region by obtaining a feature map of the target region by the encoder in the sub-network corresponding to the target region (e.g., Fig. 2, the image is processed through the backbone and then through the encoder, which obtains a transformed feature map of the input image, i.e., the target region); and wherein the processor processes the feature map to obtain position information of each location point in the target region by the decoder in the sub-network (Fig. 2, the decoder processes features passed from the encoder in order to predict object positions marked by bounding boxes).

Carion teaches that its DETR model achieves similar performance to Faster RCNN (Section 4.1), which is one of the types of models suggested by Kounosu ([0027]), and provides an advantageously simplified detection pipeline (e.g., Page 214, middle paragraph). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method of Kounosu with the DETR object detection of Carion in order to improve the method, with the reasonable expectation that this would result in a method that used a suitable object detector with an advantageously simplified detection pipeline. This technique for improving the method of Kounosu was within the ordinary ability of one of ordinary skill in the art based on the teachings of Kounosu and Carion. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Kounosu and Carion to obtain the invention as specified in claim 2.
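For orientation, here is a heavily condensed PyTorch sketch of the encoder/decoder flow Carion's Fig. 2 depicts, in the spirit of the minimal model in the DETR paper. The layer sizes, the crude positional encoding, and the head names are simplifications for illustration, not the published architecture:

```python
import torch
from torch import nn
from torchvision.models import resnet50

class MinimalDETR(nn.Module):
    """Simplified DETR-style detector: CNN backbone -> transformer
    encoder (feature map of the input/target region) -> transformer
    decoder -> per-query class scores and box coordinates."""
    def __init__(self, num_classes: int, hidden: int = 256, n_queries: int = 100):
        super().__init__()
        backbone = resnet50()
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.proj = nn.Conv2d(2048, hidden, kernel_size=1)          # 2048-ch features -> hidden
        self.transformer = nn.Transformer(hidden, nhead=8,
                                          num_encoder_layers=6,
                                          num_decoder_layers=6)
        self.queries = nn.Parameter(torch.rand(n_queries, hidden))  # learned object queries
        self.pos = nn.Parameter(torch.rand(50 * 50, hidden))        # crude positional encoding (feature maps up to 50x50)
        self.class_head = nn.Linear(hidden, num_classes + 1)        # +1 for "no object"
        self.box_head = nn.Linear(hidden, 4)                        # (cx, cy, w, h), normalized

    def forward(self, image: torch.Tensor):
        f = self.proj(self.backbone(image))              # encoder input: feature map
        b, c, h, w = f.shape
        src = f.flatten(2).permute(2, 0, 1)              # (HW, B, hidden)
        src = src + self.pos[: h * w].unsqueeze(1)
        tgt = self.queries.unsqueeze(1).repeat(1, b, 1)  # (queries, B, hidden)
        out = self.transformer(src, tgt)                 # decoder output per query
        return self.class_head(out), self.box_head(out).sigmoid()
```

A smoke test such as `logits, boxes = MinimalDETR(num_classes=1)(torch.rand(1, 3, 224, 224))` would yield 100 candidate detections per image. The encoder output plays the role of the claimed feature map of the target region, and the decoder's box head supplies the position information of each detected point or object.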
Claims 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Kounosu in view of 'Růžička' ("Fast and accurate object detection in high resolution 4K and 8K video using GPUs," 2018; cited in parent application no. 17/411,339).

Regarding claim 8, Kounosu teaches the method of claim 1 (see above). Kounosu teaches, when the range of positions of the location points is unknown, applying a machine learning object detector to determine at least one target region (e.g., [0018], Fig. 3A). As the object detector, Kounosu lists examples including RCNN, YOLO, and SSD ([0027]), but states that other object detectors can be used ([0028]). Kounosu does not explicitly teach downsampling, by the processor, the image according to a preset downsampling ratio to obtain an intermediate image.

However, Růžička does teach an object detector that identifies a target region of an image by: downsampling, by the processor, the image according to a preset downsampling ratio to obtain an intermediate image (e.g., Fig. 3, stage I, downsampling from 2160x2160 to 608x608). Růžička teaches that current state-of-the-art models, such as YOLO, are "focused on working with low-resolution images" (Sec. 1, 2nd and 3rd pars.), but that such low-resolution images lose a lot of detail relative to what can be captured with modern high-resolution cameras (Sec. 1, 4th par.). Růžička teaches that its approach can achieve fast performance without losing details such as small objects due to downscaling (Sec. 4). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method of Kounosu with the object detector of Růžička in order to improve the method, with the reasonable expectation that this would result in a method whose object detection was fast and advantageously able to detect small details. This technique for improving the method of Kounosu was within the ordinary ability of one of ordinary skill in the art based on the teachings of Kounosu and Růžička. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Kounosu and Růžička to obtain the invention as specified in claim 8.

Regarding claim 9, Kounosu in view of Růžička teaches the method of claim 8, and Růžička further teaches, when determining by the processor at least one target region from the image to be processed (i.e., as part of object detection – see the rejection of claim 8), further comprising: determining, by the processor, second position information of each location point in the intermediate image (e.g., Fig. 3, stage I, the initial YOLO object detection produces detections in the downscaled image; in the context of Kounosu, these detections are positions of the workpiece, which are coarse positions of each location point since they are all included in the workpiece).

Regarding claim 10, Kounosu in view of Růžička teaches the method of claim 9, and Růžička further teaches, when determining by the processor at least one target region from the image to be processed (i.e., as part of object detection – see the rejection of claim 8), further comprising: determining, by the processor, third position information of each location point in the image to be processed according to the second position information, wherein the third position information indicates the positions of the location points in the image to be processed, the position information of the location points in the image corresponding to the second position information (e.g., Fig. 3, stage II, bounding boxes from the stage I YOLO are placed on corresponding positions of the full-resolution input image).

Regarding claim 11, Kounosu in view of Růžička teaches the method of claim 10, and Růžička further teaches, when determining by the processor at least one target region from the image to be processed (i.e., as part of object detection – see the rejection of claim 8), further comprising: determining, by the processor, at least one target region from the image to be processed according to the third position information (e.g., Fig. 3, processing continues through the remainder of stage II and stage III to produce output bounding boxes) and preset dimensions (e.g., Fig. 3, the processing to produce the output target-region bounding boxes is based on various preset dimensions, such as the preset model input dimensions of 608x608 pixels).
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEOFFREY E SUMMERS, whose telephone number is (571) 272-9915. The examiner can normally be reached Monday-Friday, 7:00 AM to 3:30 PM ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEOFFREY E SUMMERS/
Examiner, Art Unit 2669

Prosecution Timeline

Oct 27, 2023
Application Filed
Oct 21, 2025
Non-Final Rejection — §102, §103, §DP
Jan 26, 2026
Response Filed
Feb 06, 2026
Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this examiner involving similar technology

Patent 12586379: SYSTEM FOR DETECTING OCCURRENCE PERIOD OF CYCLICAL EVENT
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12561755: System and Method for Image Super-Resolution
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12555205: METHOD AND APPARATUS WITH IMAGE DEBLURRING
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12541838: INSPECTION APPARATUS AND REFERENCE IMAGE GENERATION METHOD
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12536682: METHOD AND SYSTEM FOR GENERATING A DEPTH MAP
Granted Jan 27, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants; these illustrate what succeeded before this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+35.4%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate

Based on 348 resolved cases by this examiner. Grant probability derived from career allow rate.
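As a check on that derivation: 249 granted / 348 resolved ≈ 71.6%, which appears rounded to the displayed 72%.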
