Prosecution Insights
Last updated: April 19, 2026
Application No. 18/580,681

COUNTING DEVICE, COUNTING METHOD, AND RECORDING MEDIUM

Status: Final Rejection (§103)
Filed: Jan 19, 2024
Examiner: CHEN, BIAO
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% — above average (27 granted / 32 resolved; +22.4% vs TC avg)
Interview Lift: +26.3% on resolved cases with interview (strong)
Avg Prosecution: 2y 5m (typical timeline)
Currently Pending: 25
Total Applications: 57 (across all art units)

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 69.1% (+29.1% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 15.7% (-24.3% vs TC avg)
Based on career data from 32 resolved cases; "vs TC avg" compares against a Tech Center average estimate.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to Applicant's amendment/response filed on 12/19/2025, which has been entered and made of record. Applicant's amendments to the Specification and Claims have overcome each and every objection and 112(b) rejection previously set forth in the Non-Final Office Action mailed 09/19/2025.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Ueda et al. (WO 2021/060077 A1, hereinafter "Ueda_WO") in view of Jia et al. (US 11019316 B1, hereinafter "Jia"), Kikuchi et al. (US 20050093977 A1, hereinafter "Kikuchi"), and Cherevatsky et al. (US 9324145 B1, hereinafter "Cherevatsky").

[Examiner's notes: Ueda et al. (WO 2021/060077 A1, hereinafter "Ueda_WO") was published on 04/01/2021 and is a family member of Ueda et al. (EP 4035801 A1, hereinafter "Ueda_EP"). For examination of this application, it has been assumed that the contents of Ueda_EP and Ueda_WO are identical to each other. Hence, Ueda_EP's citations and corresponding paragraph numbers are used below, because Ueda_EP is published in English as one of the EPO languages.]

Regarding claim 1, Ueda_EP discloses a counting device comprising:

a memory storing instructions; and at least one processor configured to execute the instructions to: (Ueda_EP, para. [0016], "the fish counting system 3 includes an image acquisition unit 30, a counting unit 34, a result provision unit 36, a correction unit 37, a fish count change display provision unit 38, and the storage unit 39. Each of these units 30 to 38 is realized by the cooperation of hardware and software by executing a predetermined program on one or more processors included in a computer. The storage unit 39 is a memory or storage or the like"; para. [0056], "each of the units may be configured by a dedicated memory").

execute detection processing comprising detecting a moving object to be counted from each of a plurality of frame images constituting a captured moving image in which the moving object to be counted is captured; (paras. [0017]-[0019], "The image acquisition unit 30 shown in FIG. 1 acquires a plurality of images by capturing the image capturing region in which a fluid including the fish 1 flows on a time-series basis. The camera 2 captures a moving image at, for example, 30 fps … The counting unit 34 shown in FIG. 1 counts the number of fish based on the plurality of images. the counting unit 34 extracts fish by image processing from the individual images obtained in a time series, and assigns individual identification labels to the extracted fish to identify the individual fish based on locations of the fish in past images"). Note that: (1) a fish as a moving object is to be counted from a plurality of acquired frame images on a time-series basis (video at 30 fps) formulating a captured moving image; and (2) the fish is extracted and labelled for detection and counting.

… for each of a plurality of moving objects to be counted that moves in a direction from one side to another side of edge portions facing each other in the frame image, … (FIG. 2: "Ar2" and "Ar1" face each other in the frame image while one of the fish moves in a direction from "Ar1" to "Ar2"; FIG. 3: a plurality of fish can be regarded as a plurality of moving objects).

select some of the plurality of frame images, each as a selection image, in such a way that capturing order is intermittent among the plurality of frame images constituting the captured moving image, (para. [0014], "By installing the camera 2 in such a predetermined direction, the fish moving from the first fish tank to the second fish tank always pass through the first region Ar1 before reaching the second region Ar2"; FIG. 3: a frame image 62 showing the counted fish superimposed with count completion marks 61 to show a counted fish at least once; para. [0020], "the counting unit 34 also stores in the storage unit 39 mark data D4 indicating positions of count completion marks 61 (refer to FIG. 3) which indicate counted fish, for individual frames (refer to FIG. 1). The mark data D4 is used to display the count completion marks 61 (refer to FIG. 3) indicating counted fish in an image. In this embodiment, the marks are ROI (Region of Interest) indications, and rectangular frames are superimposed on images of the fish, but the marks are not limited to these, and marks of various shapes or colors may be employed."). Note that: (1) the marks 61, as rectangular frames in region Ar2, indicate that the to-be-counted fish are counted; (2) the frame image 62 is selected as a selection image to show that a fish is counted at least once as shown; and (3) it is obvious to one having ordinary skill in the art that the capturing order of the image frames in which the same counted fish appears from Ar1 to Ar2 depends on the time length that a counted fish travels from Ar1 to Ar2 along with the frame rate of the video. Therefore, the capturing order of the image frames is intermittent.

display the selection image on a display device in a mode in which detection result information indicating the moving object to be counted that has been detected in the detection processing is superimposed; (para. [0021], "FIG. 3 is a diagram illustrating an example of a screen including a counting result display 60. The result provision unit 36 shown in FIG. 1 provides, as shown in FIG. 3, the counting result display 60 including an image 62 with the count completion marks 61 indicating counted fish. The count completion marks 61 are rectangular frames superimposed on the images of the fish indicated by dashed-dotted lines in FIG. 3, but are not limited to these"; FIG. 3: image 62 on the display 60). Note that: image 62 with the count completion marks 61 indicating counted fish can be regarded as a selection image.

receive, as correction information, a command to delete the detection result information superimposed on the displayed selection image or a command to add detection result information indicating the moving object to be counted to the selection image; (para. [0022], "The correction unit 37 shown in FIG. 1 accepts a correction operation and corrects the number of fish … when the addition instruction indication 65 is pressed p times, the correction value 67 is +p. p is a natural number equal to or larger than 1. Similarly, when the subtraction instruction indication 66 is operated, the correction value 67 of the number of fish in the currently displayed image (frame) is decremented by 1. The correction value 67 corresponds to the number of times the subtraction instruction indication 66 is operated"; para. [0046], "With this configuration, the addition instruction indication 65 or the subtraction instruction indication 66 may be operated when the image 62 with the count completion marks 61 is visually viewed and a counting omission in which the count completion mark 61 is not assigned or miscounting in which the count completion mark 61 is mistakenly assigned is recognized, and accordingly, efficiency of the correction operation may be improved"; para. [0030], "It is preferred that the result provision unit 36 provides an image with a manual correction mark indicating the corrected fish specified in the correction operation along with the count completion marks 61 … FIG. 5 is a diagram illustrating an example of a screen including the image 62 with a manual correction mark M."). Note that: (1) after accepting a correction instruction or command, the counting numbers can be increased or decreased correspondingly; (2) for manual correction, a manual correction mark M as a circle can be superimposed on the fish for counting correction; and (3) since the mark 61 is a mark (rectangular box) assigned to the counted fish, it is obvious to one having ordinary skill in the art to make corrections (adding or subtracting the fish count) while assigning or de-assigning the mark 61 superimposed on the corresponding fish.

count a plurality of the moving objects to be counted appearing in the captured moving image by using a result of the detection processing corrected in accordance with the correction information; and (para. [0025], "The fish count change display provision unit 38 shown in FIG. 1 provides the fish count change display 70 shown in FIG. 3. This change display 70 includes an indication 71 corresponding to the number of fish counted in a unit of time and the indication 71 is represented on a time-series basis. In this embodiment, as shown in FIG. 3, the fish count change display 70 includes the graph indication 71"; FIG. 3: a plurality of fish as moving objects with markers 61, total fish count 63). Note that: based on the corrected detection result, the fish count change display provision unit 38 performs the counting process for the moving objects appearing in the captured moving image (the video frames) and provides the fish count change display 70 and the total count 63.

output information indicating the counted number of the moving objects to be counted as a counting result. (FIG. 3: item 63 is the total count of the counted fish as a counting result; para. [0021], "The counting result display 60 shown in FIG. 3 includes a total value 63 for an entire moving image (12345 in the figure)"; paras. [0026]-[0027], "the number of fish counted per unit time is shown in a line graph").

However, Ueda_EP fails to disclose, but in the same art of computer graphics and image processing, Jia discloses:

execute tracking processing comprising tracking a same moving object to be counted captured in the plurality of frame images; (Jia, col. 2, lines 17-20, "Multiple frames of the same moving object are tracked sequentially. Image processing such as correction for distortion and extraction of feature enables identification and tracking of the same object across multiple frames"; col. 6, lines 53-54, "Multiple objects in one frame can be tracked simultaneously"). Note that: (1) the same object is tracked in a plurality of frame images in which the object is captured; and (2) the purpose of the tracking can be to count the objects.

Ueda_EP and Jia are in the same field of endeavor, namely computer graphics and image processing. Before the effective filing date of the claimed invention, it would have been obvious to apply tracking of one or more objects, as taught by Jia, to Ueda_EP. The motivation would have been that "Multiple objects in one frame can be tracked simultaneously" (Jia, col. 6, lines 53-54). The suggestion for doing so would allow tracking one or more objects simultaneously. Therefore, it would have been obvious to combine Ueda_EP and Jia.

However, Ueda_EP in view of Jia fails to disclose, but in the same art of computer graphics, Kikuchi discloses:

determine, for each of a plurality of moving objects to be counted that moves in a direction from one side to another side of edge portions facing each other in the frame image, a number of frame images that are required from a frame-in, at which each of the plurality of moving objects appears in the frame image, to a frame-out, at which each of the plurality of moving objects no longer appears in the frame image, using a result of the tracking processing; (Kikuchi, paras. [0008]-[0009], "and a frame-out detecting unit for detecting frame-out of the attention object out of the tracking video … a frame-in detecting unit for detecting frame-in of a display prohibition object into a filtered video, wherein the display unit may receive a selection of an object"). Note that: all frame images from the frame-in frame image to the frame-out frame image, and the frames between them, are determined as a number of frame images for each of the plurality of moving objects, and the number of frames can also be counted and recorded.

calculate, for the plurality of moving objects to be counted, an average of the number of frame images; Note that: it is obvious to one having ordinary skill in the art that the numbers of frame images above can be averaged to formulate an average value.

Ueda_EP in view of Jia, and Kikuchi, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply determining object frame-in / frame-out frames and the frames between them, as taught by Kikuchi, to Ueda_EP in view of Jia. The motivation would have been "a frame-out detecting unit for detecting frame-out of the attention object out of the tracking video" (Kikuchi, paras. [0008]-[0009]). The suggestion for doing so would allow determining the frame-in / frame-out frames and the frames between them, with the numbers of frames being counted. Therefore, it would have been obvious to combine Ueda_EP, Jia, and Kikuchi.

However, the combination of Ueda_EP, Jia, and Kikuchi fails to disclose, but in the same art of computer graphics and image processing, Cherevatsky discloses:

… by selecting the frame images in the capturing order as the selection image for every number of frame images equal to the average; (Cherevatsky, col. 20, lines 58-63, "one out of every predetermined number of captured frames may be selected for segment analysis. In one example, 2,000 images may be selected from the image stream, e.g. one out of every 100 images may be selected in a sequence of 200,000 sequentially numbered frames in the received image stream"). Note that: (1) the frames in the image stream can be selected every predefined number (e.g., 100) of captured frames in the capturing order; and (2) the predefined number can be equal to the average number.

The combination of Ueda_EP, Jia, and Kikuchi, and Cherevatsky, are in the same field of endeavor, namely computer graphics and image processing. Before the effective filing date of the claimed invention, it would have been obvious to apply selecting every predefined number of frames, as taught by Cherevatsky, to the combination of Ueda_EP, Jia, and Kikuchi. The motivation would have been that "one out of every predetermined number of captured frames may be selected for segment analysis" (Cherevatsky, col. 20, lines 58-59). The suggestion for doing so would allow selecting frames at every predefined number of frame images. Therefore, it would have been obvious to combine Ueda_EP, Jia, Kikuchi, and Cherevatsky.
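[Editor's note: To make the disputed claim 1 logic easier to follow, here is a minimal, hypothetical Python sketch of the limitation as mapped above: counting the frames each tracked object needs from frame-in to frame-out, averaging those counts, and selecting every Nth frame in capturing order as a selection image. All names (Track, average_transit_frames, select_frames) and values are illustrative assumptions, not code from the application or the cited references.]

    from dataclasses import dataclass

    @dataclass
    class Track:
        """One tracked moving object (hypothetical structure)."""
        frame_in: int   # first frame index where the object appears
        frame_out: int  # first frame index where it no longer appears

    def average_transit_frames(tracks: list[Track]) -> int:
        """Average the per-object frame counts from frame-in to frame-out."""
        counts = [t.frame_out - t.frame_in for t in tracks]
        return max(1, round(sum(counts) / len(counts)))

    def select_frames(num_frames: int, interval: int) -> list[int]:
        """Select every `interval`-th frame in capturing order."""
        return list(range(0, num_frames, interval))

    # Example: three fish crossing from Ar1 to Ar2 in a 30 fps clip.
    tracks = [Track(10, 55), Track(40, 85), Track(100, 145)]
    interval = average_transit_frames(tracks)     # 45 frames per crossing
    selection = select_frames(num_frames=900, interval=interval)
    print(interval, selection[:5])                # 45 [0, 45, 90, 135, 180]

[Under these assumptions, displaying only every Nth frame, with N equal to the average crossing time in frames, would let an operator verify each counted object in at least one selection image without reviewing every frame of a 30 fps stream.]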
Regarding claim 3, the combination of Ueda_EP, Jia, Kikuchi, and Cherevatsky discloses the counting device according to claim 1,

wherein a plurality of intermittent display modes having different intervals of the capturing order of the frame images selected as the selection image is set, and (Cherevatsky, col. 20, lines 58-63, "one out of every predetermined number of captured frames may be selected for segment analysis. In one example, 2,000 images may be selected from the image stream, e.g. one out of every 100 images may be selected in a sequence of 200,000 sequentially numbered frames in the received image stream"). Note that: (1) the frames or images in the image stream can be selected every predefined number (e.g., 100) of captured frames in the capturing order; and (2) the predefined number can be drawn from a set of predefined numbers (e.g., 100, 200, …, 500), resulting in a plurality of intermittent display modes having different corresponding intervals of the capturing order of a frame image selected as the selection image.

wherein the at least one processor is further configured to execute the instructions to: receive information indicating the intermittent display mode alternatively selected from a plurality of the intermittent display modes as mode selection information. Note that: (1) a predefined number (e.g., 100) corresponding to the intermittent display mode, alternatively selected from a plurality of the intermittent display modes corresponding to the set of predefined numbers (e.g., 100, 200, …, 500), can be selected; and (2) the predefined number (e.g., 100) can be regarded as the mode selection information being received.

select the selection image in the intermittent display mode according to the mode selection information. Note that: the selection image can be selected or chosen from the captured frames in capturing order in the corresponding intermittent display mode according to the determined predefined number (i.e., the mode selection information) from a plurality of the intermittent display modes.

The motivation to combine Ueda_EP, Jia, Kikuchi, and Cherevatsky given in claim 1 is incorporated here.

Regarding claim 4, the combination of Ueda_EP, Jia, Kikuchi, and Cherevatsky discloses the counting device according to claim 1, wherein the detection result information is a frame-shaped figure surrounding an object detected as the moving object to be counted in the frame image. (Ueda_EP, para. [0030], "Examples of the method for specifying positions of fish include a method for specifying positions by enclosing the fish in rectangular frames and a method for specifying coordinates in the image 62"). Note that: the rectangular frames enclosing the fish are frame-shaped figures surrounding the fish or objects detected as the moving objects.

Claim 6, reciting "A counting method comprising:", corresponds to the device of claim 1. Therefore, claim 6 is rejected under the same rationale as claim 1. In addition, the combination of Ueda_EP, Jia, Kikuchi, and Cherevatsky discloses a counting method comprising: (Ueda_EP, Title, "FISH COUNTING SYSTEM, FISH COUNTING METHOD, AND PROGRAM").

Claim 7, reciting "A non-transitory recording medium that records a program for causing a computer to execute:", corresponds to the device of claim 1. Therefore, claim 7 is rejected under the same rationale as claim 1. However, the combination of Ueda_EP, Jia, and Kikuchi fails to disclose, but in the same art of computer graphics and image processing, Cherevatsky discloses a non-transitory recording medium that records a program for causing a computer to execute: (Cherevatsky, col. 34, line 64 – col. 35, line 4, "Embodiments of the invention may include an article such as a non-transitory computer or processor readable medium, or a computer or processor non-transitory storage medium, such as for example a memory, a disk drive, or a USB flash memory, encoding, including or storing instructions, e.g., computer-executable instructions, which, when executed by a processor or controller, carry out methods disclosed herein").

The combination of Ueda_EP, Jia, and Kikuchi, and Cherevatsky, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply a non-transitory computer-readable storage or recording medium storing instructions, as taught by Cherevatsky, to the combination of Ueda_EP, Jia, and Kikuchi. The motivation would have been "Embodiments of the invention may include an article such as a non-transitory computer or processor readable medium … which, when executed by a processor or controller, carry out methods disclosed herein" (Cherevatsky, col. 34, line 64 – col. 35, line 4). The suggestion for doing so would allow a non-transitory computer-readable medium storing instructions. Therefore, it would have been obvious to combine Ueda_EP, Jia, Kikuchi, and Cherevatsky.
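[Editor's note: A hypothetical sketch of the claim 3 mapping, where the selection interval is chosen from a set of predefined intermittent display modes. The mode table is an assumed example echoing Cherevatsky's one-out-of-every-100 illustration; none of these names come from the references.]

    # Hypothetical intermittent display modes: each mode is an interval of the
    # capturing order between selected frames (assumed values for illustration).
    DISPLAY_MODES = {"fine": 100, "medium": 200, "coarse": 500}

    def select_in_mode(num_frames: int, mode_selection: str) -> list[int]:
        """Select frames using the interval of the alternatively selected mode."""
        interval = DISPLAY_MODES[mode_selection]  # the mode selection information
        return list(range(0, num_frames, interval))

    print(select_in_mode(1000, "medium"))  # [0, 200, 400, 600, 800]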
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Ueda_EP, Jia, Kikuchi, and Cherevatsky, further in view of Athira et al. ("Underwater Object Detection model based on YOLOv3 architecture using Deep Neural Networks," 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS), Publication Date: 2021-03-19, pp. 40-45, hereinafter "Athira").

Regarding claim 5, the combination of Ueda_EP, Jia, Kikuchi, and Cherevatsky discloses the counting device according to claim 4, wherein the at least one processor is further configured to execute the instructions to: … the moving object to be counted that has been detected by the detection processing; and (Ueda_EP, paras. [0017]-[0019], "The image acquisition unit 30 shown in FIG. 1 acquires a plurality of images by capturing the image capturing region in which a fluid including the fish 1 flows on a time-series basis. The camera 2 captures a moving image at, for example, 30 fps … The counting unit 34 shown in FIG. 1 counts the number of fish based on the plurality of images. the counting unit 34 extracts fish by image processing from the individual images obtained in a time series, and assigns individual identification labels to the extracted fish to identify the individual fish based on locations of the fish in past images"). Note that: (1) a fish as a moving object is to be counted from a plurality of acquired frame images on a time-series basis (video at 30 fps) formulating a captured moving image; and (2) the fish is extracted and labelled for detection and counting.

However, the combination of Ueda_EP, Jia, Kikuchi, and Cherevatsky fails to disclose, but in the same art of computer graphics, Athira discloses:

output information indicating a confidence level that indicates a certainty of the moving object to be counted that has been detected by the detection processing, and (Athira, FIG. 5: "Object Detection Methodology and bounding box with objectness score", "Input underwater video", "YOLOv3 for detection"; page 44, right column, para. 3, "The output of YOLO consist of the confidence score and class ID of the corresponding object class present in the bounding box as shown in Fig. 7"). Note that: (1) the dynamic moving objects (fish) are detected and enclosed in bounding boxes using YOLOv3; and (2) the output of the algorithm includes the bounding boxes and confidence scores (e.g., 1.0).

vary a display mode of a frame-shaped drawing, which is the detection result information to be superimposed on the selection image, according to the confidence level. (Athira, page 44, left column, FIG. 7: labels of confidence values are attached to the bounding boxes to formulate a varied display mode of a frame-shaped drawing, which includes the detection results (i.e., bounding boxes and corresponding confidence scores) and is superimposed on the selection image (i.e., the image in FIG. 7)).

The combination of Ueda_EP, Jia, Kikuchi, and Cherevatsky, and Athira, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply dynamic object detection with bounding boxes and corresponding confidence scores, as taught by Athira, to the combination of Ueda_EP, Jia, Kikuchi, and Cherevatsky. The motivation would have been "This paper aims to propose a model to automatically detect underwater object using YOLOv3 architecture with darknet framework and deep learning" (Athira, Abstract). The suggestion for doing so would allow detection results to be displayed with bounding boxes and corresponding confidence scores. Therefore, it would have been obvious to combine Ueda_EP, Jia, Kikuchi, Cherevatsky, and Athira.
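[Editor's note: As a rough illustration of the claim 5 limitation of varying a frame-shaped drawing by confidence level, here is a minimal OpenCV sketch. The threshold, colors, and styling are assumptions for illustration only, not taken from Athira or Ueda_EP.]

    import cv2
    import numpy as np

    def draw_detection(image: np.ndarray, box: tuple[int, int, int, int],
                       confidence: float) -> None:
        """Superimpose a frame-shaped drawing whose display mode varies with
        the confidence level: a thick green frame when confident, a thin red
        frame otherwise (threshold and colors are illustrative assumptions)."""
        x, y, w, h = box
        confident = confidence >= 0.8
        color = (0, 255, 0) if confident else (0, 0, 255)   # BGR
        thickness = 3 if confident else 1
        cv2.rectangle(image, (x, y), (x + w, y + h), color, thickness)
        cv2.putText(image, f"{confidence:.2f}", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)

    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in selection image
    draw_detection(frame, (100, 120, 60, 40), confidence=0.93)
    draw_detection(frame, (300, 200, 50, 35), confidence=0.41)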
Response to Arguments

Applicant's arguments with respect to the claim rejections under 35 U.S.C. 103 have been fully considered, but they are not persuasive.

Applicant alleges, "Applicant respectfully traverses all of these rejections" (page 10, line 5). However, Examiner respectfully disagrees with the respective allegations as a whole because Applicant has not provided a clear basis or explanation for the statement "Applicant respectfully traverses all of these rejections". The arguments are not persuasive.

Applicant alleges, "Nevertheless, without conceding to the merits of the Examiner's rejections, the claims have been amended, as set forth above. For instance, independent claims 1, 6 and 7 have been amended, as set forth above, to incorporate features of claim 2, which has not been rejected under 35 U.S.C. § 103. Therefore, Applicant respectfully submits that claims 1, 6 and 7 satisfy the requirements of 35 U.S.C. § 103 for at least these reasons." (page 10, lines 6-11). As described in the previous Non-Final Office Action, the original claims 2 and 3 were rejected under 112(b) for indefiniteness due to the lack of a clearly defined scope, so a prior art rejection could not reasonably be applied to them. In addition, the arguments are respectfully moot because the corresponding newly amended limitations to claims 1, 6, and 7 have been addressed in the detailed 35 U.S.C. 103 claim rejections above. The arguments are not persuasive.

Applicant alleges, "Moreover, claims 3-5 are patentable at least by virtue of their dependency and by virtue of the additionally recited features therein." (page 10, line 5). However, Examiner respectfully disagrees with the respective allegations as a whole because claims 3-5 are rejected under the respective rationales above. The arguments are not persuasive.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIAO CHEN, whose telephone number is (703) 756-1199. The examiner can normally be reached M-F, 8am-5pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee M Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Biao Chen/
Patent Examiner, Art Unit 2611

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Jan 19, 2024 — Application Filed
Sep 16, 2025 — Non-Final Rejection (§103)
Dec 19, 2025 — Response Filed
Feb 24, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602873 — AUTOMATIC RETOPOLOGIZATION OF TEXTURED 3D MESHES
2y 5m to grant — Granted Apr 14, 2026

Patent 12597149 — APPARATUS, METHOD, AND COMPUTER PROGRAM FOR NETWORK COMMUNICATIONS
2y 5m to grant — Granted Apr 07, 2026

Patent 12562138 — METHOD AND SYSTEM FOR COMPENSATING ANTI-DIZZINESS PREDICTED IN ADVANCE
2y 5m to grant — Granted Feb 24, 2026

Patent 12561897 — COMPRESSED REPRESENTATIONS FOR APPEARANCE OF FIBER-BASED DIGITAL ASSETS
2y 5m to grant — Granted Feb 24, 2026

Patent 12548129 — APPARATUSES, METHODS AND COMPUTER PROGRAMMES FOR USE IN MODELLING IMAGES CAPTURED BY ANAMORPHIC LENSES
2y 5m to grant — Granted Feb 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 99% (+26.3%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 32 resolved cases by this examiner; grant probability derived from career allow rate.
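[Editor's note: A minimal sketch of how these projections could be derived from the career data above. The additive interview lift and the 99% cap are assumptions about the dashboard's methodology, not documented behavior.]

    # Hypothetical reconstruction of the dashboard's projection arithmetic.
    granted, resolved = 27, 32
    allow_rate = granted / resolved               # 0.84375 -> shown as 84%
    interview_lift = 0.263                        # lift on cases with interview
    with_interview = min(allow_rate + interview_lift, 0.99)  # capped -> 99%
    print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview")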
