Prosecution Insights
Last updated: April 19, 2026
Application No. 18/173,733

MEDICAL IMAGE PROCESSING APPARATUS, METHOD, AND PROGRAM

Status: Final Rejection (§103)
Filed: Feb 23, 2023
Examiner: BURLESON, MICHAEL L
Art Unit: 2681
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 3 (Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 4-5
Time to Grant: 2y 10m
Grant Probability with Interview: 68%

Examiner Intelligence

Career Allow Rate: 75% (365 granted / 489 resolved; +12.6% vs TC avg; above average)
Interview Lift: -6.1% (a minimal negative lift; based on resolved cases with interview)
Avg Prosecution: 2y 10m (typical timeline)
Total Applications: 525 across all art units (36 currently pending)

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 55.2% (+15.2% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 489 resolved cases.
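As an arithmetic sanity check of the table above (assuming each delta is expressed in percentage points relative to a Tech Center average):

```python
# Recover the implied Tech Center average from each statute's rate and its delta.
# Uses only the figures in the table above; treating each delta as percentage points
# against a Tech Center baseline is an assumption.
examiner_rate = {"§101": 12.1, "§103": 55.2, "§102": 21.8, "§112": 8.3}
delta_vs_tc = {"§101": -27.9, "§103": 15.2, "§102": -18.2, "§112": -31.7}

for statute, rate in examiner_rate.items():
    implied_tc = rate - delta_vs_tc[statute]
    print(f"{statute}: implied TC average = {implied_tc:.1f}%")
# Every statute implies the same ~40.0% baseline, so the deltas appear to be measured
# against a single Tech Center figure rather than statute-specific averages.
```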

Office Action

Final Rejection under 35 U.S.C. § 103, mailed March 20, 2026
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments (see Applicant's Remarks, pages 6-10, filed 12/11/25) with respect to the rejection of claims 1 and 5-11 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Okabe et al. (US 2018/0177446).

Regarding 35 USC 101, Applicant has amended the claims to further define a practical application of the claimed invention. Examiner agrees with Applicant. The rejection is withdrawn.

Regarding claim 1, Applicant states that the prior art of record fails to teach "erase the mark added to the region of attention from the medical image and maintain the mark added to the non-attention region of interest to generate a non-attention image" (Applicant's Remarks, pages 8-9). Examiner agrees with Applicant. Okabe et al. teaches that a message "Not gazed!" (mark) is displayed along with marks 52A and 52B on the display unit 24 (fig. 7 and paragraph 0092; note: "Not gazed" is read as a mark in a non-attention region). Display control is performed to enable the person who reads the image to easily identify only a candidate lesion portion that has not been gazed at among a plurality of candidate lesion portions. Display control unit 36 causes the display unit 24 not to display the mark indicating a candidate lesion portion that the person who reads the image has gazed at (for example, mark 52B) among the plurality of candidate lesion portions, and causes the display unit 24 to display only the mark indicating a candidate lesion portion that the person who reads the image has not gazed at (for example, mark 52A) (figs. 3 and 7 and paragraph 0097). Where a plurality of candidate lesion portions are present in the medical image 50 to be interpreted, the plurality of marks 52A and 52B are generated as the support information. Display control unit 36 sequentially changes the color or pattern of a region 56 that has been gazed at and that includes the candidate lesion portion that has been gazed at in the medical image to be interpreted (52B). The display control unit 36 performs interactive display control so as to imitate an eraser gradually erasing the region 56 that has been gazed at as the line of sight moves (paragraph 0098). In other words, marks 52A and 52B are read as areas of interest (candidate lesions, paragraph 0097), with "NOT GAZED" read as the non-attention region.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, 8, 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Hayashi et al. (US 2020/0234070) in view of Okabe et al. (US 2018/0177446).

Regarding claim 1, Hayashi et al. teaches a medical image processing apparatus (endoscope device 100, paragraph 0033) comprising at least one processor (processing unit 44C, paragraphs 0061-0064), wherein the processor is configured to:

input a medical image to a learning model which discriminates whether each of a plurality of pixels in the medical image corresponds to a plurality of regions of interest (the detection of the lesion site refers to finding a site suspected of a lesion, such as a malignant tumor or a benign tumor (lesion candidate region), from the captured image data, paragraph 0072; the identification of the lesion site refers to identifying the type, nature, or the like of the detected lesion site, such as whether or not the lesion site detected by the detection processing is malignant or benign, what kind of disease if malignant, or how much the degree of progress of the disease is, paragraph 0073; the above detection processing and identification processing are performed by an image recognition model (for example, a neural network, a support vector machine, or the like) having a hierarchical structure and a parameter for extracting a feature amount determined by machine learning, deep learning, or the like, paragraph 0074);

acquire a result of detecting the plurality of regions of interest from the learning model, wherein a mark is added to the medical image to surround each of the plurality of regions of interest (the processing unit 44C determines a region (hereinafter, referred to as an attention region) to which an attention is paid on the captured image data IM acquired by the captured image data acquisition unit 44A from the information on the coordinates of the display pixels intersecting with the operator's visual line on the display surface of the display device, paragraph 0078);

detect, on the medical image, a position at which an operation is performed by an input device and specify a region surrounding the detected position among the plurality of regions of interest as a region of attention that a user has paid attention to (the processing unit 44C determines a region (hereinafter, referred to as an attention region) to which an attention is paid on the captured image data IM acquired by the captured image data acquisition unit 44A from the information on the coordinates (position) of the display pixels intersecting with the operator's visual line on the display surface of the display device 7 output from the visual-line detection unit 44B, paragraph 0078);

specify a non-attention region of interest, which is a region of interest having a structure different from a structure related to the region of attention, among the plurality of regions of interest (processing unit 44C executes recognition processing using the above-described image recognition model only on the attention region of the captured image data IM determined in this way, and does not execute the above-described recognition processing on the region (hereinafter, referred to as a non-attention region) of the captured image data IM excluding the attention region, paragraph 0081; note: lesion site P1 is the region of interest and lesion sites P2-P4 would be non-attention regions of interest (paragraphs 0083-0085), which are of different sizes (fig. 4)); and

display the non-attention image comprising a result of specifying the non-attention region of interest on a display (the lesion sites P2 and P3 are not displayed in a highlighted manner or the identification result is not displayed, paragraph 0085; note: although the lesion sites P2 and P3 (non-attention regions of interest) are not highlighted, they are still displayed (fig. 5)).

Hayashi et al. fails to teach: erase the mark added to the region of attention from the medical image and maintain the mark added to the non-attention region of interest to generate a non-attention image.

Okabe et al. teaches erasing the mark added to the region of attention from the medical image and maintaining the mark added to the non-attention region of interest to generate a non-attention image (a message "Not gazed!" (mark) is displayed along with marks 52A and 52B on the display unit 24 (fig. 7 and paragraph 0092); note: "Not gazed" is read as a mark in a non-attention region. Display control is performed to enable the person who reads the image to easily identify only a candidate lesion portion that has not been gazed at among a plurality of candidate lesion portions. Display control unit 36 causes the display unit 24 not to display the mark indicating a candidate lesion portion that the person who reads the image has gazed at (for example, mark 52B) among the plurality of candidate lesion portions, and causes the display unit 24 to display only the mark indicating a candidate lesion portion that the person who reads the image has not gazed at (for example, mark 52A) (figs. 3 and 7 and paragraph 0097). Where a plurality of candidate lesion portions are present in the medical image 50 to be interpreted, the plurality of marks 52A and 52B are generated as the support information. Display control unit 36 sequentially changes the color or pattern of a region 56 that has been gazed at and that includes the candidate lesion portion that has been gazed at in the medical image to be interpreted (52B). The display control unit 36 performs interactive display control so as to imitate an eraser gradually erasing the region 56 that has been gazed at as the line of sight moves (paragraph 0098). In other words, marks 52A and 52B are read as areas of interest (candidate lesions, paragraph 0097), with "NOT GAZED" read as the non-attention region).

Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Hayashi et al. to include: erase a result of detecting a region of interest having the structure related to the region of attention among the regions of interest from the medical image to generate a non-attention image. The reason for doing so would be to accurately identify a particular region of interest for analysis.
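For orientation, the claim 1 language maps to a simple display pipeline: every detected region of interest is marked, the mark at the position the user operated on (the region of attention) is erased, and the remaining marks are kept to form the "non-attention image." A minimal sketch of that reading follows; the function and variable names are hypothetical, and the code is not drawn from Hayashi, Okabe, or the application itself.

```python
# Illustrative sketch of the claimed workflow (hypothetical names; not from the cited references).
import numpy as np

def specify_attention_region(regions, click_xy):
    """Return the region of interest whose bounding box surrounds the operated position, if any."""
    x, y = click_xy
    for region in regions:
        x0, y0, x1, y1 = region["bbox"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return region
    return None

def generate_non_attention_image(image, regions, click_xy, draw_mark):
    """Erase the mark on the attended region; keep marks on all other (non-attention) regions."""
    attention = specify_attention_region(regions, click_xy)
    non_attention_image = image.copy()                   # start from the unmarked medical image
    for region in regions:
        if attention is not None and region is attention:
            continue                                     # erase: do not redraw the attended mark
        draw_mark(non_attention_image, region["bbox"])   # maintain marks on non-attention regions
    return non_attention_image

# Example usage with dummy detector output (e.g., from a pixel-wise segmentation model).
def draw_box(img, bbox):
    x0, y0, x1, y1 = bbox
    img[y0, x0:x1] = img[y1 - 1, x0:x1] = 255            # top and bottom edges
    img[y0:y1, x0] = img[y0:y1, x1 - 1] = 255            # left and right edges

image = np.zeros((256, 256), dtype=np.uint8)
regions = [{"bbox": (20, 20, 80, 80)}, {"bbox": (150, 150, 210, 210)}]
result = generate_non_attention_image(image, regions, click_xy=(50, 50), draw_mark=draw_box)
# Only the second region's mark remains in `result`; the clicked (attended) region is unmarked.
```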
Regarding claim 6, Hayashi et al. teaches wherein the processor is configured to specify the region of attention based on a method of displaying the medical image during interpretation of the medical image (visual-line detection unit 44B acquires the visual line detection image data transmitted from the imaging device 8, and detects the visual line of the observer (an operator of the endoscope 1) directed to the display device 7, paragraph 0070; processing unit 44C performs recognition processing, which is the processing for detecting the lesion site from the captured image data and identifying the detected lesion site, on the captured image data acquired by the captured image data acquisition unit 44A, paragraph 0071).

Regarding claim 8, Hayashi et al. teaches wherein the region of interest is a region of interest for a plurality of types of diseases (identification of the lesion site refers to identifying the type, nature, or the like of the detected lesion site, such as whether or not the lesion site detected by the detection processing is malignant or benign, what kind of disease if malignant, or how much the degree of progress of the disease is, paragraph 0073).

Regarding claim 10, Hayashi et al. teaches a medical image processing method (endoscope device 100, paragraph 0033) comprising:

inputting a medical image to a learning model which discriminates whether each of a plurality of pixels in the medical image corresponds to a plurality of regions of interest (the detection of the lesion site refers to finding a site suspected of a lesion, such as a malignant tumor or a benign tumor (lesion candidate region), from the captured image data, paragraph 0072; the identification of the lesion site refers to identifying the type, nature, or the like of the detected lesion site, such as whether or not the lesion site detected by the detection processing is malignant or benign, what kind of disease if malignant, or how much the degree of progress of the disease is, paragraph 0073; the above detection processing and identification processing are performed by an image recognition model (for example, a neural network, a support vector machine, or the like) having a hierarchical structure and a parameter for extracting a feature amount determined by machine learning, deep learning, or the like, paragraph 0074);

acquiring a result of detecting the plurality of regions of interest from the learning model, wherein a mark is added to the medical image to surround each of the plurality of regions of interest (the processing unit 44C determines a region (hereinafter, referred to as an attention region) to which an attention is paid on the captured image data IM acquired by the captured image data acquisition unit 44A from the information on the coordinates of the display pixels intersecting with the operator's visual line on the display surface of the display device, paragraph 0078);

detecting, on the medical image, a position at which an operation is performed by an input device and specifying a region surrounding the detected position among the plurality of regions of interest as a region of attention that a user has paid attention to (the processing unit 44C determines a region (hereinafter, referred to as an attention region) to which an attention is paid on the captured image data IM acquired by the captured image data acquisition unit 44A, paragraph 0078);

specifying a non-attention region of interest, which is a region of interest having a structure different from a structure related to the region of attention, among the plurality of regions of interest (processing unit 44C executes recognition processing using the above-described image recognition model only on the attention region of the captured image data IM determined in this way, and does not execute the above-described recognition processing on the region (hereinafter, referred to as a non-attention region) of the captured image data IM excluding the attention region, paragraph 0081; note: lesion site P1 is the region of interest and lesion sites P2-P4 would be non-attention regions of interest (paragraphs 0083-0085), which are of different sizes (fig. 4)); and

displaying the non-attention image comprising a result of specifying the non-attention region of interest on a display (the lesion sites P2 and P3 are not displayed in a highlighted manner or the identification result is not displayed, paragraph 0085; note: although the lesion sites P2 and P3 (non-attention regions of interest) are not highlighted, they are still displayed (fig. 5)).

Hayashi et al. fails to teach: erasing the mark added to the region of attention from the medical image and maintaining the mark added to the non-attention region of interest to generate a non-attention image.

Okabe et al. teaches erasing the mark added to the region of attention from the medical image and maintaining the mark added to the non-attention region of interest to generate a non-attention image (a message "Not gazed!" (mark) is displayed along with marks 52A and 52B on the display unit 24 (fig. 7 and paragraph 0092); note: "Not gazed" is read as a mark in a non-attention region. Display control is performed to enable the person who reads the image to easily identify only a candidate lesion portion that has not been gazed at among a plurality of candidate lesion portions. Display control unit 36 causes the display unit 24 not to display the mark indicating a candidate lesion portion that the person who reads the image has gazed at (for example, mark 52B) among the plurality of candidate lesion portions, and causes the display unit 24 to display only the mark indicating a candidate lesion portion that the person who reads the image has not gazed at (for example, mark 52A) (figs. 3 and 7 and paragraph 0097). Where a plurality of candidate lesion portions are present in the medical image 50 to be interpreted, the plurality of marks 52A and 52B are generated as the support information. Display control unit 36 sequentially changes the color or pattern of a region 56 that has been gazed at and that includes the candidate lesion portion that has been gazed at in the medical image to be interpreted (52B). The display control unit 36 performs interactive display control so as to imitate an eraser gradually erasing the region 56 that has been gazed at as the line of sight moves (paragraph 0098). In other words, marks 52A and 52B are read as areas of interest (candidate lesions, paragraph 0097), with "NOT GAZED" read as the non-attention region).

Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Hayashi et al. to include: erasing the mark added to the region of attention from the medical image and maintaining the mark added to the non-attention region of interest to generate a non-attention image. The reason for doing so would be to accurately identify a particular region of interest for analysis.

Regarding claim 11, Hayashi et al. teaches a non-transitory computer-readable storage medium that stores a medical image processing program causing a computer to execute (non-transitory computer readable recording medium storing an inspection support program, paragraph 0016):

inputting a medical image to a learning model which discriminates whether each of a plurality of pixels in the medical image corresponds to a plurality of regions of interest (the detection of the lesion site refers to finding a site suspected of a lesion, such as a malignant tumor or a benign tumor (lesion candidate region), from the captured image data, paragraph 0072; the identification of the lesion site refers to identifying the type, nature, or the like of the detected lesion site, such as whether or not the lesion site detected by the detection processing is malignant or benign, what kind of disease if malignant, or how much the degree of progress of the disease is, paragraph 0073; the above detection processing and identification processing are performed by an image recognition model (for example, a neural network, a support vector machine, or the like) having a hierarchical structure and a parameter for extracting a feature amount determined by machine learning, deep learning, or the like, paragraph 0074);

acquiring a result of detecting the plurality of regions of interest from the learning model, wherein a mark is added to the medical image to surround each of the plurality of regions of interest (the processing unit 44C determines a region (hereinafter, referred to as an attention region) to which an attention is paid on the captured image data IM acquired by the captured image data acquisition unit 44A from the information on the coordinates of the display pixels intersecting with the operator's visual line on the display surface of the display device, paragraph 0078);

detecting, on the medical image, a position at which an operation is performed by an input device and specifying a region surrounding the detected position among the plurality of regions of interest as a region of attention that a user has paid attention to (the processing unit 44C determines a region (hereinafter, referred to as an attention region) to which an attention is paid on the captured image data IM acquired by the captured image data acquisition unit 44A, paragraph 0078);

specifying a non-attention region of interest, which is a region of interest having a structure different from a structure related to the region of attention, among the plurality of regions of interest (processing unit 44C executes recognition processing using the above-described image recognition model only on the attention region of the captured image data IM determined in this way, and does not execute the above-described recognition processing on the region (hereinafter, referred to as a non-attention region) of the captured image data IM excluding the attention region, paragraph 0081; note: lesion site P1 is the region of interest and lesion sites P2-P4 would be non-attention regions of interest (paragraphs 0083-0085), which are of different sizes (fig. 4)); and

displaying a result of specifying the non-attention region of interest on a display (the lesion sites P2 and P3 are not displayed in a highlighted manner or the identification result is not displayed, paragraph 0085; note: although the lesion sites P2 and P3 (non-attention regions of interest) are not highlighted, they are still displayed (fig. 5)).

Hayashi et al. fails to teach: erasing the mark added to the region of attention from the medical image and maintaining the mark added to the non-attention region of interest to generate a non-attention image.

Okabe et al. teaches erasing the mark added to the region of attention from the medical image and maintaining the mark added to the non-attention region of interest to generate a non-attention image (a message "Not gazed!" (mark) is displayed along with marks 52A and 52B on the display unit 24 (fig. 7 and paragraph 0092); note: "Not gazed" is read as a mark in a non-attention region. Display control is performed to enable the person who reads the image to easily identify only a candidate lesion portion that has not been gazed at among a plurality of candidate lesion portions. Display control unit 36 causes the display unit 24 not to display the mark indicating a candidate lesion portion that the person who reads the image has gazed at (for example, mark 52B) among the plurality of candidate lesion portions, and causes the display unit 24 to display only the mark indicating a candidate lesion portion that the person who reads the image has not gazed at (for example, mark 52A) (figs. 3 and 7 and paragraph 0097). Where a plurality of candidate lesion portions are present in the medical image 50 to be interpreted, the plurality of marks 52A and 52B are generated as the support information. Display control unit 36 sequentially changes the color or pattern of a region 56 that has been gazed at and that includes the candidate lesion portion that has been gazed at in the medical image to be interpreted (52B). The display control unit 36 performs interactive display control so as to imitate an eraser gradually erasing the region 56 that has been gazed at as the line of sight moves (paragraph 0098). In other words, marks 52A and 52B are read as areas of interest (candidate lesions, paragraph 0097), with "NOT GAZED" read as the non-attention region).

Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Hayashi et al. to include: erasing the mark added to the region of attention from the medical image and maintaining the mark added to the non-attention region of interest to generate a non-attention image. The reason for doing so would be to accurately identify a particular region of interest for analysis.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Hayashi et al. (US 2020/0234070) in view of Okabe et al. (US 2018/0177446), further in view of Nakatsugawa et al. (US 2020/0082931).

Regarding claim 5, Hayashi et al. in view of Okabe et al. teaches all of the limitations of claim 1. Hayashi et al. in view of Okabe et al. fails to teach wherein the processor is configured to specify the region of attention based on a document regarding the medical image. Nakatsugawa et al. teaches wherein the processor is configured to specify the region of attention based on a document regarding the medical image (in the interpretation report 50, the medical image 40 to be interpreted and various kinds of information regarding the medical image 40 are associated with each other; the various kinds of information regarding the medical image 40 include the type of the modality 12 used for imaging, the name of the patient 36 to be imaged, the patient ID, the imaging part or direction, imaging date and time, the name of an interpretation report creator, a creation date, and the like; in addition, a finding 64 (region of attention) on the medical image is also included in the various kinds of information regarding the medical image 40; a region to which attention is to be paid (region of interest) 60 is surrounded by an indicator 62 in the medical image 40, paragraph 0038). Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Hayashi et al. in view of Okabe et al. to include: wherein the processor is configured to specify the region of attention based on a document regarding the medical image. The reason for doing so would be to accurately identify a particular region of interest for analysis.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Hayashi et al. (US 2020/0234070) in view of Okabe et al. (US 2018/0177446), further in view of McGinnis et al. (US 2011/0206250).
Regarding claim 7, Hayashi et al. in view of Okabe et al. teaches all of the limitations of claim 1. Hayashi et al. in view of Okabe et al. fails to teach wherein the processor is configured to display a result of detecting the region of interest having the structure related to the region of attention, the region of interest of which a feature amount derived at a time of detection is equal to or greater than a predetermined threshold value. McGinnis et al. teaches wherein the processor is configured to display a result of detecting the region of interest having the structure related to the region of attention, the region of interest of which a feature amount derived at a time of detection is equal to or greater than a predetermined threshold value (an ROI may be annotated with an image mark if the polyp probability score (feature) computed for the ROI exceeds a threshold (predetermined threshold) determined as a function of a system operating point; a label that designates a specific class assignment for a region of interest may be provided, which may be particularly useful if classification information regarding more than one type of region of interest (e.g., suspicious polyps, suspicious stool) should be displayed (result of detecting the region of interest), paragraph 0106). Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Hayashi et al. in view of Okabe et al. to include: wherein the processor is configured to display a result of detecting the region of interest having the structure related to the region of attention, the region of interest of which a feature amount derived at a time of detection is equal to or greater than a predetermined threshold value. The reason for doing so would be to accurately identify a particular region of interest for analysis for various organs.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Hayashi et al. (US 2020/0234070) in view of Okabe et al. (US 2018/0177446), further in view of Hares et al. (US 2021/0224991).

Regarding claim 9, Hayashi et al. in view of Okabe et al. teaches all of the limitations of claim 1. Hayashi et al. in view of Okabe et al. fails to teach wherein the region of interest is a region of interest for a plurality of types of organs. Hares teaches wherein the region of interest is a region of interest for a plurality of types of organs (augmentations may be added to highlight areas of interest; such augmentations may indicate a single point of interest (such as an organ), or multiple points of interest (for example, multiple organs or multiple points on the same organ), paragraph 0118). Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Hayashi et al. in view of Okabe et al. to include: wherein the region of interest is a region of interest for a plurality of types of organs. The reason for doing so would be to accurately identify a particular region of interest for analysis for various organs.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication should be directed to Michael Burleson, whose telephone number is (571) 272-7460 and fax number is (571) 273-7460. The examiner can normally be reached Monday through Friday from 8:00 a.m. to 4:30 p.m. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at (571) 270-3438.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL BURLESON/
Michael Burleson
Patent Examiner, Art Unit 2681
March 20, 2026

/AKWASI M SARPONG/
SPE, Art Unit 2681
3/24/2026

Prosecution Timeline

Feb 23, 2023: Application Filed
Apr 26, 2025: Non-Final Rejection — §103
May 22, 2025: Interview Requested
May 28, 2025: Applicant Interview (Telephonic)
May 29, 2025: Examiner Interview Summary
Jul 09, 2025: Response Filed
Sep 16, 2025: Non-Final Rejection — §103
Nov 03, 2025: Interview Requested
Nov 10, 2025: Applicant Interview (Telephonic)
Nov 10, 2025: Examiner Interview Summary
Dec 11, 2025: Response Filed
Mar 20, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603965
PRINTING DEVICE SETTING EXPANDED REGION AND GENERATING PATCH CHART PRINT DATA BASED ON PIXELS IN EXPANDED REGION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12585826
DOCUMENT AUTHENTICATION USING ELECTROMAGNETIC SOURCES AND SENSORS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12566125
SEQUENCER FOCUS QUALITY METRICS AND FOCUS TRACKING FOR PERIODICALLY PATTERNED SURFACES
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561548
SYSTEM SIMULATING A DECISIONAL PROCESS IN A MAMMAL BRAIN ABOUT MOTIONS OF A VISUALLY OBSERVED BODY
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12562549
LIGHT EMITTING ELEMENT, LIGHT SOURCE DEVICE, DISPLAY DEVICE, HEAD-MOUNTED DISPLAY, AND BIOLOGICAL INFORMATION ACQUISITION APPARATUS
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 75%
With Interview: 68% (-6.1%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 489 resolved cases by this examiner. Grant probability derived from career allow rate.
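As a rough cross-check, the headline figures can be reproduced from the career data above under the assumption that grant probability tracks the career allow rate and that the interview-adjusted figure applies the -6.1 percentage-point interview lift; the dashboard's exact formula is not documented here.

```python
# Back-of-the-envelope check of the headline figures against the career data shown above.
# Assumption (not a documented formula): grant probability ~ career allow rate, and the
# "with interview" figure applies the -6.1 percentage-point interview lift.
granted, resolved = 365, 489

allow_rate_pct = 100 * granted / resolved   # 74.6% -> reported as 75%
with_interview_pct = allow_rate_pct - 6.1   # 68.5% -> reported as 68%

print(f"allow rate ~ {allow_rate_pct:.1f}%, with interview ~ {with_interview_pct:.1f}%")
```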
