Prosecution Insights
Last updated: April 19, 2026
Application No. 18/477,081

SYSTEM, METHODS, AND STORAGE MEDIUMS FOR RELIABLE URETEROSCOPES AND/OR FOR IMAGING

Non-Final OA: §101, §102, §103
Filed
Sep 28, 2023
Examiner
BURKE, TIONNA M
Art Unit
2178
Tech Center
2100 — Computer Architecture & Software
Assignee
The Brigham and Women's Hospital Inc.
OA Round
1 (Non-Final)
54%
Grant Probability
Moderate
1-2
OA Rounds
4y 9m
To Grant
73%
With Interview

Examiner Intelligence

Grants 54% of resolved cases
54%
Career Allow Rate
233 granted / 431 resolved
-0.9% vs TC avg
Strong +19% interview lift
+19.3%
Interview Lift
resolved cases with vs. without interview
Typical timeline
4y 9m
Avg Prosecution
46 currently pending
Career history
477
Total Applications
across all art units
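
For reference, the headline figures in this panel follow directly from the raw counts: 233 granted out of 431 resolved gives the 54% career allow rate, and adding the +19.3% interview lift yields the 73% with-interview figure shown in the projections. A minimal sketch of that arithmetic follows; the function names and the simple additive treatment of the lift are assumptions for illustration, not the tool's actual model.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, interview_lift: float) -> float:
    """Apply the interview lift as a simple additive adjustment, capped at 100%."""
    return min(base_rate + interview_lift, 100.0)

base = allow_rate(233, 431)            # ~54.1%, shown as 54%
adjusted = with_interview(base, 19.3)  # ~73.4%, shown as 73%
print(f"Career allow rate: {base:.1f}%")
print(f"With interview:    {adjusted:.1f}%")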

Statute-Specific Performance

§101
11.0%
-29.0% vs TC avg
§103
60.1%
+20.1% vs TC avg
§102
18.1%
-21.9% vs TC avg
§112
7.5%
-32.5% vs TC avg
Deltas are versus a Tech Center average estimate • Based on career data from 431 resolved cases
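
The "vs TC avg" figures above are simply the examiner's statute-specific rate minus the Tech Center average estimate (every rate/delta pair shown is consistent with a 40.0% estimate). A small sketch of that bookkeeping; the data layout is illustrative and only the numbers come from this page.

examiner_rates = {"§101": 11.0, "§103": 60.1, "§102": 18.1, "§112": 7.5}
tc_average_estimate = 40.0  # implied by each rate/delta pair shown above

for statute, rate in examiner_rates.items():
    delta = rate - tc_average_estimate
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")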

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Claims 23 and 24 are withdrawn from further consideration pursuant to 37 CFR 1.142(b), as being drawn to a nonelected invention, there being no allowable generic or linking claim. Applicant timely traversed the restriction (election) requirement in the reply filed on 11/18/25.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/285/23, 2/14/24, and 9/17/25 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 16, and 22 recite "obtain a three dimensional image of an object, target, or sample", "acquire positional information of an image capturing tool inserted in or into the object, target, or sample", "determine, based on the positional information of the image capturing tool and the relative position of the image capturing tool to the object, target, or sample, portions of the object, target, or sample that are a first portion and a second portion, where the first portion corresponds to a portion of the object, target, or sample where the image capturing tool has captured or inspected and the second portion corresponds to a portion of the object, target, or sample where the image capturing tool still is or remains to be captured or to be inspected such that the second portion is uncaptured or uninspected"; and "display, based on the acquired positional information, the first portion of the three dimensional image of the object, target, or sample with a first expression which is different from a second expression of the second portion of the three dimensional image".

The broadest reasonable interpretation of the step "determine, based on the positional information of the image capturing tool and the relative position of the image capturing tool to the object, target, or sample, portions of the object, target, or sample that are a first portion and a second portion, where the first portion corresponds to a portion of the object, target, or sample where the image capturing tool has captured or inspected and the second portion corresponds to a portion of the object, target, or sample where the image capturing tool still is or remains to be captured or to be inspected such that the second portion is uncaptured or uninspected" falls within the mental process grouping of abstract ideas because it covers concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. Specifically, this step recites determining, based on the positional data, which areas have been examined or unexamined, which may be practically performed in the human mind using observation, evaluation, judgment, and opinion.
For example, a user can determine, based on the positions where the tool has been located, which portions have been examined or unexamined by performing an evaluation of the positional data.

The limitations "obtain a three dimensional image of an object, target, or sample", "acquire positional information of an image capturing tool inserted in or into the object, target, or sample", and "display, based on the acquired positional information, the first portion of the three dimensional image of the object, target, or sample with a first expression which is different from a second expression of the second portion of the three dimensional image" are mere data gathering and output recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP 2106.05(g) ("whether the limitation is significant"). In addition, all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claim is directed to the judicial exception.

The additional elements "obtain a three dimensional image of an object, target, or sample", "acquire positional information of an image capturing tool inserted in or into the object, target, or sample", and "display, based on the acquired positional information, the first portion of the three dimensional image of the object, target, or sample with a first expression which is different from a second expression of the second portion of the three dimensional image" were all found to be insignificant extra-solution activity above, because they were determined to be insignificant limitations amounting to necessary data gathering and outputting. These elements amount to receiving data and displaying data, which are well-understood, routine, conventional activities. See MPEP 2106.05(d), subsection II. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. Claims 2-15 and 17-21 do not include additional elements that integrate the judicial exception into a practical application.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5, 7-9, and 13-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kyperountas, United States Patent Publication 20220071711.

Claim 1: Kyperountas discloses: An information processing apparatus comprising: one or more processors that operate to (see paragraph [0032]). Kyperountas teaches processors; obtain a three dimensional image of an object, target, or sample (see paragraph [0041]). Kyperountas teaches obtaining 3D images during a medical procedure; acquire positional information of an image capturing tool inserted in or into the object, target, or sample (see paragraph [0050]).
Kyperountas teaches acquiring positional information of the tool inserted into the body; determine, based on the positional information of the image capturing tool and the relative position of the image capturing tool to the object, target, or sample, portions of the object, target, or sample that are a first portion and a second portion, where the first portion corresponds to a portion of the object, target, or sample where the image capturing tool has captured or inspected and the second portion corresponds to a portion of the object, target, or sample where the image capturing tool still is or remains to be captured or to be inspected such that the second portion is uncaptured or uninspected (see paragraphs [0050]-[0056]). Kyperountas teaches determining, with the image capturing tool and the model, an area/target that has been examined and an area that has not been examined yet; and display, based on the acquired positional information, the first portion of the three dimensional image of the object, target, or sample with a first expression which is different from a second expression of the second portion of the three dimensional image (see paragraphs [0063] and [0066]). Kyperountas teaches displaying portions of the images showing what areas were and were not yet examined.

Claim 2: Kyperountas discloses: wherein the one or more processors further operate to display the second portion and/or the second expression for the second portion along with the first portion and the first expression (see paragraph [0066]). Kyperountas teaches a display displaying the second portion of the image that was not examined along with other portions.

Claim 3: Kyperountas discloses: wherein one or more of the following: (i) the object, target, or sample is one or more of the following: an anatomy, a kidney, a urinary system, or a portion of a kidney or urinary system; and/or (ii) CT data is obtained and/or used to segment the object, target, or sample (see paragraphs [0030] and [0056]). Kyperountas teaches anatomy of the body and CT data obtained to view the anatomy.

Claim 4: Kyperountas discloses: the first portion corresponds to a portion that the image capturing tool has captured or inspected in a Field-of-View of the image capturing tool, and the second portion corresponds to a portion that remains to be captured or inspected in the Field-of-View of the image capturing tool (see paragraphs [0012] and [0019]). Kyperountas teaches having an inspected area of the image in the Field of View and having a portion of the image in the field-of-view that has not been inspected yet.

Claim 5: Kyperountas discloses: wherein the captured or inspected first portion represents or corresponds to an overlap between an imaging Field-of-View (FOV) or a view model of the image capturing tool and the surfaces of the object, target, or sample being imaged (see paragraphs [0048] and [0106]). Kyperountas teaches the first portion corresponding to an overlap between the field-of-view and the surfaces being imaged.

Claim 7: Kyperountas discloses: receive a predetermined or set acceptable size, or a size within a predetermined or set range, of a missed or uninspected/uncaptured area, the missed or uninspected/uncaptured area being an area or portion that is not captured or inspected, or remains to be captured or inspected, by the image capturing tool (see paragraph [0054]).
Kyperountas teaches receiving a size of a region within a threshold of an unexamined area; and display a third portion of the three dimensional image of the object, sample, or target with a third expression which is different from both of the expressions of the first portion and the second portion of the three dimensional image, wherein the third portion corresponds to the missed or uninspected/uncaptured area of which the size is equal to or less than the predetermined or set acceptable size (see paragraphs [0054]-[0055]). Kyperountas teaches displaying a third portion that is unexamined but also not a region of interest based on the threshold size.

Claim 8: Kyperountas discloses: receive a predetermined or set acceptable percentage of a completion of a capturing or inspection of the object, target, or sample (see paragraph [0016]). Kyperountas teaches an indication of an amount that was examined vs. unexamined and determining whether an alert is sent based on the amount; and indicate a completion of the capturing or inspection of the object, target, or sample, in a case where the percentage of an area or portion captured or inspected by the image capturing tool is equal to or more than the predetermined or set acceptable percentage (see paragraph [0016]). Kyperountas determines whether the acceptable amount of examined area is received and whether an alert is to be sent.

Claim 9: Kyperountas discloses: store time information corresponding to a length of time that a particular portion or area is within the Field-of-View of the image capturing tool (see paragraphs [0080] and [0081]). Kyperountas teaches storing time stamps to indicate the amount of time spent in each area of the field of view; and display the three dimensional image of the anatomy with the first expression of the first portion after the image capturing tool has captured or inspected the first portion for a period of time indicated by the stored time information (see paragraphs [0080]-[0082]). Kyperountas teaches displaying the 3D image and model indicating the examined and unexamined portions and the corresponding time stamps.

Claim 13: Kyperountas discloses: wherein the one or more processors further operate to: acquire the positional information of the image capturing tool based on positional information detected by an electromagnetic sensor, the positional information including orientation and position or location information (see paragraph [0050]). Kyperountas teaches magnetometers, accelerometers, and/or the like, which can be used to estimate the position and/or direction of orientation of the medical instrument.

Claim 14: Kyperountas discloses: wherein the one or more processors further operate to: display, based on a depth map from or based on a captured image captured by the image capturing tool, the three dimensional image of the object, sample, or target (see paragraph [0053]). Kyperountas teaches displaying a depth map based on a captured image.

Claim 15: Kyperountas discloses: wherein the one or more processors further operate to: display, based on the acquired positional information, the first portion of the three dimensional image of the object, sample, or target with a first color being different from a second color of or used for the second portion of the three dimensional image (see paragraphs [0081] and [0082]). Kyperountas teaches displaying the first portion and the second portion with different colors.
Claims 16-21: Although Claims 16-21 are method claims, they are interpreted and rejected for the same reasons as the apparatus of Claims 1-4, 14, and 15.

Claim 22: Although Claim 22 is a non-transitory storage medium claim, it is interpreted and rejected for the same reasons as the apparatus of Claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kyperountas, in view of Tam et al., United States Patent Publication 20230190394 (hereinafter "Tam").

Claim 6: Kyperountas fails to teach a view model for the camera tool for viewing during the procedure. Tam discloses: wherein the view model is a cone or other geometric shape being used for the model, and the portion that the image capturing tool has captured or inspected includes inner surfaces of a kidney that are located within the FOV or the view model (see paragraph [0196]). Tam teaches a rectangular geometric shape for the model for the image capturing tool of the kidney. Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Kyperountas to include a view model for the image capturing tool for the purpose of defining a field of view for effectively capturing images, as taught by Tam.

Claim 12: Kyperountas fails to teach a shape sensor. Tam discloses: wherein the one or more processors further operate to: acquire the positional information of the image capturing tool based on a shape sensor of the image capturing tool, the positional information including orientation and position or location information (see paragraph [0152]). Tam teaches that a number of other input data can be used by the localization module. An instrument utilizing shape-sensing fiber can provide shape data that the localization module can use to determine the location and shape of the instrument. Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Kyperountas to include a shape sensor for the purpose of efficiently determining the shape of the image capturing tool, as taught by Tam.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kyperountas, in view of Kristensen et al., United States Patent Publication 20230346199 (hereinafter "Kristensen").

Claim 10: Kyperountas fails to teach that the time information is the accumulated duration of overlap between an imaging Field-of-View of the image capturing tool and the surfaces being imaged. Kristensen discloses: wherein the stored time information is the accumulated duration of overlap between an imaging Field-of-View of the image capturing tool and the surfaces of the object, target, or sample being imaged (see paragraphs [0027] and [0040]).
Kristensen teaches combining left and right images from overlapping fields of view of an image sensor from a stereo camera, and that the disclosed technology may also be used in embodiments which provide reconstructions based on images extending beyond individual fields of view of a stereo camera's image sensors, such as images captured over time. Time is stored for all images and fields of view. Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Kyperountas so that the time information is the accumulated duration of overlap between an imaging Field-of-View of the image capturing tool and the surfaces being imaged, for the purpose of effectively storing times for overlapping fields of view, as taught by Kristensen.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kyperountas, in view of Ayvali et al., United States Patent Publication 20210196399 (hereinafter "Ayvali").

Claim 11: Kyperountas fails to teach obtaining position and location information using a forward kinematics model. Ayvali discloses: wherein the one or more processors further operate to: acquire the positional information of the image capturing tool based on a shape of the image capturing tool calculated based on a forward kinematics model, the positional information including orientation and position or location information (see paragraph [0232]). Ayvali teaches using a forward kinematic model to obtain position information during a kidney procedure. Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Kyperountas to include a forward kinematics model for the purpose of effectively receiving position and location data during the procedure, as taught by Ayvali.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIONNA M BURKE, whose telephone number is (571) 270-7259. The examiner can normally be reached M-F 8a-4p. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TIONNA M BURKE/
Examiner, Art Unit 2178
1/8/26
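
For readers less familiar with the technology, the rejections above repeatedly turn on one operation: classifying regions of a 3D organ model as inspected (the "first portion") or not yet inspected (the "second portion") from the scope's tracked pose and its field of view, then rendering the two with different expressions or colors. The sketch below is a hypothetical illustration of that kind of computation only; the function names, the conical FOV model, and every parameter are assumptions and do not come from the application or the cited references.

import numpy as np

def inspected_mask(surface_points, poses, half_angle_deg=35.0, max_range=30.0):
    """Return a boolean mask: True for surface points seen from at least one pose.

    surface_points: (N, 3) array of points on the organ surface model.
    poses: iterable of (position (3,), unit view direction (3,)) tuples.
    """
    points = np.asarray(surface_points, dtype=float)
    seen = np.zeros(len(points), dtype=bool)
    cos_half = np.cos(np.radians(half_angle_deg))
    for position, direction in poses:
        offsets = points - np.asarray(position)
        dist = np.linalg.norm(offsets, axis=1)
        in_range = (dist > 1e-9) & (dist <= max_range)
        cos_angle = np.zeros(len(points))
        cos_angle[in_range] = (offsets[in_range] @ np.asarray(direction)) / dist[in_range]
        seen |= in_range & (cos_angle >= cos_half)  # point lies inside the FOV cone
    return seen

# First portion (inspected) drawn in one color, second portion (uninspected) in another.
points = np.random.rand(1000, 3) * 40.0
poses = [(np.array([20.0, 20.0, 0.0]), np.array([0.0, 0.0, 1.0]))]
mask = inspected_mask(points, poses)
colors = np.where(mask[:, None], [0, 255, 0], [128, 128, 128])  # green vs. gray
print(f"inspected: {mask.sum()} / {len(points)} surface points")

A real system would additionally need visibility (occlusion) testing against the surface mesh and a calibrated field-of-view model; the simple cone test above ignores both.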

Prosecution Timeline

Sep 28, 2023
Application Filed
Jan 08, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596470
GESTURE-BASED MENULESS COMMAND INTERFACE
2y 5m to grant • Granted Apr 07, 2026
Patent 12591731
SYSTEM AND METHOD FOR SELECTING RELEVANT CONTENT IN AN ENHANCED VIEW MODE
2y 5m to grant • Granted Mar 31, 2026
Patent 12572698
INFRASTRUCTURE METHODS AND SYSTEMS FOR EXTENDING CUSTOMER RELATIONSHIP MANAGEMENT PLATFORM
2y 5m to grant • Granted Mar 10, 2026
Patent 12564152
SYSTEM AND METHOD FOR MANAGEMENT OF SENSOR DATA BASED ON HIGH-VALUE DATA MODEL
2y 5m to grant • Granted Mar 03, 2026
Patent 12547823
DYNAMICALLY AND SELECTIVELY UPDATED SPREADSHEETS BASED ON KNOWLEDGE MONITORING AND NATURAL LANGUAGE PROCESSING
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
54%
Grant Probability
73%
With Interview (+19.3%)
4y 9m
Median Time to Grant
Low
PTA Risk
Based on 431 resolved cases by this examiner. Grant probability derived from career allow rate.
