Prosecution Insights
Last updated: April 19, 2026
Application No. 18/035,918

DEVICES AND EXPERT SYSTEMS FOR INTUBATION AND BRONCHOSCOPY

Non-Final OA §102, §103, §112
Filed
May 08, 2023
Examiner
LEUBECKER, JOHN P
Art Unit
3795
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Regents Of The University Of Minnesota
OA Round
1 (Non-Final)
75%
Grant Probability
Favorable
1-2
OA Rounds
3y 4m
To Grant
85%
With Interview

Examiner Intelligence

Grants 75% — above average
75%
Career Allow Rate
613 granted / 820 resolved
+4.8% vs TC avg
+10.6%
Interview Lift
Moderate lift: resolved cases with interview vs. without
Typical timeline
3y 4m
Avg Prosecution
31 currently pending
Career history
851
Total Applications
across all art units
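
For orientation, the headline figures in this card follow directly from the counts shown above. A minimal sketch of the arithmetic (an assumed reading, not the dashboard's actual implementation; it assumes "+4.8% vs TC avg" means percentage points):

# Assumed arithmetic behind the Examiner Intelligence card (illustrative only).
granted, resolved = 613, 820               # career counts shown above
allow_rate = granted / resolved            # ~0.748, displayed as "75%"
tc_average = allow_rate - 0.048            # assumes "+4.8% vs TC avg" is percentage points
print(f"Career allow rate: {allow_rate:.1%}")            # 74.8%
print(f"Implied Tech Center average: {tc_average:.1%}")  # 70.0%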

Statute-Specific Performance

§101
1.0%
-39.0% vs TC avg
§103
36.9%
-3.1% vs TC avg
§102
29.8%
-10.2% vs TC avg
§112
26.2%
-13.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 820 resolved cases

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the following must be shown or the feature(s) canceled from the claim(s):
a) “user interface” (claims 62, 78);
b) “the policy block or the processor relays and processes the set of actions to take through a convolution neural network layer prior to relaying the set of actions to take to the distal end via an IOT block” (claim 66) (see 112(a) rejection below);
c) “supporting body frame” and “transport components” (claim 68);
d) “members of a group including drone flight” and “members of a group including wheels” (claims 69 and 70) (see 112(a) rejection below).
No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claims 63 and 75-82 are objected to because of the following informalities:
a) as to claim 63, the phrase “the policy block is configured to receive the representations from the vision block deriving a set of actions to take” is grammatically awkward (suggested: “the policy block is configured to receive the representations from the vision block and derive a set of actions to take”);
b) as to claims 75-82, these claims are not consecutively ordered (see 37 C.F.R. 1.75) because claim 74 is not present. Thus, claim 75 should be renumbered as claim 74, claim 76 should be renumbered as claim 75, etc. Be aware that claims from which these renumbered claims depend might require renumbering also.
Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 66, 69-70, and 82 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 66 and 82 recite that “the policy block or the processor relays and processes the set of actions to take through a convolution neural network layer prior to relaying the set of actions to take to the distal end via an IOT block”. Applicant’s disclosure only describes that raw sensor data (e.g. surface maps) (204) is input into a convolutional neural network block (208) which in turn outputs the set of actions to take (214) via a policy and action evaluation block (210) (see Fig.2, and specification at page 9, lines 15-19 and page 12, lines 15-22). Such “set of actions to take” (214) is then relayed to the distal end via metal wires using the IOT chip (specification at page 11, lines 9-17). Thus, Applicant’s disclosure does not support that the derived “set of actions” are sent through another convolutional neural network layer prior to being sent to the robotic scope, or why this is required and what effect this will have.

Claims 69 and 70 appear to recite that the one or more members of the transport components (e.g. “drone flight” and “wheels”) are controlled “to navigate at least the distal end of the tip [of the robotic scope] based upon data from the plurality of sensors”. However, Applicant’s disclosure appears to describe the transport components as means for facilitating transportation of the robotic scope to a user location (e.g. hospital). Page 9, lines 20-25 of Applicant’s specification reads:

In some embodiments, the robotic scope 100 can be mounted on a supporting body frame. In order to facilitate transportation of the mounted device, various features or their combination can be incorporated: For example, the device can fly to user location like a drone or use different types of wheels to improve handling, speed and firmer ride, such as larger diameter wheels, tank treads or caterpillar tracks, large volume low pressure tires that include shock absorbent aspects, or low surface area contact three-dimensional wheels. (underlining added).

This offers no insight as to how “drone flight” or “wheels” are used to affect navigation of the distal end of the robotic scope into a body cavity (e.g. trachea) based on the sensor data. Instead, the drone or wheels appear to only offer a means for transporting the robotic scope to a location to be used. Furthermore, Applicant’s specification also fails to describe that the transport components (e.g. wheels or drone flight) are controlled “manually by a user” (claim 69) or “autonomously” (claim 70), or how this is done.
The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 67, 69-70 and 78 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

As to claim 67, the phrase “performs interference on a trained module” is indefinite as to meaning, particularly since it is unclear what “interference” is with respect to the claimed invention, or to what a “trained module” is referring. It appears that this should have been “inference on a trained model” and will be interpreted as such for the purposes of applying prior art.

As to claim 69, the phrase “one or more members of a group including drone flight” is indefinite since it is not clear how a “member” can be an action (“drone flight”). In other words, it is not clear what structure this encompasses. Furthermore, the phrase “one or more members…that are controlled manually by a user to navigate at least the distal end of the tip based upon data from the plurality of sensors to the user interface, which derive a set of action to take” is indefinite as to meaning. For instance, the phrases “to the user interface” and “which derive a set of actions to take” are unclear as to what is going “to the user interface” and what is deriving “a set of actions to take”.

As to claim 70, the phrase “one or more members of a group including drone flight” is indefinite since it is not clear how a “member” can be an action (“drone flight”). In other words, it is not clear what structure this encompasses. Furthermore, the phrase “one or more members…that are controlled autonomously to navigate at least the distal end of the tip based upon data from the plurality of sensors to the user interface, which derive a set of action to take” is indefinite as to meaning. For instance, the phrases “to the user interface” and “which derive a set of actions to take” are unclear as to what is going “to the user interface” and what is deriving “a set of action to take”.

As to claim 78, the term “the distal end” (line 3) lacks antecedent basis.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “vision block” in claims 63 and 79; “policy block” in claims 63 and 79; “autonomous agent” in claims 64 and 80; “policy and action evaluation block” in claims 65 and 81; “IOT block” in claims 66 and 82.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 62-65, 67-68, 71-73, 75-76, and 78-81 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yeung et al. (US 2018/0296281, hereinafter “Yeung”).

As to claim 62, Yeung discloses a device for endotracheal intubation or diagnosis comprising: a user interface (GUI, [0165]); a processor (one or more processors, [0108]); a robotic scope (robotic endoscope, [0094], Fig.1B) comprising a plurality of articulation joints (distal end of scope includes a bending section that can be actuated in one or more directions, [0104]) and a plurality of sensors (image sensors, [0099],[0007] and other sensors [0099]), and defining a proximal end and a distal end (distal end shown in Fig.1B, the opposite end is the proximal end), the distal end of the robotic scope having a tip that includes a subset of the plurality of sensors (at least one image sensor is located at tip, Figs.1A,3), and wherein the plurality of sensors are in communication with the processor (sensor data sent to steering control processor, [0114]) such that the processor relays information based upon data from the plurality of sensors to the user interface (user interface receives output sensor data and/or steering control data from processor, [0165]).

As to claim 63, Yeung further discloses an autonomous agent (processor functions) including a vision block and a policy block; the vision block is configured to receive data from the plurality of sensors and extract representations of structures and pathways around the distal end of the device (processors receive sensor data, including image data and proximity sensor data, which is representative of structures and pathways (e.g. walls of lumen, center of lumen, obstacles, etc.), e.g. [0084]), and the policy block is configured to receive the representations from the vision block deriving a set of actions to take (processor uses information from vision block to derive steering and translational movement control actions, e.g. [0084],[0097]).
As to claim 64, wherein the autonomous agent can use input from the vision block, including at least one of past airway data or current surface map data, to inform one or more sets of neural network parameters (at least the first input data (video images from image sensor) is input to a machine learning architecture, which can be “an artificial neural network, a recurrent neural network, or a recurrent convolutional neural network”, which can be trained with past or present data, such as topography data, [0007],[0086],[0115],[0137]).

As to claim 65, wherein a policy and action evaluation block receives as input the neural network parameters to compute a next set of policies and actions to take (“generating a steering control output signal based on an analysis of data derived from the first input data stream using machine learning architecture”, [0008]).

As to claim 67, wherein the policy block or the processor performs inference on a trained model (input and inferred output control signals on a trained dataset, e.g. [0086]).

As to claim 68, wherein the robotic scope is mounted on a supporting body frame and includes one or more transport components (steering control system, which includes an actuation unit 203, Fig.2, comprising one or more actuators, [0018], is operably coupled to the robotic scope and is configured to provide advancing movement ([0107]), the actuator(s) for advancing movement providing a frame with transport components).

As to claim 71, wherein the plurality of sensors includes a plurality of optical sensors, non-optical sensors, or visual sensors (at least two image (optical or visual) sensors and proximity (non-optical or optical) sensors, [0007],[0115],[0084]).

As to claim 72, wherein the plurality of sensors includes at least one non-optical sensor (various sensors in the main body such as touch proximity sensors, motion sensors, location sensors, [0113],[0210]) that receives information during navigation of the robotic scope (locational, positional and distance information, [0087],[0210]) to provide 3D imaging for a user to view on a display, processed by the processor (reconstruct a three-dimensional image map of the interior of the lumen, [0087],[0208]).

As to claim 73, Yeung discloses that the plurality of sensors includes one or more visual sensors either mounted on the side or tip of the robotic scope (first image sensor, [0007]) and further includes one or more cameras (second image sensor, [0007]), wherein each camera covers one of four possible sideway movements of the robotic scope (movement of the distal end of the robotic scope in a sideways direction will be reflected by movement of the image with respect to the field of view of the second image sensor in that same direction).

As to claim 75, wherein a plurality of articulation joints provide articulation along the distal portion of the device, which provides at least two degrees of freedom and control of movement of the tip during navigation of the robotic scope (the bending section is designed to perform omnidirectional bending, [0106], and thus provides at least two degrees of freedom).

As to claim 76, wherein at least about six degrees of freedom and control is provided (the bending section is designed to perform omnidirectional bending, [0106], and thus provides at least 6 degrees of freedom).
As to claim 78, Yeung discloses a method for use of a device for endotracheal intubation or diagnosis comprising: inserting at least a distal portion of a robotic scope into a cavity of a patient (robotic endoscope, [0094], Fig.1B, is inserted into a body cavity, [0094]); sensing by a plurality of sensors, positioned about the distal portion, device and patient information (image sensors, [0099],[0007] and other sensors [0099],[0110] positioned about the distal portion, Figs.3,4, acquire information about the device in the patient, [0084],[0115]); having a plurality of subsets of the plurality of sensors positioned at a tip of the distal end of the robotic scope (at least one image sensor and at least one proximity sensor can be located at the tip, [0110], Figs.3,4); and communicating, by the plurality of sensors, with a processor such that the processor relays the information based upon data from the plurality of sensors to a user interface (sensor data sent to steering control processor, [0114], and a user interface receives output sensor data and/or steering control data from processor, [0165]).

As to claim 79, Yeung further discloses an autonomous agent (processor functions) including a vision and a policy block; receiving, by the vision block, data from the plurality of sensors and extracting representations of structures and pathways around the distal end of the device; receiving, by the policy block, the representations from the vision block; and deriving a set of actions to take (processors receive sensor data, including image data and proximity sensor data, which is representative of structures and pathways, e.g. walls of lumen, center of lumen, obstacles, etc., e.g. [0084], and the processor uses information from the vision block to derive steering and translational movement control actions, e.g. [0084],[0097]).

As to claim 80, Yeung further discloses using, as input, from the vision block, the vision block including at least one of past airway data or current surface map data, to inform one or more sets of neural network parameters, by the autonomous agent (at least the first input data (video images from image sensor) is input to a machine learning architecture, which can be “an artificial neural network, a recurrent neural network, or a recurrent convolutional neural network”, which can be trained with past or present data, such as topography data, [0007],[0086],[0115],[0137]).

As to claim 81, Yeung further discloses receiving, by a policy and action evaluation block, the neural network parameters to compute a next set of policies and actions to take (“generating a steering control output signal based on an analysis of data derived from the first input data stream using machine learning architecture”, [0008]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 77 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yeung et al. (US 2018/0296281, hereinafter “Yeung”) in view of Belson et al. (US 2005/0020901, hereinafter “Belson”).

As to claim 77, Yeung shows in Fig.3 what might appear to be channels (e.g. working or fluid channels) in addition to the camera and illumination elements but fails to explicitly describe a separate channel. However, Belson teaches, in a similar steerable robotic scope device (see Fig.1), that flexible endoscopes used in surgical procedures in the body typically include separate channels ([0006],[0044]) that can be used for insertion of instruments, insufflation, irrigation, provision of air and water, or application of vacuum (any open channel consistent with any of these uses would also be capable of delivering oxygen through passive ventilation and suction). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included at least one channel in the robotic scope of Yeung to increase the functionality of the scope by enabling instruments, fluids or suction to be used in any procedure requiring such implements.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See reference cited on PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN P LEUBECKER whose telephone number is (571) 272-4769. The examiner can normally be reached M-F, 5:30-2:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anhtuan T Nguyen, can be reached at 571-272-4963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN P LEUBECKER/
Primary Examiner, Art Unit 3795

Prosecution Timeline

May 08, 2023
Application Filed
Sep 26, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599291
OPTICAL INSTRUMENT CONFIGURED TO SWITCH BETWEEN AN INTEGRATED AND EXTERNAL LIGHT SOURCE
2y 5m to grant Granted Apr 14, 2026
Patent 12593968
SYSTEMS AND METHODS FOR DATA COMMUNICATION VIA A LIGHT CABLE
2y 5m to grant Granted Apr 07, 2026
Patent 12582509
DEVICE AND METHOD FOR SUBGINGIVAL MEASUREMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12580077
SYSTEMS AND METHODS FOR IDENTIFYING THE NATURE OF DEFECTS IN MEDICAL SCOPES, AND DETERMINING SERVICING AND/OR FUTURE USE OF THE SCOPES
2y 5m to grant Granted Mar 17, 2026
Patent 12575717
DEVICE DELIVERY TOOLS AND SYSTEMS
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
85%
With Interview (+10.6%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 820 resolved cases by this examiner. Grant probability derived from career allow rate.
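
One plausible reading of how the "With Interview" figure combines the numbers above: the +10.6% interview lift is added, in percentage points, to the unrounded career allow rate. This is an assumption; the tool's exact methodology is not stated.

# Assumed derivation of the projection figures (illustrative only).
allow_rate = 613 / 820                     # career allow rate, ~74.8%
interview_lift = 0.106                     # +10.6 percentage points
with_interview = allow_rate + interview_lift
print(f"Grant probability: {allow_rate:.0%}")       # 75%
print(f"With interview:    {with_interview:.0%}")   # 85%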
