Prosecution Insights
Last updated: April 18, 2026
Application No. 18/136,548

ENDOSCOPIC DEVICE, FRAME IMAGE EXTRACTION METHOD, COMPUTER-READABLE MEDIUM, AND ENDOSCOPIC SYSTEM

Final Rejection — §102, §103
Filed: Apr 19, 2023
Examiner: GHIMIRE, SHANKAR RAJ
Art Unit: 3795
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Evident Corporation
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 76%, above average (+6.1% vs TC avg)
207 granted / 272 resolved
Interview Lift: strong, +19.4% (allowance rate with vs. without an interview, based on resolved cases with an interview)
Typical timeline: 3y 3m average prosecution; 46 applications currently pending
Career history: 318 total applications across all art units
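
As a rough illustration of how the career figures above are typically derived, the sketch below recomputes the allow rate, the delta against the Tech Center average, and the interview lift from a list of resolved cases. The field names, the 70% Tech Center baseline, and the example data are assumptions for illustration, not values or methods documented by this report.

    # Minimal sketch: recomputing the career allow rate, delta vs. Tech Center
    # average, and interview lift from resolved-case records.
    # Field names, the 70% TC baseline, and the example docket are hypothetical.

    def examiner_stats(resolved_cases, tc_avg_allow_rate=0.70):
        allow_rate = sum(c["granted"] for c in resolved_cases) / len(resolved_cases)

        def grant_rate(cases):
            return sum(c["granted"] for c in cases) / len(cases) if cases else 0.0

        with_iv = [c for c in resolved_cases if c["had_interview"]]
        without_iv = [c for c in resolved_cases if not c["had_interview"]]

        return {
            "career_allow_rate": allow_rate,              # report shows 207 / 272, about 76%
            "vs_tc_avg": allow_rate - tc_avg_allow_rate,  # report shows +6.1%
            "interview_lift": grant_rate(with_iv) - grant_rate(without_iv),  # report shows +19.4%
        }

    # Tiny synthetic docket (not the examiner's real data):
    cases = ([{"granted": True, "had_interview": True}] * 24
             + [{"granted": False, "had_interview": True}] * 1
             + [{"granted": True, "had_interview": False}] * 52
             + [{"granted": False, "had_interview": False}] * 23)
    print(examiner_stats(cases))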

Statute-Specific Performance

§101: 1.3% (-38.7% vs TC avg)
§103: 44.3% (+4.3% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 24.9% (-15.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 272 resolved cases
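
One reading of the deltas above: each is the per-statute figure minus a flat Tech Center estimate of roughly 40% (for example, 44.3% - 4.3% = 40% and 1.3% + 38.7% = 40%). The sketch below recomputes per-statute rates and deltas that way. The counting method (share of office actions citing each statute), the field layout, and the 40% baseline are assumptions about how such a breakdown could be built, not documented behavior of this report.

    # Hedged sketch: per-statute rejection rates and deltas vs. a Tech Center
    # baseline. The counting method and the flat 0.40 baseline are assumptions.
    from collections import Counter

    def statute_breakdown(office_actions, tc_baseline=0.40):
        """office_actions: list of sets of statutes cited per action, e.g. {"102", "103"}."""
        counts = Counter(statute for oa in office_actions for statute in oa)
        n = len(office_actions)
        return {
            statute: {
                "rate": counts[statute] / n,
                "vs_tc_avg": counts[statute] / n - tc_baseline,
            }
            for statute in sorted(counts)
        }

    # Hypothetical data: 3 of 4 actions cite §103, 1 cites §102.
    print(statute_breakdown([{"103"}, {"103"}, {"102", "103"}, set()]))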

Office Action

§102, §103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Response to Amendment The amendment filed on 03/09/2026 has been entered. Claims 1-11, and 14 are pending. Applicant’s amendment to the claims have overcome objections previously set forth in the Non-Final Office Action notified on 12/09/2025. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. 
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: operation receiving unit, recited in claims 1-2, 5, 9, and 14. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claim(s) 1, 3, 5-11, 14 is/are rejected under 35 U.S.C. 102 as being anticipated by Taniguchi (US 20140303435). Regarding claim 1, Taniguchi discloses an endoscopic device (FIGS. 2-3) comprising: an insertion portion configured to be (endoscope 2; FIGS. 2-3) inserted into a test subject (the entirety of the endoscope can be inserted into a subject) and includes an imaging element (imaging unit 21); an operation receiving unit (receiving unit 3; FIG. 2) configured to receive operations from a user (The receiving unit 3 includes operating unit 35 for inputting various setting information and the like by a user. Para [0047]); and a processor (signal processing apparatus 5 that receives image signal from signal processing unit 23; FIG. 
2), wherein the processor, during recording processing of a moving image (After being swallowed by the subject 10, the capsule endoscope 2 sequentially captures images of living body sites (an esophagus, a stomach, a small intestine, a large intestine, and the like) at predetermined time intervals (for example, 0.5 second time intervals) while moving inside the digestive tract of the subject 10; para [0040]) including a plurality of frame images generated on a basis of an imaging signal output from the imaging element, among the plurality of frame images, adds, as a tag (comparing unit 54e adds tags/flags/annotation; para [0075]), information regarding an operation received from a user to a frame image corresponding to a timing when the operation receiving unit receives the operation (A flag - “a careful observation flag” is added based on the imaging time periods for a series of captured images; para [0088], [0094], [0105]; User inputs various setting information; para [0047]; Each of the items/labels added to the images may be set by the user to be customized. Para [0173]), and adds, as a tag, information regarding a specific feature image to a frame image recognized as the specific feature image (A predetermined flag is added based on a reference value; para [0075]), and extracts a frame image from among the plurality of frame images included in the moving image on a basis of at least one type of tag among two or more types of tags selected from among a plurality of types of the tags (The display of the images is done based on the predetermined flag in a format for attracting the user's attention. Para [0076]). Regarding claim 3, Taniguchi discloses wherein the specific feature image is determined on a basis of a content of a tag related to the specific feature image added (The display of the images is done based on the predetermined flag in a format for attracting the user's attention. Para [0076]) by the processor to a frame image included in a moving image recorded by recording processing performed previously with respect to a test subject of a same type. Regarding claim 5, Taniguchi discloses wherein the two or more types of tags are selected according to a tag selection operation received by the operation receiving unit (“A careful observation flag” is added; Para [0088]; A predetermined flag is added based on a reference value; para [0075]; A parameter indicating an abnormal site (a region where no villi are present, a region where villi are raised, or a region where a form of the villi has changed (been enlarged or the like)) may be calculated and an image having the parameter within a predetermined range may be extracted as the feature image. Para [0073]). Regarding claim 6, Taniguchi discloses a display device that displays tags added to a frame image included in the moving image, wherein the tag selection operation is an operation of selecting the two or more types of tags from the tags displayed by the display device (The display of the images done based on the predetermined flag in a format for attracting the user's attention. Para [0076]). Regarding claim 7, Taniguchi discloses wherein the display device further displays a time bar (time bar d7; Para [0093]) of the moving image and identifiably displays a tag added to a frame image included in the moving image on the time bar (On the time bar d7, areas d10 and d11 corresponding to the images are added with the careful observation flag being displayed to be distinguishable from other areas. FIG. 6; Para [0094]). 
Regarding claim 8, Taniguchi discloses a display device that displays a time bar of the moving image and identifiably displays a tag added to a frame image included in the moving image on the time bar (FIG. 6; Para [0094]), wherein the tag selection operation is an operation of selecting the two or more types of tags by region designation from among the tags identifiably displayed on the time bar displayed by the display device (Tags are identifiably displayed on the time bar displayed by the display device. Para [0093]-[0094]). Regarding claim 9, Taniguchi discloses to wherein some types of tags among the two or more types of tags are selected in advance (“A careful observation flag” is added in advance; an abnormal site (a region where no villi are present, a region where villi are raised, or a region where a form of the villi has changed (been enlarged or the like)) are calculated in advanced based on the image; Para [0073]), and another type of tag is selected according to a tag selection operation received by the operation receiving unit (A user is capable of selecting a feature image; The image selection unit 54g receives input of a selection signal corresponding to manipulation of the user using the input unit 51, and selects an image corresponding to the selection signal from the present image group and adds a selection flag thereto. para [0149]). Regarding claim 10, Taniguchi discloses wherein the processor extracts a corresponding section in the moving image for each of the two or more types of tags, and performs the extraction on a basis of the section (The display of the images done based on the predetermined flag in a format of attracting the user's attention. Para [0076]; image extraction unit 54c may extract, from a present image group, a present image corresponding to the past image added with the abnormal label. Para [0178]). Regarding claim 11, Taniguchi discloses a display device that displays a frame image extracted by the processor (The display of the images done based on the predetermined flag in a format of attracting the user's attention. Para [0076], [0180], [0181]). Regarding claim 14, Taniguchi discloses an endoscopic system (System of FIG. 1) comprising: an endoscopic device (endoscope 2; FIGS. 2-3); and a control device (control unit 56 in image processing apparatus 5; FIG. 3), wherein the endoscopic device includes: an insertion portion configured to be (endoscope 2 is configured to be entirely inserted into a subject; FIG. 2) inserted into a test subject and includes an imaging element (Imaging unit 21), and an operation receiving unit (signal processing unit 23 receives an operation, the operation has not been defined. 
) configured to receive operations from a user, and the control device, during recording processing of a moving image (After being swallowed by the subject 10, the capsule endoscope 2 sequentially captures images of living body sites (an esophagus, a stomach, a small intestine, a large intestine, and the like) at predetermined time intervals (for example, 0.5 second time intervals) while moving inside the digestive tract of the subject 10 by peristaltic movement or the like of organs; para [0040]) including a plurality of frame images generated on a basis of an imaging signal output from the imaging element, among the plurality of frame images, adds, as a tag (comparing unit 54e adds tags/flags/annotation; para [0075]), information regarding an operation received from a user to a frame image corresponding to a timing when the operation receiving unit receives the operation (A flag, “a careful observation flag” is added based on the imaging time periods to a series of captured images; para [0088], [0094], [0105]; User inputs various setting information; para [0047]; Each of the items/labels added to the images may be set by the user to be customized. Para [0173]; operating unit 35; FIG. 2), and adds, as a tag, information regarding a specific feature image to a frame image recognized as the specific feature image (A predetermined flag is added based on a reference value; para [0075]), and extracts a frame image from among the plurality of frame images included in the moving image on a basis of at least one type of tag among two or more types of tags selected from among a plurality of types of the tags (The display of the images is done based on the predetermined flag in a format of attracting the user's attention. Para [0076]). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Taniguchi (US 20140303435) in view of Kwon (US 20180376072). Regarding claim 2, Taniguchi does not expressly disclose wherein the specific feature image is determined according to a feature image selection operation received by the operation receiving unit before the recording processing of the moving image. 
Kwon is directed to an electronic device and method for guiding image capturing (abstract) and teaches wherein the specific feature image is determined according to a feature image selection operation received by the operation receiving unit before the recording processing of the moving image (Providing, through the display, a first indicator representing a reference for a size of the interest object before obtaining another image; para [0185]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Taniguchi in accordance with the teaching of Kwon to perform feature determination before imaging/recording so that memory use could be optimized. Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Taniguchi (US 20140303435) in view of Shintani (WO 2023013080). Regarding claim 4, Taniguchi does not expressly disclose wherein the specific feature image is obtained by machine learning of a feature of a frame image to which a specific type of a tag is added, included in a moving image recorded by recording processing performed previously with respect to a test subject of a same type. Shintani is directed to an annotation assistance method (abstract) and teaches wherein the specific feature image is obtained by machine learning of a feature of a frame image to which a specific type of a tag is added (annotation is added and learned by machine learning; The learning unit 51 can be configured by, for example, a computer system capable of deep learning. The learning unit 51 is provided with the annotation candidate image frame group G to which the annotation is applied from the recording unit 31 as teacher data. The learning unit 51 generates an inference model for detecting an object by learning using teacher data. The learning unit 51 can output the generated inference model.), included in a moving image recorded by recording processing performed previously with respect to a test subject of a same type (By machine learning, the network is capable of assigning and outputting annotation candidate image; FIG. 15; acquiring continuously captured video that comprises a plurality of frames, detecting frames that include an image of a specific target object from among the frames of the acquired video, grouping a series of detected frames as an annotation candidate image frame group, displaying at least one of a start frame and an end frame of the grouped series of frames; abstract; the learning requesting section 26 supplies the annotation candidate image frame group G to which the annotation is applied to the learning section 51 to request learning.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Taniguchi to include a machine learning system in accordance with the teaching of Shintani so that machine learning could be applied to image extraction. Response to Arguments Applicant’s arguments submitted on 03/09/2026 have been fully considered. However, they are not persuasive for the reasons stated below. On page 7, lines 11-21, of the argument/remarks, applicant argues that the “operations from a user” recited in claim 1 is not the same thing as receiving image data and related information. The Examiner respectfully disagrees. Taniguchi discloses that the receiving unit 3 includes operating unit 35 for inputting various setting information and the like. See para [0047]. 
Further, in para [0173], Taniguchi teaches that the labels added to the images may be set by the user to be customized. The labels can be distinguishable landmarks such as an entrance of a stomach, a pylorus, a duodenal bulb, etc. These labels can correspond to a timing when the endoscope travels and the user inputs the various setting information. Accordingly, applicant’s arguments are not persuasive at this time. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANKAR R GHIMIRE whose telephone number is (571)272-0515. The examiner can normally be reached 8 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anhtuan Nguyen, can be reached at 571-272-4963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SHANKAR RAJ GHIMIRE/Examiner, Art Unit 3795 /ANH TUAN T NGUYEN/Supervisory Patent Examiner, Art Unit 3795 4/6/26
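
For orientation only, the tag-and-extract flow recited in independent claim 1 and mapped in the office action above (tag frames during recording with user-operation tags and feature-image tags, then extract frames by selected tag types) can be sketched as follows. This is a hypothetical illustration, not the claimed device, Taniguchi's system, or any actual implementation; every name in it is invented.

    # Illustrative sketch of a claim-1 style flow: tag frames during recording,
    # then extract frames whose tags match user-selected tag types.
    # Hypothetical code; not the claimed implementation or the cited prior art.
    from dataclasses import dataclass, field

    @dataclass
    class Frame:
        index: int
        tags: set = field(default_factory=set)

    def record(frames, operation_timestamps, feature_detector):
        """Add an operation tag at frames where a user operation was received,
        and a feature tag at frames recognized as a specific feature image."""
        for f in frames:
            if f.index in operation_timestamps:
                f.tags.add("user_operation")
            if feature_detector(f):
                f.tags.add("feature_image")
        return frames

    def extract(frames, selected_tag_types):
        """Extract frames carrying at least one of the selected tag types."""
        return [f for f in frames if f.tags & set(selected_tag_types)]

    # Usage example (synthetic):
    frames = [Frame(i) for i in range(10)]
    tagged = record(frames, operation_timestamps={3, 7},
                    feature_detector=lambda f: f.index == 5)
    print([f.index for f in extract(tagged, ["user_operation", "feature_image"])])  # [3, 5, 7]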

Prosecution Timeline

Apr 19, 2023
Application Filed
Nov 29, 2025
Non-Final Rejection — §102, §103
Mar 09, 2026
Response Filed
Apr 04, 2026
Final Rejection — §102, §103 (current)
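
Taking the Apr 04, 2026 final rejection date above as the mailing date, the three-month shortened statutory period and six-month absolute limit stated in the action work out roughly as below. This is a sketch only; it ignores weekend and federal-holiday rollover and any advisory-action adjustments.

    # Rough reply-deadline arithmetic for the Apr 04, 2026 final rejection.
    # Assumes the timeline date is the mailing date; ignores weekend/holiday rollover.
    from datetime import date

    def add_months(d, months):
        m = d.month - 1 + months
        y, m = d.year + m // 12, m % 12 + 1
        try:
            return date(y, m, d.day)
        except ValueError:          # crude clamp for short months
            return date(y, m, 28)

    mailed = date(2026, 4, 4)
    shortened_statutory_period = add_months(mailed, 3)  # Jul 04, 2026
    absolute_statutory_limit = add_months(mailed, 6)    # Oct 04, 2026
    print(shortened_statutory_period, absolute_statutory_limit)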

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569118
ENDOSCOPE SYSTEM AND PACKAGING MATERIALS FOR ENDOSCOPE
2y 5m to grant • Granted Mar 10, 2026
Patent 12558180
METHOD FOR CONTROLLING A MOVEMENT OF A MEDICAL DEVICE IN A MAGNETIC FIELD
2y 5m to grant • Granted Feb 24, 2026
Patent 12557973
OPTICAL UNIT, IMAGE PICKUP UNIT, AND ENDOSCOPE
2y 5m to grant • Granted Feb 24, 2026
Patent 12557976
DEVICES AND METHODS FOR TREATMENT OF BODY LUMENS
2y 5m to grant • Granted Feb 24, 2026
Patent 12533014
Disposable Introducer for Advancing an Elongate Member into a Tubular Structure
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 96% (+19.4%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 272 resolved cases by this examiner. Grant probability derived from career allow rate.
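
The "With Interview" figure reads as the career allow rate plus the interview lift, rounded (76% + 19.4% is about 96%). A minimal sketch of that assumed derivation; the additive adjustment is an assumption about how the report combines its numbers, not a documented formula.

    # Sketch of how the projections above could follow from the career stats.
    # The additive interview adjustment is an assumption, not a documented method.
    career_allow_rate = 207 / 272       # about 0.761, shown as 76%
    interview_lift = 0.194              # +19.4 percentage points

    grant_probability = round(career_allow_rate * 100)                   # 76
    with_interview = round((career_allow_rate + interview_lift) * 100)   # 96
    print(grant_probability, with_interview)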
