Prosecution Insights
Last updated: April 19, 2026
Application No. 18/618,565

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, ENDOSCOPE SYSTEM, AND REPORT CREATION SUPPORT DEVICE

Status: Final Rejection (§101, §103)
Filed: Mar 27, 2024
Examiner: SOREY, ROBERT A
Art Unit: 3682
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fujifilm Corporation
OA Round: 2 (Final)

Grant Probability: 48% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 2m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 48% — grants 48% of resolved cases (220 granted / 456 resolved; -3.8% vs TC avg)
Interview Lift: +45.8% — allowance rate among resolved cases with an interview vs. without (a strong lift)
Typical Timeline: 4y 2m average prosecution; 25 applications currently pending
Career History: 481 total applications across all art units
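The headline figures above can be sanity-checked from the published aggregates. A minimal sketch follows; note that the per-case interview split is not in this report, so the with/without counts below are hypothetical, chosen only so the totals still sum to 220/456 and the lift lands near the published +45.8%. Only the 220-granted/456-resolved totals come from the page.

```python
# Recompute the career allow rate from the published aggregates.
granted, resolved = 220, 456
print(f"Career allow rate: {granted / resolved:.1%}")  # 48.2% -> shown as "48%"

# "Interview lift" is read here as the difference in allowance rate between
# resolved cases with an examiner interview and those without. This split is
# HYPOTHETICAL (not in the report); it merely reproduces a lift near +45.8%.
granted_with, resolved_with = 104, 128        # hypothetical
granted_without, resolved_without = 116, 328  # hypothetical

lift = granted_with / resolved_with - granted_without / resolved_without
print(f"Interview lift: {lift:+.1%}")  # ~ +45.9%
```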

Statute-Specific Performance

§101: 30.9% (-9.1% vs TC avg)
§103: 35.8% (-4.2% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 456 resolved cases.
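One detail worth noticing: every row implies the same Tech Center baseline. A quick check, assuming the "vs TC avg" figures are simple differences between the examiner's per-statute rate and the TC estimate:

```python
# Each (rate, delta) pair comes straight from the table above.
rows = {"§101": (30.9, -9.1), "§103": (35.8, -4.2),
        "§102": (8.4, -31.6), "§112": (20.4, -19.6)}
for statute, (rate, delta) in rows.items():
    # rate = tc_avg + delta  =>  tc_avg = rate - delta
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
# All four rows recover the same ~40.0% Tech Center average estimate.
```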

Office Action

Grounds: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

In the amendment filed 11/26/2025 the following occurred: Claims 1-13, 15-17, 19-22, 24-26, 28-32, and 34 were amended. Claims 1-34 are presented for examination.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-34 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-34 are drawn to an information processing apparatus, an endoscope system, and an information processing method, which are statutory categories of invention (Step 1: YES).

Independent claim 1 recites: acquire images captured in chronological order; cause to display the acquired images in chronological order; input the acquired images to a plurality of recognizers in chronological order; detect a recognizer that has output a specific recognition result, from among the plurality of recognizers; upon detection of the recognizer that has output the specific recognition result, cause to display options for an item corresponding to the detected recognizer; and accept an input of selection for the displayed options. Independent claim 34 recites: acquiring images captured in chronological order; causing display of the acquired images in chronological order; inputting the acquired images to a plurality of recognizers in chronological order; detecting a recognizer that has output a specific recognition result, from among the plurality of recognizers; upon detection of the recognizer that has output the specific recognition result, causing display of options for an item corresponding to the detected recognizer; and accepting an input of selection for the displayed options. The respective dependent claims 2-33, but for the inclusion of the additional elements specifically addressed below, provide recitations further limiting the invention of the independent claim(s).

The recited limitations, as drafted, under their broadest reasonable interpretation, cover certain methods of organizing human activity, as reflected in the specification, which states that the “present invention…particularly relates to…a report creation support device which process information on an examination by an endoscope” (see: specification paragraph 1). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions between people, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. The present claims cover certain methods of organizing human activity because they address a problem where “it is necessary to input the information on the site, disease name, characteristics, and the like collectively, it takes time to input, and there is a disadvantage that it is forced to interrupt the examination” (see: specification paragraph 4). The recitations above address this problem by providing “a report creation support device which can efficiently input information necessary for generating a report” (see: specification paragraph 5).
Accordingly, the claims recite an abstract idea(s) (Step 2A Prong One: YES).

This judicial exception is not integrated into a practical application. The claims are abstract but for the inclusion of additional elements including “a first processor, wherein the first processor is configured to:…a first display unit…the first display unit…” (claim 1), “the first processor is configured to…” (claims 2-4, 6, 8-11, 15, 28, and 30), “a plurality of input devices” (claim 2), “from a plurality of input devices…at least one input device…from the plurality of input devices…” (claim 3), “the first processor is configured to…cause the first display unit to…” (claims 4-10, 16-17, 19-22, 24-26, and 29), “the first processor is configured to…the first display unit…” (claim 15), “the first display unit, and cause the first display unit to…” (claim 17), “the first display unit…” (claim 28), “a second processor, wherein the second processor is configured to: cause a second display unit to…” (claim 31), “the second processor is configured to…” (claim 32), “an input device” (claim 33), and “a first display unit…the first display unit…” (claim 34). These additional elements are recited at a high level of generality (e.g., the “first processor” performs functions through no more than a statement that it is “configured to” perform said functions; similarly, the “first display unit” displays through no more than a statement that the processor is “configured to cause” said display; similarly, the “input device”(s) is/are configured through no more than a statement that input is accepted “from” said input device(s)) such that they amount to no more than mere instructions to apply the exception using generic computer components. See: MPEP 2106.05(f).

The claims recite the additional elements of “by an endoscope” (claim 1), “wherein an input device by which selection of the options is input includes at least one of an audio input device, a switch, or a gaze input device” (claim 27), “an endoscope” (claim 33), and “by an endoscope” (claim 34), which are nominal or tangential additions to the abstract idea(s) and amount to extra-solution activity concerning mere data gathering. The addition of an insignificant extra-solution activity limitation does not impose meaningful limits on the claim such that it is not nominally or tangentially related to the invention. In the claimed context, these claimed additional elements are incidental to the performance of the recited abstract idea(s) as outlined in the recitations above. See: MPEP 2106.05(g).

The combination of these additional elements is no more than mere instructions to apply the exception using generic computer components and limitations directed toward extra-solution activity. Accordingly, even in combination, these additional elements do not integrate the abstract idea(s) into a practical application because they do not impose any meaningful limits on practicing the abstract idea(s). Accordingly, the claims are directed to an abstract idea(s) (Step 2A Prong Two: NO).

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea(s) into a practical application, using the additional elements to perform the abstract idea(s) amounts to no more than mere instructions to apply the exception using generic components.
Mere instructions to apply an exception using generic components cannot provide an inventive concept. See MPEP 2106.05(f). Further, the claimed additional elements, identified above, are not sufficient to amount to significantly more than the judicial exception because they are generic components that are configured to perform well-understood, routine, and conventional activities previously known to the industry. See: MPEP 2106.05(d). Said additional elements are recited at a high level of generality and provide conventional functions that do not add meaningful limits to practicing the abstract idea(s). The originally filed specification supports this conclusion:

Paragraphs 46-48, where “[0046] [Endoscope] Fig. 3 is a diagram illustrating a schematic configuration of the endoscope. [0047] The endoscope 20 of the present embodiment is an endoscope for a lower digestive organ. As illustrated in Fig. 3, the endoscope 20 is a flexible endoscope (electronic endoscope), and has an insertion part 21, an operation part 22, and a connecting part 23. [0048] The insertion part 21 is a part to be inserted into a hollow organ (large intestine in the present embodiment). The insertion part 21 has a distal end portion 21A, a bendable portion 21B, and a soft portion 21C in order from a distal end side.”

Paragraph 58, where “[0058] [Processor Device] The processor device 40 integrally controls the operation of the entire endoscope system. The processor device 40 includes, as a hardware configuration, a processor, a main storage unit, an auxiliary storage unit, a communication unit, and the like. That is, the processor device 40 has a so-called computer configuration as the hardware configuration. For example, the processor is configured by a central processing unit (CPU) and the like. For example, the main storage unit is configured by a random-access memory (RAM) and the like. The auxiliary storage unit is configured by, for example, a flash memory, a hard disk drive (HDD), and the like.”

Paragraphs 67-68, where “Fig. 7 is a diagram illustrating a schematic configuration of the input device. [0067] The input device 50 constitutes a user interface in the endoscope system 10 together with the display device 70. For example, the input device 50 is configured by a keyboard 51, a mouse 52, a foot switch 53, an audio input device 54, and the like. The foot switch 53 is an operation device that is placed at the feet of the operator and that is operated with the foot. The foot switch 53 outputs a predetermined operation signal in a case of stepping on a pedal. The foot switch 53 is an example of a switch. The audio input device 54 includes a microphone 54A, an audio recognition unit 54B, and the like. The audio input device 54 recognizes the audio that has been input from the microphone 54A, using the audio recognition unit 54B to output the audio. For example, the audio recognition unit 54B recognizes the input audio as a word on the basis of a registered dictionary. Since the audio recognition technology itself is a well-known, so detailed description thereof will be omitted. Note that the function of the audio recognition unit 54B may be provided in the processor device 40. [0068] The input device 50 can include known input devices such as a touch panel and a gaze input device in addition to the above-described devices.”

The claims recite the additional elements directed to pre-solution and post-solution activity, as recited and indicated above, each of which amount to extra-solution activity.
The specification (e.g., as excerpted above) does not indicate that the additional element(s) provide anything other than well-understood, routine, and conventional functions when claimed in a merely generic manner (as they are presently). Further, the concept of performing clinical tests on individuals to obtain input for an equation has been identified by the courts as insignificant extra-solution activity. See: MPEP 2106.05(g). Further, the concepts of receiving or transmitting data over a network, such as using the Internet to gather data, and storing and retrieving information in memory have been identified by the courts as well-understood, routine, and conventional activities. See: MPEP 2106.05(d)(II). Viewing the limitations as an ordered combination, the claims simply instruct the additional elements to implement the concept described above in the identification of abstract idea(s) with routine, conventional activity specified at a high level of generality in a particular technological environment. Hence, the claims as a whole, considering the additional elements individually and as an ordered combination, do not amount to significantly more than the abstract idea(s) (Step 2B: NO).

Dependent claim(s) 2-33, when analyzed as a whole, considering the additional elements individually and/or as an ordered combination, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitation(s) fail(s) to establish that the claim(s) is/are not directed to an abstract idea(s) without significantly more. These claims fail to remedy the deficiencies of their parent claims above, and are therefore rejected for at least the same rationale as applied to their parent claims above, and incorporated herein.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-23 and 28-34 is/are rejected under 35 U.S.C. 103 as being unpatentable over WO 2019/054045 A1 to Oosake in view of U.S. Patent Application Publication 2018/0218499 to Kamon.

As per claim 1, Oosake teaches an information processing apparatus comprising (see: Oosake, Fig. 1, page 3, "medical image processing device 14"): a first processor, wherein the first processor is configured to (see: Oosake, page 6, control unit 44 corresponding to a processor): acquire images captured by an endoscope in chronological order (see: Oosake, page 6 and 9, still image 39 described above is taken during photographing of the moving image 38 by the endoscope 10, time elapsing from the start of shooting of the moving image); cause a first display unit to display the acquired images in chronological order (see: Oosake, Fig. 4, page 5, 8-9, display unit 16 displays a moving image 38 or a still image 39, the display unit 16 to display the moving image 38 and the still image 39 during shooting, time elapsing from the start of shooting of the moving image 38; images and the like captured at “00:04:21”, “00:04:23”, and “00:04:32” are displayed); input the acquired images to a plurality of recognizers in chronological order (see: Oosake, page 8-10, first recognizer 41 and second recognizer 42, where the first recognition unit 41 to perform the recognition process of the image in advance, and…the second recognition unit 42 performs the image recognition processing; moving image 38 captured by the endoscope system 9 is input to the first recognizer 41 and the second recognizer 42…first recognizer 41 has a feature extraction unit and a recognition processing unit, and performs image recognition for each of the frame images 38a (or frame images 38a at a constant interval) constituting the moving image 38 to be input; the first recognizer 41 performs the first process in substantially real time); detect a recognizer that has output a specific recognition result, from among the plurality of recognizers (see: Oosake, page 7 and 10, recognizer result such as category classification of whether the medical image belongs is obtained, where a calculated feature amount is used to detect a lesion (lesion candidate) on the image, or, for example, in any category among a plurality of categories related to the lesion such as “tumor”, “non-tumor”, and “other”; a second recognition result such as category classification; the certainty factor of the recognition result of an image by the second recognizer 42 is determined to be “low”); cause the first display unit to display options for an item corresponding to the detected recognizer (see: Oosake, Fig. 5, page 11, the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification); and accept an input of selection for the displayed options (see: Oosake, Fig. 5, page 11-12, the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).
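For orientation before the combination with Kamon below, here is a minimal sketch of the claim 1 data flow as the Office Action characterizes it. This is illustrative only — not the applicant's or Oosake's implementation — and every class, function, and parameter name in it is hypothetical.

```python
# Sketch of the claimed pipeline: images are fed in chronological order to
# several recognizers; when one outputs a "specific recognition result",
# options for the item tied to that recognizer are displayed and a user
# selection is accepted. All names here are hypothetical.
from typing import Callable, Optional


class Recognizer:
    def __init__(self, item: str, options: list[str],
                 recognize: Callable[[bytes], Optional[str]]):
        self.item = item            # report item this recognizer covers
        self.options = options      # options shown when the recognizer fires
        self.recognize = recognize  # returns a result, or None if nothing found


def process_stream(images: list[bytes], recognizers: list[Recognizer],
                   display: Callable, accept_selection: Callable) -> dict[str, str]:
    selections: dict[str, str] = {}
    for image in images:                 # acquired in chronological order
        display(image)                   # display the acquired image
        for rec in recognizers:          # input the image to each recognizer
            result = rec.recognize(image)
            if result is not None:       # "specific recognition result" detected
                display(rec.options)     # options for the corresponding item
                selections[rec.item] = accept_selection(rec.options)
    return selections
```

A real system would run the recognizers on a live video stream and gate the option display on each recognizer's "specific" result (e.g., a detection above a confidence threshold), but the control flow is the same.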
Oosake fails to specifically teach that the options displayed are done so upon detection of the recognizer that has output the specific recognition result; however, Kamon teaches a list of blood vessel parameters is displayed by operating a pull-down button such that, in a case where the blood vessel parameter PA relevant to the lesion A and the blood vessel parameter PB relevant to the lesion B can be calculated, “PA” or “PB” is displayed as a list by operating the pull-down button (see: Kamon, Fig. 8; and paragraph 108-109). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the category selection menu, as taught by Oosake, to include a list of blood vessel parameters displayed by operating a pull-down button such that, in a case where the blood vessel parameter PA relevant to the lesion A and the blood vessel parameter PB relevant to the lesion B can be calculated, “PA” or “PB” is displayed as a list by operating the pull-down button, as taught by Kamon, with the motivation of assisting diagnosis more effectively (see: Kamon, abstract).

As per claim 2, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to accept the input of the selection for the displayed options from a plurality of input devices (see: Oosake, Fig. 5, page 5 and 11-12, operation unit 15 uses a keyboard, a mouse or the like connected by wire or wireless to a personal computer; the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).

As per claim 3, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to: be able to accept the input of the selection from a plurality of input devices for the displayed option; and set at least one input device that accepts the input of the selection for the options from the plurality of input devices according to the detected recognizer (see: Oosake, Fig. 5, page 5 and 11-12, operation unit 15 uses a keyboard, a mouse or the like connected by wire or wireless to a personal computer; the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).

As per claim 4, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to, in a case where the first processor detects that a specific recognizer has output specific recognition result, cause the first display unit to display the options for the item corresponding to the output recognition result (see: Oosake, Fig. 5, page 11, the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).

As per claim 5, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to cause the first display unit to display the options while the detected recognizer is outputting specific recognition result (see: Oosake, Fig. 5, page 10-11, and the first recognizer 41 performs the first process in substantially real time; the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).
As per claim 6, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to, in a case where the first processor detects that a specific recognizer has output specific recognition result, cause the first display unit to display the recognition result output from the detected recognizer (see: Oosake, page 7 and 10, recognizer result such as category classification of whether the medical image belongs is obtained, where a calculated feature amount is used to detect a lesion (lesion candidate) on the image, or, for example, in any category among a plurality of categories related to the lesion such as “tumor”, “non-tumor”, and “other”; a second recognition result such as category classification; the certainty factor of the recognition result of an image by the second recognizer 42 is determined to be “low”; Fig. 5, page 11, the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).

As per claim 7, Oosake and Kamon teach the invention as claimed, see discussion of claim 6, and further teach: wherein the first processor is configured to cause the first display unit to display the recognition result while the recognition result is being output from the detected recognizer (see: Oosake, Fig. 5, page 10-11, and the first recognizer 41 performs the first process in substantially real time; the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).

As per claim 8, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to, in a case where the first processor detects that a specific recognizer has output specific recognition result, cause the first display unit to display the options for a plurality of items in order (see: Oosake, Fig. 5, page 11, the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).

As per claim 9, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to, in a case where the first processor detects that a specific recognizer has output specific recognition result, cause the first display unit to display the options for an item designated from among a plurality of items (see: Oosake, Fig. 5, page 11, the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).
As per claim 10, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to, in a case where the first processor detects that a specific recognizer has output specific recognition result, cause the first display unit to display the options in a state where one option is selected in advance (see: Oosake, Fig. 5, page 11-12, the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).

As per claim 11, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to accept the input of the selection for the options for a period set for each of the recognizers (see: Oosake, Fig. 5, page 10-12, the second recognizer 42 is performing the recognition process during the period from the start to the end of the recognition process…and the first recognizer 41 performs the first process in substantially real time; the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).

As per claim 12, Oosake and Kamon teach the invention as claimed, see discussion of claim 11, and further teach: wherein at least one of the recognizers is configured to accept the input of the selection for the options while specific recognition result is being output (see: Oosake, Fig. 5, page 10-12, the second recognizer 42 is performing the recognition process during the period from the start to the end of the recognition process…and the first recognizer 41 performs the first process in substantially real time; the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).

As per claim 13, Oosake and Kamon teach the invention as claimed, see discussion of claim 11, and further teach: wherein at least one of the recognizers is configured to continuously accept the input of the selection for the options after the acceptance of the input of the selection for the options starts, except for a specific period (see: Oosake, Fig. 5, page 10-12, the second recognizer 42 is performing the recognition process during the period from the start to the end of the recognition process…and the first recognizer 41 performs the first process in substantially real time; the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).

As per claim 14, Oosake and Kamon teach the invention as claimed, see discussion of claim 13, and further teach: wherein the specific period is a period in which the input of the selection for the options for the item corresponding to a specific recognizer is being accepted (see: Oosake, Fig. 5, page 10-12, the second recognizer 42 is performing the recognition process during the period from the start to the end of the recognition process…and the first recognizer 41 performs the first process in substantially real time; the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).

As per claim 15, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to, in a case where the first processor detects, while the input of the selection for the options for the item corresponding to a specific recognizer is being accepted, that another specific recognizer has output specific recognition result, switch the options to be displayed on the first display unit to the options for the item corresponding to the newly detected recognizer (see: Oosake, Fig. 5, page 11-13, the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu…determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…the control unit 44 determines whether or not to end the image processing of the category classification of the medical image, and when not ending the image processing (in the case of “No”), transits to step S10 and Steps S10 to S20 are repeatedly executed with the medical image as an object of recognition processing).

As per claim 16, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to cause the first display unit to display a figure or a symbol corresponding to the detected recognizer (see: Oosake, page 11-12, in the example shown in FIG. 5, the "cursor" is displayed at the position of the images of "No. 3" and "No. 5" with low confidence; since the recognition process by the second recognizer 42 is performed for the images of “No. 3” and “No. 5” in which the “cursor” is displayed with low confidence, the “cursor” is information indicating that the second recognizer 42 has been used; incidentally, instead of the display of the “cursor”, for example, the display may be made distinguishable by color coding).
As per claim 17, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to cause the first display unit to display the images in a first region set on a screen of the first display unit, and cause the first display unit to display the options for the item in a second region set in a different region from the first region (see: Oosake, Fig. 5-6, page 12, by clicking identification information (No. 1, No. 2, No. 3,...) that specifies an image, the image corresponding to the identification information is displayed enlarged by switching the screen of the display unit 16, or it can be enlarged and displayed in another window…The category classification is displayed in the vicinity of the lesion areas 61 and 62 on the screen. In the example shown in FIG. 6, “tumoral” and “stage II” are displayed near the lesion area 61, and “tumoral” and “stage I” are displayed near the lesion area 62).

As per claim 18, Oosake and Kamon teach the invention as claimed, see discussion of claim 17, and further teach: wherein the second region is set in a vicinity of a position where a treatment tool appears within the images displayed in the first region (see: Oosake, Fig. 5-6, page 12, by clicking identification information (No. 1, No. 2, No. 3,...) that specifies an image, the image corresponding to the identification information is displayed enlarged by switching the screen of the display unit 16, or it can be enlarged and displayed in another window…The category classification is displayed in the vicinity of the lesion areas 61 and 62 on the screen. In the example shown in FIG. 6, “tumoral” and “stage II” are displayed near the lesion area 61, and “tumoral” and “stage I” are displayed near the lesion area 62).

As per claim 19, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to cause the first display unit to display information on the option selected for each item (see: Oosake, Fig. 5, page 5 and 11-12, operation unit 15 uses a keyboard, a mouse or the like connected by wire or wireless to a personal computer; the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).

As per claim 20, Oosake and Kamon teach the invention as claimed, see discussion of claim 19, and further teach: wherein the first processor is configured to cause the first display unit to display the information on the option selected for each item while the input of the selection of the options is being accepted (see: Oosake, Fig. 5, page 5 and 11-12, operation unit 15 uses a keyboard, a mouse or the like connected by wire or wireless to a personal computer; the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).

As per claim 21, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein one of the plurality of recognizers is a first recognizer that is configured to detect a specific region of a hollow organ using image recognition, and the first processor is configured to cause the first display unit to display options for selecting a site of the hollow organ as the options for the item corresponding to the first recognizer (see: Oosake, Fig. 5-6, page 3, 5, 7, and 11-12, body cavity; endoscope system…the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification…by clicking identification information (No. 1, No. 2, No. 3,...) that specifies an image, the image corresponding to the identification information is displayed enlarged by switching the screen of the display unit 16, or it can be enlarged and displayed in another window…The category classification is displayed in the vicinity of the lesion areas 61 and 62 on the screen. In the example shown in FIG. 6, “tumoral” and “stage II” are displayed near the lesion area 61, and “tumoral” and “stage I” are displayed near the lesion area 62).
As per claim 22, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein one of the plurality of recognizers is a second recognizer that is configured to discriminate a lesion part using image recognition, and the first processor is configured to cause the first display unit to display options for findings as the options for the item corresponding to the second recognizer (see: Oosake, page 8, the recognition certainty factor determiner 43 inputs the recognition result (three scores in this example) by the first recognizer 41…second recognizer 42 to execute the medical image recognition process according to the determination result by the recognition certainty factor determiner).

As per claim 23, Oosake and Kamon teach the invention as claimed, see discussion of claim 22, and further teach: wherein the options for the findings include at least one of options for a macroscopic item, options for an item regarding a JNET classification, or options for an item regarding a size (see: Oosake, Fig. 5-6, page 11-12, lesion areas 61 and 62 are considered macroscopic items; the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).

As per claim 28, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein the first processor is configured to: refer to a table in which options to be displayed are registered for each item (see: Oosake, page 8 and 11, the first reference value or the second reference value (hereinafter simply referred to as "reference value") may be a preset fixed value or a value set by the user); and cause the first display unit to display the options for the item corresponding to the detected recognizer (see: Oosake, Fig. 5, page 11 and 18, the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).

As per claim 29, Oosake and Kamon teach the invention as claimed, see discussion of claim 28, and further teach: wherein, in the table, information on display rank of the options is further registered (see: Oosake, page 8 and 11, the first reference value or the second reference value (hereinafter simply referred to as "reference value") may be a preset fixed value or a value set by the user; the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities. It is preferable that the display order of the categories is changed and displayed on the display unit 16), and the first processor is configured to cause the first display unit to display the options, in a manner that the options are arranged according to the information on display rank (see: Oosake, Fig. 5, page 11 and 18, determines category priorities of the plurality of categories based on a result of category recognition by the second recognizer on the medical image, and the display order of the plurality of categories in the category selection menu is changed according to the category priorities; the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).

As per claim 30, Oosake and Kamon teach the invention as claimed, see discussion of claim 29, and further teach: wherein the first processor is configured to: record a selection history of the options; and correct the information on display rank registered in the table, based on the selection history (see: Oosake, page 7-8 and 11, the feature amount is calculated from the image by learning…the parameters of the filters used in each convolutional layer are automatically learned in advance by a large number of learning data; the first reference value or the second reference value (hereinafter simply referred to as "reference value") may be a preset fixed value or a value set by the user).

As per claim 31, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: A report creation support device that is configured to support creation of a report, the report creation support device comprising (see: Oosake, Fig. 1): a second processor, wherein the second processor is configured to: cause a second display unit to display a report creation screen with a plurality of input fields (see: Oosake, Fig. 1, page 5 and 13, display device 13 is connected to the processor device 12 and displays the moving image 38 and the still image 39 input from the processor device 12; a personal computer is used as the medical image processing apparatus 14. The operation unit 15 uses a keyboard, a mouse or the like connected by wire or wireless to a personal computer, and the display unit 16 uses various monitors such as a liquid crystal monitor connectable to the personal computer. A diagnosis support device such as a workstation (server) may be used as the medical image processing device 14. In this case, the operation unit 15 and the display unit 16 are provided for each of a plurality of terminals connected to the workstation. Furthermore, as the medical image processing apparatus 14, for example, a medical care operation support apparatus that supports creation of a medical report or the like may be used); acquire information on the options for each item input in the information processing apparatus according to claim 1 (see: Oosake, page 13, although the processor device 12 and the medical image processing device 14 are separately provided in the above embodiment, the processor device 12 and the medical image processing device 14 may be integrated. That is, the processor device 12 may be provided with a function as the medical image processing device 14); automatically fill the corresponding input field with the acquired information on the options for the item; and accept correction of the information of the automatically filled input field (see: Oosake, Fig. 5, page 11-12, the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).

As per claim 32, Oosake and Kamon teach the invention as claimed, see discussion of claim 31, and further teach: wherein the second processor is configured to display the automatically filled input field to be distinguishable from other input fields on the report creation screen (see: Oosake, page 11-12, in the example shown in FIG. 5, the "cursor" is displayed at the position of the images of "No. 3" and "No. 5" with low confidence; since the recognition process by the second recognizer 42 is performed for the images of “No. 3” and “No. 5” in which the “cursor” is displayed with low confidence, the “cursor” is information indicating that the second recognizer 42 has been used; incidentally, instead of the display of the “cursor”, for example, the display may be made distinguishable by color coding).

As per claim 33, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: an endoscope system comprising: an endoscope (see: Oosake, Fig. 1, page 6 and 9, the endoscope 10); the information processing apparatus according to claim 1 (see: Oosake, page 13, although the processor device 12 and the medical image processing device 14 are separately provided in the above embodiment, the processor device 12 and the medical image processing device 14 may be integrated. That is, the processor device 12 may be provided with a function as the medical image processing device 14); and an input device (see: Oosake, Fig. 5, page 5 and 11-12, operation unit 15 uses a keyboard, a mouse or the like connected by wire or wireless to a personal computer).

As per claim 34, Oosake teaches an information processing method comprising: acquiring images captured by an endoscope in chronological order (see: Oosake, page 6 and 9, still image 39 described above is taken during photographing of the moving image 38 by the endoscope 10, time elapsing from the start of shooting of the moving image); causing a first display unit to display the acquired images in chronological order (see: Oosake, Fig. 4, page 5, 8-9, display unit 16 displays a moving image 38 or a still image 39, the display unit 16 to display the moving image 38 and the still image 39 during shooting, time elapsing from the start of shooting of the moving image 38; images and the like captured at “00:04:21”, “00:04:23”, and “00:04:32” are displayed); inputting the acquired images to a plurality of recognizers in chronological order (see: Oosake, page 8-10, first recognizer 41 and second recognizer 42, where the first recognition unit 41 to perform the recognition process of the image in advance, and…the second recognition unit 42 performs the image recognition processing; moving image 38 captured by the endoscope system 9 is input to the first recognizer 41 and the second recognizer 42…first recognizer 41 has a feature extraction unit and a recognition processing unit, and performs image recognition for each of the frame images 38a (or frame images 38a at a constant interval) constituting the moving image 38 to be input; the first recognizer 41 performs the first process in substantially real time); detecting a recognizer that has output a specific recognition result, from among the plurality of recognizers (see: Oosake, page 7 and 10, recognizer result such as category classification of whether the medical image belongs is obtained, where a calculated feature amount is used to detect a lesion (lesion candidate) on the image, or, for example, in any category among a plurality of categories related to the lesion such as “tumor”, “non-tumor”, and “other”; a second recognition result such as category classification; the certainty factor of the recognition result of an image by the second recognizer 42 is determined to be “low”); causing the first display unit to display options for an item corresponding to the detected recognizer (see: Oosake, Fig. 5, page 11, the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed…a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification); and accepting an input of selection for the displayed options (see: Oosake, Fig. 5, page 11-12, the user can operate the mouse functioning as the operation unit 15 and place the cursor on the “window” and click it to display a category selection menu as a pull-down menu, and when displaying the category selection menu, the control unit 44 determines the category priorities of the plurality of categories based on the category recognition result by the second recognizer 42 for the image, and a plurality of categories in the category selection menu are selected according to the category priorities…The user can select the category classification of the No. 5 image from this category selection menu…the “window” in which the category classification is displayed is information indicating that the category classification of the image is determined by the user…even if the category classification has been determined automatically, you can display the "window" for the category selection menu in the category column by clicking the "Display options" icon button, and the user can manually change the category classification using the category selection menu).
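A brief aside on the claim 31 mapping above: it concerns a report-creation screen whose input fields are auto-filled from the selections captured during the examination and then opened to user correction. A minimal, hypothetical sketch of that behavior follows; the field names and values are illustrative only (the site and JNET values echo items mentioned for claims 21 and 23), not the applicant's implementation.

```python
# Hypothetical sketch of the claim 31 "report creation support" flow:
# auto-fill report fields from captured selections, then accept user
# corrections. Corrected or auto-filled fields could additionally be
# flagged for distinguishable display, as in claim 32.
def build_report(selections: dict[str, str],
                 corrections: dict[str, str]) -> dict[str, str]:
    report = dict(selections)   # automatically fill the input fields
    report.update(corrections)  # accept correction of auto-filled fields
    return report

draft = build_report(
    {"site": "ascending colon", "findings": "JNET Type 2A"},  # hypothetical
    corrections={"findings": "JNET Type 2B"},
)
print(draft)  # {'site': 'ascending colon', 'findings': 'JNET Type 2B'}
```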
Oosake fails to specifically teach that the options displayed are done so upon detection of the recognizer that has output the specific recognition result; however, Kamon teaches a list of blood vessel parameters is displayed by operating a pull-down button such that, in a case where the blood vessel parameter PA relevant to the lesion A and the blood vessel parameter PB relevant to the lesion B can be calculated, “PA” or “PB” is displayed as a list by operating the pull-down button (see: Kamon, Fig. 8; and paragraph 108-109). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the category selection menu, as taught by Oosake, to include a list of blood vessel parameters is displayed by operating a pull-down button such that, in a case where the blood vessel parameter PA relevant to the lesion A and the blood vessel parameter PB relevant to the lesion B can be calculated, “PA” or “PB” is displayed as a list by operating the pull-down button, as taught by Kamon, with the motivation of assisting diagnosis more effectively (see: Kamon, abstract). Claim(s) 24-26 is/are rejected under 35 U.S.C. 103 as being unpatentable over WO 2019/054045 A1 to Oosake in view of U.S. Patent Application Publication 2018/0218499 to Kamon further in view of WO 2020/239514 A1 to Torjesen. As per claim 24, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein one of the plurality of recognizers is a third recognizer that is configured to detect using image recognition (see: Oosake, page 8, the recognition certainty factor determiner 43 inputs the recognition result (three scores in this example) by the first recognizer 41…second recognizer 42 to execute the medical image recognition process according to the determination result by the recognition certainty factor determiner), and the first processor is configured to cause the first display unit to display options for a name as the options for the item corresponding to the third recognizer (see: Oosake, Fig. 5, page 11, the certainty factor of the category classification, the category classification, and the information "display option" indicating the display of the option are displayed... a "window" for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification). Oosake fails to specifically teach that the detected object is a treatment or a treatment tool; however, Torjesen teaches obtaining a segmented representation of the interventional medical device, where the medical device may be an endoscope (see: Torjesen, abstract, and page 8). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the recognizers, as taught by Oosake and Kamon, to include the obtaining a segmented representation of the interventional medical device, where the medical device may be an endoscope, as taught by Torjesen, with the motivation of, once the shape of the interventional medical device is determined, a mesh of the interventional medical device can be generated and overlaid to enhance visualization (see: Torjesen, page 5). 
As per claim 25, Oosake and Kamon teach the invention as claimed, see discussion of claim 1, and further teach: wherein one of the plurality of recognizers is a fourth recognizer that is configured to detect using image recognition (see: Oosake, page 8, the recognition certainty factor determiner 43 inputs the recognition result (three scores in this example) by the first recognizer 41…second recognizer 42 to execute the medical image recognition process according to the determination result by the recognition certainty factor determiner), and the first processor is configured to cause the first display unit to display options for a hemostatic method or the number of hemostasis treatment tools as the options for the item corresponding to the fourth recognizer (see: Oosake, Fig. 5, page 11, the certainty factor of the category classification, the category classification, and the information “display option” indicating the display of the option are displayed…a “window” for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).

Oosake fails to specifically teach that the detected object is a hemostasis treatment (method) or a hemostasis treatment tool; however, Torjesen teaches obtaining a segmented representation of the interventional medical device, where the medical device may be an endoscope (see: Torjesen, abstract and page 8). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the recognizers, as taught by Oosake and Kamon, to include obtaining a segmented representation of the interventional medical device, where the medical device may be an endoscope, as taught by Torjesen, with the motivation that, once the shape of the interventional medical device is determined, a mesh of the interventional medical device can be generated and overlaid to enhance visualization (see: Torjesen, page 5).

As per claim 26, Oosake, Kamon, and Torjesen teach the invention as claimed, see discussion of claim 25, and further teach: wherein the first processor is configured to, in a case where a specific method is selected, cause the first display unit to further display the options (see: Oosake, Fig. 5, page 11, the certainty factor of the category classification, the category classification, and the information “display option” indicating the display of the option are displayed…a “window” for a category selection menu is displayed which functions as a classification selection unit for manually selecting a category classification).

Oosake fails to specifically teach a hemostatic method or options for the number of hemostasis treatment tools; however, Torjesen teaches obtaining a segmented representation of the interventional medical device, where the medical device may be an endoscope, where constraints may be predetermined constraints that vary based on the type of surgery, and where an interventional medical device model is identified (see: Torjesen, abstract and pages 8 and 12-13).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the recognizers, as taught by Oosake, Kamon, and Torjesen, to include obtaining a segmented representation of the interventional medical device, where the medical device may be an endoscope, where constraints may be predetermined constraints that vary based on the type of surgery, and where an interventional medical device model is identified, as taught by Torjesen, with the motivation that, once the shape of the interventional medical device is determined, a mesh of the interventional medical device can be generated and overlaid to enhance visualization (see: Torjesen, page 5).

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over WO 2019/054045 A1 to Oosake in view of U.S. Patent Application Publication 2018/0218499 to Kamon, further in view of U.S. Patent Application Publication 2018/0092509 to Yamaki.

As per claim 27, Oosake teaches the invention as claimed, see discussion of claim 1, but fails to specifically teach the following limitation, which is met by Yamaki as cited: wherein an input device by which selection of the options is input includes at least one of an audio input device, a switch, or a gaze input device (see: Yamaki, paragraphs 27 and 40, operation signals based on operation of a foot switch (SW) by a surgeon, operation of a scope SW provided on an endoscope, and audio input operation by a surgeon are inputted). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the input devices, as taught by Oosake and Kamon, to include operation signals based on operation of a foot switch (SW) by a surgeon, operation of a scope SW provided on an endoscope, and audio input operation by a surgeon, as taught by Yamaki, with the motivation of allowing control of each appliance by a voice of a surgeon (see: Yamaki, paragraph 27) and/or with the motivation of duplicating a medical image when a surgeon presses the scope SW at a scene he/she thinks important (see: Yamaki, paragraph 67).

Response to Arguments

Applicant’s arguments from the response filed on 11/26/2025 have been fully considered and will be addressed below in the order in which they appeared.

In the remarks, Applicant argues in substance that (1) the 35 U.S.C. 101 rejections should be withdrawn in view of the amendments because the claims, “as presented herein, fully satisfy the requirements of 35 U.S.C. § 101.” The Examiner respectfully disagrees; Applicant’s arguments are not persuasive. The claims here are not directed to a specific improvement to computer functionality that amounts to a practical application. Rather, they are directed to the use of conventional or generic technology in a well-known environment, without any claim that the invention reflects an inventive solution to a technical problem presented by combining the two. In the present case, the claims fail to recite any elements that, individually or as an ordered combination, transform the identified abstract idea(s) in the rejection into a patent-eligible application of that idea. For example, the “first processor” performs its functions through no more than a statement that it is “configured to” perform said functions, which indicates that the first processor additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components.
In the remarks, Applicant argues in substance that (2) the 35 U.S.C. 102/103 rejections should be withdrawn in view of the amendments because “in a case where a specific region is detected by the specific region detection unit 63C, the site selection box 71 is displayed. That is, the site selection box 71 is displayed on the screen with the detection of the specific region by the specific region detection unit 63C as a trigger. The specific region detection unit 63C is an example of a first recognizer” (para. [0090]); “the diagnosis name selection box 72 is displayed on the screen in a case where a predetermined discrimination result is output from the discrimination unit 63B… Therefore, the selection boxes displayed on the screen in a case where a predetermined discrimination result is output from the discrimination unit 63B are the diagnosis name selection box 72 and the findings selection boxes 73A to 73C. The discrimination unit 63B is an example of a second recognizer” (para. [0113]); and “the treatment name selection box 75 is displayed in a case where the treatment tool is detected from the endoscopic image by the treatment tool detection unit 63D. That is, the treatment name selection box 75 is displayed on the screen with the detection of the treatment tool by the treatment tool detection unit 63D as a trigger. The treatment tool detection unit 63D is an example of a third recognizer” (para. [0130]).

The Office Action cites Oosake with respect to claim 1 (see, e.g., page 8, lines 8-13 of the Office Action) and the other independent claims. However, Applicant contends that Oosake fails to disclose or suggest at least the above-noted feature of the amended claims. In this regard, referring to US 2020/0193236, which is the U.S. counterpart of the version of Oosake cited in the Office Action (WO 2019/054045 A1), Oosake describes the following: “In a case where it is determined that the confidence level for the recognition result of the image by the second recognizer 42 is ‘low’, the control unit 44 causes the display unit 16 to display a category selection menu or the like” (para. [0098]); “Meanwhile, the confidence level for the category classification based on each of the first and second recognition results of the image of ‘No. 5’ by the first recognizer 41 and the second recognizer 42 is low, and in this case, in the field of the category, the ‘window’ for the category selection menu functioning as the classification selection unit that manually selects the category classification is displayed” (para. [0125]); and “The user can cause the category selection menu to be displayed as a pull-down menu by operating the mouse functioning as the operation unit 15 to place the cursor on the ‘window’ and clicking the ‘window’” (para. [0126]).

Considering the above, Applicant contends that Oosake fails to disclose or suggest the features recited in the amended claims. Since Oosake does not describe all the elements of the independent claims, either explicitly or inherently, an anticipation rejection cannot be maintained. In addition, for at least the above reasons, the cited references, taken alone or in combination, fail to disclose, suggest, or otherwise render obvious the features recited in the independent claims. Thus, Applicant submits that the independent claims are allowable. Applicant further submits that the dependent claims are allowable at least by virtue of their dependence on the independent claims, and for the additional features recited.
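The pivot of argument (2) is when the options appear: in the Oosake passages just quoted, the category “window” is shown because the confidence level of the automatic classification is low, while the application’s cited paragraphs tie each selection box to which recognizer detected something. A hedged Python sketch of the two trigger conditions (hypothetical names throughout; this restates the parties’ positions, not either reference’s code):

from typing import Optional

def oosake_trigger(confidence: float, threshold: float = 0.8) -> bool:
    # Oosake as quoted: the category selection "window" is displayed
    # when the confidence level of the recognition result is low.
    return confidence < threshold

def claimed_trigger(fired_recognizer: Optional[str]) -> Optional[str]:
    # Amended claims as argued: the selection box shown depends on
    # which recognizer output the specific recognition result.
    boxes = {
        "specific region detection": "site selection box 71",
        "discrimination": "diagnosis name selection box 72",
        "treatment tool detection": "treatment name selection box 75",
    }
    return boxes.get(fired_recognizer)

print(oosake_trigger(0.55))                         # True -> show category menu
print(claimed_trigger("treatment tool detection"))  # 'treatment name selection box 75'

On this framing, the Examiner does not dispute the gap in Oosake standing alone but relies on Kamon for the detection-triggered display, which is why the arguments are treated as moot against the new ground of rejection below.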
Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument (see the application of prior art Kamon).

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT A SOREY, whose telephone number is (571) 270-3606. The examiner can normally be reached Monday through Friday, 8am to 5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fonya Long, can be reached at (571) 270-5096. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT A SOREY/
Primary Examiner, Art Unit 3682

Prosecution Timeline

Mar 27, 2024 — Application Filed
Jul 26, 2025 — Non-Final Rejection (§101, §103)
Oct 28, 2025 — Applicant Interview (Telephonic)
Oct 28, 2025 — Examiner Interview Summary
Nov 26, 2025 — Response Filed
Mar 06, 2026 — Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603174 — METHOD FOR UTILIZING A MEDICAL SERVICES KIOSK
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597517 — METHOD FOR EXTRACTING INTRINSIC PROPERTIES OF CANCER CELLS FROM GENE EXPRESSION PROFILES OF CANCER PATIENTS AND DEVICE FOR THE SAME
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12592301 — PROMPT ENGINEERING AND GENERATIVE AI FOR GOAL-BASED IMAGERY
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12567009 — EQUITABLY ASSIGNING MEDICAL IMAGES FOR EXAMINATION
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12555682 — MEDICAL SERVICES KIOSK
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
94%
With Interview (+45.8%)
4y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 456 resolved cases by this examiner. Grant probability derived from career allow rate.
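One arithmetic note, inferred from the figures shown rather than from any stated formula: the interview lift reads as additive in percentage points, so that

48% (career allow rate) + 45.8 pp (interview lift) = 93.8% ≈ 94% grant probability with interview.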
