Prosecution Insights
Last updated: April 19, 2026
Application No. 18/274,689

INFORMATION PROCESSING SYSTEM, PROGRAM, AND INFORMATION PROCESSING METHOD

Final Rejection — §102, §103
Filed: Jul 27, 2023
Examiner: HWANG, JINSU
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: The University of Tokyo
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 79% (34 granted / 43 resolved; +17.1% vs TC avg; above average)
Interview Lift: +5.9% (moderate), based on resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 10 applications currently pending
Career History: 53 total applications across all art units

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 30.5% (-9.5% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 43 resolved cases.

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed on 12/26/2025 has been entered. The amendment of claims 1-4 and 9-14 has been acknowledged. The amendment of the title has been acknowledged.

Response to Arguments

Applicant's arguments, see "Specification" filed 12/05/2025, with respect to the title have been fully considered and are persuasive. The objection to the title has been withdrawn.

Applicant's arguments filed on 12/05/2025 with respect to the pending claims have been fully considered but are moot because the arguments rely on newly added and/or amended claim limitations. The examiner has revised the rejections to match the new claim limitations.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 10-11, and 16 are rejected under 35 U.S.C. 102(a) as being unpatentable over Al-Rei (Mona Al-Rei, "AUTOMATED 3D VISUALIZATION OF BRAIN CANCER"; MSc.
Thesis; McMaster University - eHealth; June 1, 2017; XP055921553; pages i-xvii & 1-120, hereinafter "Al-Rei").

Regarding claim 1, Al-Rei teaches:

An information processing system comprising a processor configured to function as a controller configured to execute each of following steps including: (4.1, "The computer script that was used to segment brain tumor form the MRI image and the reconstruction in 3D was written in Python Language because most image processing and manipulation techniques can be carried out effectively with Python and with the Skimage Library. Python is preferable more because of its speed in image processing. The Skimage provides a lot of functions that can help in brain tumor segmentation from the DICOM image also the skimage have a DICOM reader which will allow reading DICOM images which are later converted to numpy array.")

a reading step of reading a plurality of sequential sectional images of an object; (4.1, "The first step in this program is to read the image that tried to be segmented with python. Therefore, each slice of MR image is read to python then picks the required image to the segment after that covert the selected image to numpy array. After the image has been read successfully, the next step to do is to smooth/de-noise the image.")

a setting step of setting, based on a pixel value of a pixel in a predetermined area included in the plurality of sequential sectional images and preset reference information, (4.1.1, "The aim of this method is to extract the tumor and the surrounding edema from the rest of brain tissue by applying region growing algorithm in assistance of ConnectedThresholdImageFilter. This filter labels pixels that are connected to a seed and fall within a range of pixel values. This filter operates on the input image starting from a series of given 'seed points.' After that, it starts 'growing' a region around those points and keeps adding the neighbouring points of those which their values fall within given thresholds.")

material information representing material of the object for the predetermined area, the preset reference information being information where the pixel value and the material are associated with each other; (4.1.1, "The main criteria for seed point selection is based on determining a seed point that most of neighbouring pixels falls in the same value of this specific seed point, in turn it will get into the largest area of segmented tumor. In this condition, this seed point which involves the large number of pixels considered as the best seed point for segmentation."; Examiner's Note - The material information representing material of the object is broad and maps to anything that gets information from a material in any area; in the prior art, the tumor.)

the material being a parameter indicating a texture of the object, the material information relating to the parameter (2.3.1.1, "The pixels of the same region are labelled by the same symbol while the pixels of other regions are labelled by another symbol [53]. Through this approach it examines the neighbouring pixels of initial 'seed points' and decided whether the pixel neighbours should be added to that region or not. These pixels are called allocated pixels which lead to its growth while the rest of pixels called unallocated pixels. The process is iterated several times, in the same manner as general data clustering algorithms [50],[53]. The selected criteria could be, for example, gray level texture, color, or pixel intensity. Regions that are disjoint must be assigned because a single seed point cannot")

and a reconstruction step of reconstructing the plurality of sequential sectional images including the predetermined area for which the material information is set and thereby generating three-dimensional data on the object.
(4.2, "The marching cube algorithm has been applied to the segmented brain tumor. After the brain tumour has been detected and confirmed the extracted brain tumor will be passing to marching cube function as 'data' where the vertices and the faces will be generated to form the 3D. After the 3D has been formed it will be stored as a Stl (Stereo Lithography) file which contains the vertices and faces information.")

Regarding claim 2, Al-Rei teaches:

The information processing system according to claim 1, wherein: an image of the plurality of sequential sectional images is a medical image, and the object is predetermined tissue or organ of a human body. (4.1.1, "The aim of this method is to extract the tumor and the surrounding edema from the rest of brain tissue by applying region growing algorithm in assistance of ConnectedThresholdImageFilter. This filter labels pixels that are connected to a seed and fall within a range of pixel values. This filter operates on the input image starting from a series of given 'seed points.' After that, it starts 'growing' a region around those points and keeps adding the neighbouring points of those which their values fall within given thresholds.")

Regarding claim 3, Al-Rei teaches:

The information processing system according to claim 2, wherein in the setting step, the material information is set based further on a type of the medical image, and the preset reference information is information where the type of the medical image, the pixel value, and the material are associated with each other. (4.1.1, "The aim of this method is to extract the tumor and the surrounding edema from the rest of brain tissue by applying region growing algorithm in assistance of ConnectedThresholdImageFilter. This filter labels pixels that are connected to a seed and fall within a range of pixel values. This filter operates on the input image starting from a series of given 'seed points.' After that, it starts 'growing' a region around those points and keeps adding the neighbouring points of those which their values fall within given thresholds."; Examiner's Note - The type of medical image in the prior art are brain scans, and the filters and material information derived are based on that type of medical image.)

Regarding claim 10, Al-Rei teaches:

The information processing system according to claim 1, wherein in the setting step, based on the pixel value of the pixel in a contour portion of the predetermined area and the preset reference information, the material information is set for the contour portion. (4.1.1, "The aim of this method is to extract the tumor and the surrounding edema from the rest of brain tissue by applying region growing algorithm in assistance of ConnectedThresholdImageFilter. This filter labels pixels that are connected to a seed and fall within a range of pixel values. This filter operates on the input image starting from a series of given 'seed points.' After that, it starts 'growing' a region around those points and keeps adding the neighbouring points of those which their values fall within given thresholds.")

Regarding claim 11, Al-Rei teaches:

The information processing system according to claim 1, wherein in the setting step, the preset reference information is selected from two or more pieces of the preset reference information depending on the predetermined area, and the material information is set based on the preset reference information. (4.1.1, "The aim of this method is to extract the tumor and the surrounding edema from the rest of brain tissue by applying region growing algorithm in assistance of ConnectedThresholdImageFilter. This filter labels pixels that are connected to a seed and fall within a range of pixel values. This filter operates on the input image starting from a series of given 'seed points.' After that, it starts 'growing' a region around those points and keeps adding the neighbouring points of those which their values fall within given thresholds.")

Regarding claim 16, Al-Rei teaches:

An information processing method comprising each step of the information processing system according to claim 1. (4.1, "The computer script that was used to segment brain tumor form the MRI image and the reconstruction in 3D was written in Python Language because most image processing and manipulation techniques can be carried out effectively with Python and with the Skimage Library. Python is preferable more because of its speed in image processing. The Skimage provides a lot of functions that can help in brain tumor segmentation from the DICOM image also the skimage have a DICOM reader which will allow reading DICOM images which are later converted to numpy array.")

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-9 and 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Al-Rei (Mona Al-Rei, "AUTOMATED 3D VISUALIZATION OF BRAIN CANCER"; MSc. Thesis; McMaster University - eHealth; June 1, 2017; XP055921553; pages i-xvii & 1-120, hereinafter "Al-Rei") in view of Takanami et al. (U.S. Patent Publication No.
2011/0150312 A1, hereinafter "Takanami").

Regarding claim 4, Al-Rei does not teach the following; however, Takanami does teach:

The information processing system according to claim 2, wherein: the plurality of sequential sectional images include a first medical image captured by a first medical image diagnosis apparatus and a second medical image captured by a second medical image diagnosis apparatus; (Takanami, Fig. 2)

in the setting step, the material information is set based on the first medical image; in the reconstruction step, the first medical image is reconstructed and first data including the material information on the object is generated; (Takanami, [0048], "The CT, MRI, and PET respectively form image sensors. These images give sensor images. The sensor images have sensor values called CT values, MRI values, or PET values. The intrinsic values for the material characteristics of the tissues are displayed as densities of color.")

in the reconstruction step, the second medical image is reconstructed and second data including a shape of the object is generated; (Takanami, [0050], "To make use of the respective advantages, the tissue images obtained by an MRI image are rearranged on the shapes of contours etc. of a CT image. Note that, PET is one type of computed tomography and is used for cancer diagnosis etc. It becomes possible to generate a model of a cancerous organ using both an CT image and an MRI image.")

and in the reconstruction step, the three-dimensional data is generated based on the first data and the second data. (Takanami, [0073], "The 3D data output unit 703 selects a plurality of images of extracted parts, which the image correction unit 702 prepared, in accordance with the nature of the surgery targeted by the simulation, combines them for each cross-sectional image of the live body, and stacks the obtained combined images for output (FIG. 8, process P804, FIG. 10). By combining and stacking only the organs required for simulation, 3D data of the model to be constructed is obtained (see FIG. 10).")

At the time the invention was made, it would have been obvious to one of ordinary skill in the art to modify 3D medical image reconstruction (as taught by Al-Rei) to include multiple forms of sensors and sensor types in its data collection (as taught by Takanami), because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, 3D medical image reconstruction as modified by multiple forms of sensors and sensor types in its data collection can yield the predictable result of taking advantage of the different sensor types' respective advantages, allowing for greater feature detail. (Takanami, [0050], "An MRI image has a higher resolution of soft tissue, so has a greater amount of information than a CT image. Therefore, it enables a greater amount of physical values of tissue to be obtained compared with a CT image, but depending on the imaging device, distortion is sometimes caused at the time of capture. The shape is sometimes not accurately reflected. Each image differs in features. To make use of the respective advantages, the tissue images obtained by an MRI image are rearranged on the shapes of contours etc. of a CT image.") Thus, a person of ordinary skill would have appreciated including in 3D medical image reconstruction the ability to include multiple different sensors and sensor types, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 5, Al-Rei in view of Takanami teaches:

The information processing system according to claim 4, wherein: the controller is configured to further execute a correction step of executing a correction process on at least one of the first data and the second data, (Takanami, [0071], "The image correction unit 702 designates an added part by for example red or a deleted part by for example green by a mouse based on the original image for an image which is combined by the image combining unit 701 and which is displayed on the first image display device 201 and outputs a corrected image reflecting the added and deleted information in the extracted image (FIG. 8, process P802).")

and in the reconstruction step, the three-dimensional data is generated based on the first data and the second data after the correction process. (Takanami, [0073], "The 3D data output unit 703 selects a plurality of images of extracted parts, which the image correction unit 702 prepared,")

Regarding claim 6, Al-Rei in view of Takanami teaches:

The information processing system according to claim 4, wherein: the controller is configured to further execute an estimation step of estimating, based on the material information, shape information on a detailed shape of the object, and in the reconstruction step, the three-dimensional data is generated based on the shape information. (Takanami, [0073], "The 3D data output unit 703 selects a plurality of images of extracted parts, which the image correction unit 702 prepared, in accordance with the nature of the surgery targeted by the simulation, combines them for each cross-sectional image of the live body, and stacks the obtained combined images for output (FIG. 8, process P804, FIG. 10). By combining and stacking only the organs required for simulation, 3D data of the model to be constructed is obtained (see FIG. 10)."; Examiner's Note - Shape and contour are forms of material information.)

Regarding claim 7, Al-Rei in view of Takanami teaches:

The information processing system according to claim 4, wherein: the first medical image diagnosis apparatus is a magnetic resonance imaging apparatus, and the second medical image diagnosis apparatus is an X-ray computed tomography apparatus. (Takanami, [0050], "To make use of the respective advantages, the tissue images obtained by an MRI image are rearranged on the shapes of contours etc. of a CT image. Note that, PET is one type of computed tomography and is used for cancer diagnosis etc. It becomes possible to generate a model of a cancerous organ using both an CT image and an MRI image.")

Regarding claim 8, Al-Rei in view of Takanami teaches:

The information processing system according to claim 1, wherein in the reconstruction step, color viewed when the three-dimensional data is displayed is determined based on the material information, and the three-dimensional data is generated. (Takanami, [0048], "The CT, MRI, and PET respectively form image sensors. These images give sensor images. The sensor images have sensor values called CT values, MRI values, or PET values. The intrinsic values for the material characteristics of the tissues are displayed as densities of color."; [0050], "To make use of the respective advantages, the tissue images obtained by an MRI image are rearranged on the shapes of contours etc. of a CT image. Note that, PET is one type of computed tomography and is used for cancer diagnosis etc. It becomes possible to generate a model of a cancerous organ using both an CT image and an MRI image.")

Regarding claim 9, Al-Rei in view of Takanami teaches:

The information processing system according to claim 1, wherein in the preset reference information, one material is associated with one pixel value. (Takanami, [0048], "The CT, MRI, and PET respectively form image sensors. These images give sensor images. The sensor images have sensor values called CT values, MRI values, or PET values.
The intrinsic values for the material characteristics of the tissues are displayed as densities of color.")

Regarding claim 12, Al-Rei in view of Takanami teaches:

The information processing system according to claim 1, wherein: the controller is configured to further execute a first receiving step of receiving first operation input made by a user with respect to the plurality of sequential sectional images, the first operation input including information selecting one from two or more pieces of the preset reference information, and in the setting step, the material information is set based on the first operation input. (Takanami, Fig. 9; [0070], "The operator designates a predetermined range of slice images, uses the image combining unit 701 to combine slice images of the same positions in the height direction of the backbone of the body, that is, slice images obtained by the extracted image data which was output from the target point updating unit 505, and pre-extraction images before extraction obtained from the distortion-corrected image storage unit 206, that is, the contour-corrected slice images, by separate colors so as to enable the two to be visually differentiated (FIG. 8, process P801).")

Regarding claim 13, Al-Rei in view of Takanami teaches:

The information processing system according to claim 1, wherein the controller is configured to further execute a second receiving step of receiving second operation input made by a user with respect to the plurality of sequential sectional images, the second operation input including information specifying the predetermined area. (Takanami, Fig. 9; [0070], "The operator designates a predetermined range of slice images, uses the image combining unit 701 to combine slice images of the same positions in the height direction of the backbone of the body, that is, slice images obtained by the extracted image data which was output from the target point updating unit 505, and pre-extraction images before extraction obtained from the distortion-corrected image storage unit 206, that is, the contour-corrected slice images, by separate colors so as to enable the two to be visually differentiated (FIG. 8, process P801).")

Regarding claim 14, Al-Rei in view of Takanami teaches:

The information processing system according to claim 1, wherein: the controller is configured to further execute a third receiving step of receiving third operation input made by a user with respect to the plurality of sequential sectional images, the third operation input including information specifying color corresponding to the material information, and in the reconstruction step, the three-dimensional data is generated based on the material information and the third operation input. (Takanami, Fig. 9; [0070], "The operator designates a predetermined range of slice images, uses the image combining unit 701 to combine slice images of the same positions in the height direction of the backbone of the body, that is, slice images obtained by the extracted image data which was output from the target point updating unit 505, and pre-extraction images before extraction obtained from the distortion-corrected image storage unit 206, that is, the contour-corrected slice images, by separate colors so as to enable the two to be visually differentiated (FIG. 8, process P801).")

Regarding claim 15, Al-Rei in view of Takanami teaches:

A computer-readable non-transitory memory medium storing a program allowing a computer to execute each step of the information processing system according to claim 1.
(Takanami, [0011], "The data storage device of a biodata model in the present invention is a device which stores data of a biodata model which uses medical image data to construct a 3D data model, which device stores data in a manner providing organs as parts forming a predetermined range of the body and simple geometric shapes surrounding the organs and using nodes defining positional relationships of the organs to construct a tree shape.")

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jinsu Hwang, whose telephone number is (703) 756-1370. The examiner can normally be reached Mon 6am-8am and 3pm-9pm EST; Thu 12pm-2pm EST; Fri 12pm-8pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JINSU HWANG/
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667
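The segmentation technique at the heart of the §102 rejection (region growing from a seed point within an intensity range, as in the ConnectedThresholdImageFilter the examiner quotes from Al-Rei §4.1.1) can be sketched in plain numpy. The volume, seed, and thresholds below are synthetic illustrations, not data from the thesis:

```python
from collections import deque

import numpy as np


def region_grow(volume, seed, lower, upper):
    """Grow a region of 6-connected voxels whose values lie in [lower, upper].

    A minimal stand-in for ITK's ConnectedThresholdImageFilter: start at a
    seed voxel and keep absorbing face-adjacent neighbours whose intensities
    fall within the given thresholds.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x] or not (lower <= volume[z, y, x] <= upper):
            continue
        mask[z, y, x] = True
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0]
                    and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not mask[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return mask


# Synthetic "tumor": a bright 4x4x4 cube inside a dark 16x16x16 volume.
vol = np.zeros((16, 16, 16))
vol[6:10, 6:10, 6:10] = 100.0
tumor = region_grow(vol, seed=(7, 7, 7), lower=50.0, upper=150.0)
print(int(tumor.sum()))  # 64 voxels segmented
```

In Al-Rei's pipeline the resulting binary mask is then handed to a marching-cubes routine (e.g. skimage.measure.marching_cubes) to produce the vertices and faces that are written out as an STL file, which is the reconstruction step the rejection maps onto claim 1.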

Prosecution Timeline

Jul 27, 2023
Application Filed
Sep 04, 2025
Non-Final Rejection — §102, §103
Dec 05, 2025
Response Filed
Feb 21, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602828: MULTI-SENSOR SYSTEM WITH PICTURE-IN-PICTURE IMAGE OUTPUT. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12573177: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY STORAGE MEDIUM. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12536775: Time-series image description method for dam defects based on local self-attention. Granted Jan 27, 2026 (2y 5m to grant).
Patent 12511713: IMAGE SUPER-RESOLUTION RECONSTRUCTING. Granted Dec 30, 2025 (2y 5m to grant).
Patent 12511882: A CONTINUOUSLY, ADAPTIVE DETECTION COMPUTER-IMPLEMENTED METHOD OF ENVIRONMENT FEATURES IN AUTONOMOUS AND ASSISTED DRIVING OF AN EGO-VEHICLE. Granted Dec 30, 2025 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 85% (+5.9%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 43 resolved cases by this examiner. Grant probability derived from career allow rate.
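The projection figures above are simple arithmetic on the examiner's career record; a quick sketch, assuming (as the note above implies) that the interview lift is added to the baseline in percentage points:

```python
granted, resolved = 34, 43             # examiner's career record
interview_lift = 5.9                   # percentage points, from interviewed cases

allow_rate = 100 * granted / resolved  # 79.07 -> displayed as 79%
with_interview = allow_rate + interview_lift

print(round(allow_rate))       # 79
print(round(with_interview))   # 85
```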
