Prosecution Insights
Last updated: April 19, 2026
Application No. 18/035,121

IMAGE RENDERING METHOD FOR TOMOGRAPHIC IMAGE DATA

Non-Final OA §103
Filed
May 03, 2023
Examiner
TRUONG, KARL DUC
Art Unit
2614
Tech Center
2600 — Communications
Assignee
Koninklijke Philips N.V.
OA Round
5 (Non-Final)
Grant Probability: 52% (Moderate)
OA Rounds: 5-6
To Grant: 2y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 52% (15 granted / 29 resolved; -10.3% vs TC avg)
Interview Lift: +31.0% (strong; based on resolved cases with interview)
Avg Prosecution: 2y 7m (45 currently pending)
Total Applications: 74 (across all art units)

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 85.3% (+45.3% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 29 resolved cases
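As a sanity check, the headline figures on this page follow from simple arithmetic on the stated counts. The Tech Center average used below is a hypothetical value back-solved from the "-10.3% vs TC avg" delta, not a published number.

```python
# Illustrative arithmetic only: reproduces the dashboard's headline numbers
# from the counts stated on this page.
granted, resolved = 15, 29
career_allow_rate = granted / resolved  # ~51.7%, shown rounded as 52%

tc_average = 0.62  # assumption: implied by the "-10.3% vs TC avg" delta
delta_vs_tc = career_allow_rate - tc_average  # ~ -10.3 percentage points

base, with_interview = 0.52, 0.83  # grant-probability cards
interview_lift = with_interview - base  # +31.0 percentage points

print(f"{career_allow_rate:.1%}, {delta_vs_tc:+.1%}, {interview_lift:+.1%}")
```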

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 5, 2026 has been entered.

Response to Amendment

This action is in response to the amendment filed on March 5, 2026. Claims 1, 6, 9, and 14 have been amended. Claims 2, 4-5, and 12-13 have been cancelled. Claims 1, 3, 6-11, and 14 remain rejected.

Response to Arguments

Applicant's arguments with respect to Claims 1 and 14, filed on March 5, 2026, regarding the rejection under 35 U.S.C. § 103, namely that the prior art does not teach "the subset of the pixels is selected based on identifying pixels which correspond spatially to a target anatomical structure and is further based on a pre-defined threshold for the pixel value of each pixel in the slice" as recited in Claim 1, have been fully considered but are not persuasive. These limitations are taught by Vining.
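As a plain-code restatement of the disputed thresholding operation from Vining [0051] (compare each voxel against threshold limits and assign 255 inside the range, 0 outside), the following sketch may help; the function name and example values are illustrative, not from either reference.

```python
def threshold_volume(volume, lo, hi):
    """Binarize a volume as described in Vining [0051]: each voxel is
    compared against the threshold limits and assigned the color value
    255 if it falls inside the [lo, hi] range, 0 otherwise.
    Pure-Python sketch; `volume` is a list of slices, each a 2D list of
    voxel values (voxels being the 3D analog of pixels)."""
    return [
        [[255 if lo <= v <= hi else 0 for v in row] for row in slice_]
        for slice_ in volume
    ]

# Hypothetical 1-slice, 2x3 volume with a [100, 300] threshold range
vol = [[[50, 150, 400], [300, 99, 100]]]
print(threshold_volume(vol, 100, 300))
# → [[[0, 255, 0], [255, 0, 255]]]
```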
In particular, Vining teaches the following:

Paragraph [0046] discloses selecting a targeted volume 14 <read on subset of pixels> from 3D volume 13 of an organ <read on target anatomical structure> or region of interest for 3D rendering, where the 3D volume 13 is formed from a series of 2D images that are stacked, which defines <read on identify pixels> a 3D matrix that represents a physical property associated with the 3D structure at coordinates positioned throughout the 3D volume.

Paragraph [0051] discloses segmentation process 70 selecting volumes based on threshold ranges, where a "selected volume is thresholded by comparing each voxel <read on pixel value of each pixel> in the selected volume to the threshold limits and by assigning the appropriate color value 0 or 255 to each such voxel depending on whether each such voxel falls inside or outside the threshold range defined by the threshold limits <read on pre-defined threshold>"; voxels are the 3D form of pixels.

Thus, applicant's remarks are not persuasive. Regarding the arguments to Claims 3 and 6-11, these claims depend directly or indirectly on independent Claims 1 and 14, respectively, and applicant argues nothing beyond the independent claims. The limitations of those claims, in combination, were previously established as explained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made. Claims 1, 3, 6-7, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kaufman et al. (US 20040125103 A1, previously cited), hereinafter referenced as Kaufman, in view of Vining (US 20100328305 A1, previously cited). Regarding Claim 1, Kaufman discloses a computer-implemented method for generating a medical image based on a tomographic imaging data of a 3D object (Kaufman, [0140]: teaches a method of image-based rendering of multiple volumetric and polygonal objects, such as organs), the method comprising: obtaining reconstructed tomographic image data for each slice of a plurality of slices of the 3D object (Kaufman, [0526]: teaches a method of reconstructing "the 3D shape of objects from photographic images <read on plurality of slices>"; [0535]: teaches "reconstructing a volumetric object from its back-projections <read on obtaining reconstructed tomographic image data of 3D object>" for computed tomography (CT) applications), the reconstructed tomographic image data for each slice comprising pixel values for the slice (Kaufman, [0177]: teaches calculating "the pixel value at exact grid points" of a slice), the reconstructed tomographic image data for the plurality of slices forming a 3D image dataset (Kaufman, FIG. 
75 teaches a plurality of slices forming a cube dataset <read on 3D image dataset>); [Image: Kaufman FIG. 75] [[selecting, for each slice, a subset of pixels in the slice to be rendered using pixel values obtained by at least one volume rendering of the 3D image dataset, wherein]] [[the subset of the pixels is selected based on identifying pixels which correspond spatially to a target anatomical structure and is further based on]] [[a pre-defined threshold for the pixel value of each pixel in the slice;]] [[performing, for each slice, a volume rendering of the selected subset of pixels of the slice by applying a volume rendering procedure to the 3D image dataset, wherein]] a plane defined by the slice within the 3D image dataset forms an imaging plane of the volume rendering (Kaufman, [0298]: teaches the enhanced volume rendering being capable of conventional clipping planes; [0299]: teaches axis-aligned cutting planes, where it restricts "the volume traversal to the cuboid of interest"; [0417]: teaches adjusting the cut-plane position for the volume dataset), and wherein [[the volume rendering is generated for only the selected subset of pixels in each slice;]] constructing, for each slice, a composite image of the slice by replacing the pixel values of the selected subset of pixels in the slice obtained from the reconstructed tomographic image data with the pixel values obtained from the volume rendering (Kaufman, [0132]: teaches "the multiplication of color and intensity yields a pixel color for each sample <read on slice> which is used in the compositing unit 60 to composite <read on constructing composite image> such color with the previously accumulated pixels along each sight ray"; [0133]: teaches "when compositing has been completed, the composited pixels (i.e., baseplane pixels) <read on replaced pixel values of selected subset of pixels in slice> are preferably stored in the corresponding 2D memory unit 40 connected to the
Cube-5 unit pipeline 38"); and generating a data output representative of the constructed one or more composite images (Kaufman, [0122]: teaches "the imagery unit 16 preferably includes a plurality of imagery pipelines and the geometry unit 18 preferably includes a plurality of geometry pipelines (not shown) for rendering the imagery and geometry representations <read on data output>, respectively"). However, Kaufman does not expressly disclose selecting, for each slice, a subset of pixels in the slice to be rendered using pixel values obtained by at least one volume rendering of the 3D image dataset, wherein the subset of the pixels is selected based on identifying pixels which correspond spatially to a target anatomical structure and is further based on a pre-defined threshold for the pixel value of each pixel in the slice; performing, for each slice, a volume rendering of the selected subset of pixels of the slice by applying a volume rendering procedure to the 3D image dataset, wherein the volume rendering is generated for only the selected subset of pixels in each slice. Vining discloses selecting, for each slice, a subset of pixels in the slice to be rendered using pixel values obtained by at least one volume rendering of the 3D image dataset (Vining, [0046]: teaches a series of 2D images 12 <read on slice> being stacked to form a 3D volume 13 <read on 3D image dataset>, which defines a 3D matrix, where the 3D matrix is composed of voxels <read on pixel values>, which are analogous to 2D pixels as shown in FIG. 
2; [0046]: further teaches selecting a targeted volume 14 <read on subset of pixels in slice> from 3D volume 13 of an organ or region of interest for 3D rendering), wherein [Image: Vining FIG. 2] the subset of the pixels is selected based on identifying pixels which correspond spatially to a target anatomical structure (Vining, [0046]: teaches selecting a targeted volume 14 <read on subset of pixels> from 3D volume 13 of an organ <read on target anatomical structure> or region of interest for 3D rendering, where the 3D volume 13 is formed from a series of 2D images that are stacked, which defines <read on identify pixels> a 3D matrix that represents a physical property associated with the 3D structure at coordinates positioned throughout the 3D volume) and is further based on a pre-defined threshold for the pixel value of each pixel in the slice (Vining, [0051]: teaches segmentation process 70 selecting volumes based on threshold ranges, where a "selected volume is thresholded by comparing each voxel <read on pixel value of each pixel> in the selected volume to the threshold limits and by assigning the appropriate color value 0 or 255 to each such voxel depending on whether each such voxel falls inside or outside the threshold range defined by the threshold limits <read on pre-defined threshold>"; Note: voxels are the 3D form of pixels); performing, for each slice, a volume rendering of the selected subset of pixels of the slice by applying a volume rendering procedure to the 3D image dataset (Vining, [0064]: teaches performing 3D rendering <read on volume rendering> on selected sub-volumes of the dataset, which enables separate 3D renderings of each selected sub-volume; Note: the process of separate 3D renderings of each selected sub-volume is being interpreted as applying a volume rendering procedure to the 3D image dataset), wherein the volume rendering is generated for only the
selected subset of pixels in each slice (Vining, [0046]: further teaches selecting a targeted volume 14 <read on subset of pixels> from 3D volume 13 of an organ or region of interest for 3D rendering <read on volume rendering>, which is constructed from stacked 2D images <read on slice>). Vining is analogous art with respect to Kaufman because they are from the same field of endeavor, namely volumetric rendering of 3D image data of CT scans. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to allow the user to select an anatomy or region of interest to generate a targeted volume based on modifiable pixel/voxel threshold limits as taught by Vining into the teaching of Kaufman. The suggestion for doing so would be to render only the selected, user-modifiable regions of interest, allowing the user to determine which pixels/voxels are shown, thereby saving rendering work and improving the overall user experience and usability. Therefore, it would have been obvious to combine Vining with Kaufman. Regarding Claim 14, it recites limitations similar in scope to those of Claim 1, but in a system. As shown in the rejection, the combination of Kaufman and Vining discloses the limitations of Claim 1.
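For orientation only, the Claim 1 limitations discussed above (select a per-slice pixel subset via spatial correspondence to the target anatomy plus a pre-defined threshold, volume-render only that subset, and splice the rendered values back into the reconstructed slice) might be sketched as follows. Every name here is hypothetical; this is not the applicant's or either reference's implementation.

```python
def composite_slice(recon, anatomy_mask, threshold, render_pixel):
    """Sketch of the Claim 1 method for a single slice.

    recon        -- 2D list of reconstructed tomographic pixel values
    anatomy_mask -- 2D list of bools: True where the pixel corresponds
                    spatially to the target anatomical structure
    threshold    -- pre-defined threshold on the reconstructed pixel value
    render_pixel -- callable (y, x) -> volume-rendered value; stands in
                    for the volume rendering whose imaging plane is the
                    slice plane (hypothetical interface, not from the OA)
    """
    out = [row[:] for row in recon]  # start from the reconstruction
    for y, row in enumerate(recon):
        for x, value in enumerate(row):
            # The subset: on the target anatomy AND past the threshold
            if anatomy_mask[y][x] and value >= threshold:
                out[y][x] = render_pixel(y, x)  # replace with rendered value
    return out

# Toy example: render_pixel returns a sentinel so replacement is visible.
# Only (0,1) qualifies: on-anatomy and 200 >= 100; (1,0) is 250 but off-anatomy.
recon = [[10, 200], [250, 30]]
mask = [[True, True], [False, True]]
result = composite_slice(recon, mask, 100, lambda y, x: -1)
```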
Additionally, Kaufman discloses a system for generating a medical image based on a tomographic imaging data of a 3D object (Kaufman, [0128]: teaches system 10 containing a geometry pipeline/engine which supports "the integration of imagery, such as volumes and textures, with geometries, such as polygons and surfaces"), comprising: a processing arrangement for use in generating an image based on a tomographic imaging data of a 3D object (Kaufman, [0535]: teaches reconstructing a volumetric object from its back-projections), the processing arrangement configured to (Kaufman, [0152]: teaches the Cube-5 apparatus containing "enhanced features for real-time volume processing"):… Thus, Claim 14 is met by Kaufman according to the mapping presented in the rejection of Claim 1, given the computer-implemented method corresponds to a system. Regarding Claim 3, the combination of Kaufman and Vining discloses the computer-implemented method of Claim 1. Additionally, Kaufman further discloses wherein the volume rendering procedure is a volume ray-casting method (Kaufman, [0310]: teaches adapting "polygon rendering to slice order ray casting", which "synchronizes the overall rendering process on a volume slice-by-slice basis"), and wherein sampling rays are cast through the 3D image dataset orthogonally with respect to the plane of each slice (Kaufman, [0415]: teaches rendering slabs 410, which are "orthogonal to one of the three volume axes as shown in FIG. 47"). [Image: Kaufman FIG. 47] Regarding Claim 6, the combination of Kaufman and Vining discloses the computer-implemented method of Claim 1. Additionally, Kaufman further discloses wherein the tomographic imaging data is x-ray computed tomography (CT) imaging data (Kaufman, [0535]: teaches projecting an image from the volume currently being reconstructed and comparing it to the x-ray image acquired from the scanner).
Regarding Claim 7, the combination of Kaufman and Vining discloses the computer-implemented method of Claim 1. Additionally, Kaufman further discloses wherein the obtaining the reconstructed image data comprises receiving input tomographic projection data (Kaufman, [0535]: teaches reconstructing a volumetric object from its back-projections <read on input tomographic projection data>), and performing a tomographic image reconstruction for each slice in turn (Kaufman, [0542]: teaches the reconstruction volume process "was performed using a simple and efficient slice-based technique"). Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kaufman et al. (US 20040125103 A1, previously cited), hereinafter referenced as Kaufman, in view of Vining (US 20100328305 A1, previously cited) as applied to Claim 1 above respectively, and further in view of Wang et al. (US 20110282181 A1, previously cited), hereinafter referenced as Wang. Regarding Claim 8, the combination of Kaufman and Vining discloses the computer-implemented method of Claim 1. 
Additionally, Kaufman further discloses wherein [[the tomographic image data is spectral computed tomography (CT) image data, and wherein]] the reconstructed tomographic image data includes a plurality of different image reconstructions for each slice, forming a plurality of different 3D image datasets (Kaufman, [0535]: teaches reconstructing a volumetric object from its back-projections; [0535]: teaches the volume being reconstructed by a sequence of projections <read on different image reconstructions> and back-projections), and wherein [[each of the different image reconstructions is based on CT projection data corresponding to a different spectral channel.]] However, the combination of Kaufman and Vining does not expressly disclose the tomographic image data is spectral computed tomography (CT) image data, and wherein each of the different image reconstructions is based on CT projection data corresponding to a different spectral channel. Wang discloses the tomographic image data is spectral computed tomography (CT) image data (Wang, [0121]: teaches a spectral true-color micro-CT <read on spectral computed tomography (CT) image data>), and wherein each of the different image reconstructions is based on CT projection data corresponding to a different spectral channel (Wang, [0130]: teaches an 8-channel spectral micro-CT reconstruction; [0132]: teaches "a single-channel <read on different spectral channel> micro-CT scan measures a set of projection data"). Wang is analogous art with respect to Kaufman, in view of Vining because they are from the same field of endeavor, namely CT scans. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to determine the characteristics of a spectral CT image data as taught by Wang into the teaching of Kaufman, in view of Vining. 
The suggestion for doing so would allow users to identify the contrast material composition inside the target object, thereby making it possible to track difficult-to-scan areas, such as blood vessels. Therefore, it would have been obvious to combine Wang with Kaufman, in view of Vining. Regarding Claim 9, the combination of Kaufman, Vining, and Wang discloses the computer-implemented method of Claim 8. The combination of Kaufman and Wang does not expressly disclose the limitations of Claim 9; however, Vining discloses wherein the imaged volume is representative of an anatomical region of subject (Vining, [0046]: teaches selecting a targeted volume 14 from 3D volume 13 <read on imaged volume> of an organ or region of interest <read on anatomical region of subject> for 3D rendering), wherein the region contains an administered contrast agent (Vining, [0090]: teaches administering a non-ionic intravenous bolus of iodinated contrast agent with a power injector to aid in distinguishing the blood vessels surrounding the tracheobronchial airway, where after an appropriate time delay, "the patient is scanned at step 45 from the thoracic inlet to the lung base to produce a series of two-dimensional images 12"), and wherein the tomographic imaging data for one of the spectral channels comprises pixel values indicative of a density of the contrast agent at the pixel location (Vining, [0094]: teaches water being assigned a value of 0 Hounsfield units (HU), soft tissue being assigned between 20 and 200 HU, and contrast-enhanced blood being greater than 125 HU, where these values are used for displaying a grayscale of the organ region; Note: the HU values are being interpreted as a density of the contrast agent at a given pixel location; HU values are commonly used to determine material density, and the administered contrast agent, used to distinguish blood vessels, will therefore have a different density than other materials, such as water).
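Vining's Hounsfield-unit assignments cited above (water at 0 HU, soft tissue at 20-200 HU, contrast-enhanced blood above 125 HU) could be expressed as a small classifier. Note the stated ranges overlap between 125 and 200 HU; checking the blood threshold first is our illustrative ordering choice, not something the reference specifies.

```python
def classify_hu(hu):
    """Map a Hounsfield-unit value to the tissue labels used in Vining
    [0094]. Overlap handling (blood checked before soft tissue) is an
    illustrative assumption, not from the reference."""
    if hu > 125:
        return "contrast-enhanced blood"
    if 20 <= hu <= 200:
        return "soft tissue"
    if hu == 0:
        return "water"
    return "unclassified"

print(classify_hu(0))    # water
print(classify_hu(80))   # soft tissue
print(classify_hu(300))  # contrast-enhanced blood
```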
Vining is analogous art with respect to Kaufman, in view of Wang because they are from the same field of endeavor, namely volumetric rendering of 3D image data of CT scans. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to allow the user to select an anatomy or region of interest to generate a targeted volume based on modifiable pixel/voxel threshold limits as taught by Vining into the teaching of Kaufman, in view of Wang. The suggestion for doing so would be to render only the selected, user-modifiable regions of interest, allowing the user to determine which pixels/voxels are shown, thereby saving rendering work and improving the overall user experience and usability. Therefore, it would have been obvious to combine Vining with Kaufman, in view of Wang. Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Kaufman et al. (US 20040125103 A1, previously cited), hereinafter referenced as Kaufman, in view of Vining (US 20100328305 A1, previously cited) as applied to Claim 1 above respectively, and further in view of McCarthy et al. (US 20180235559 A1, previously cited), hereinafter referenced as McCarthy. Regarding Claim 10, the combination of Kaufman and Vining discloses the computer-implemented method of Claim 1.
The combination of Kaufman and Vining does not expressly disclose the limitations of Claim 10; however, McCarthy discloses a user input signal (McCarthy, [0021]: teaches the computer 36 receiving "commands and scanning parameters from a user <read on user input signal>, such as an operator, via a console 40 that includes a user interface device, such as a keyboard, mouse, voice-activated controller, touchscreen or any other suitable input apparatus"); and a first display output representative of either the tomographic reconstruction for one or more slices (McCarthy, [0021]: teaches "an associated display 42 allows a user, such as an operator, to observe the reconstructed image <read on first display output> and other data <read on one or more slices> from computer 36"), or a second display output representative of the composite image rendering of one or more slices (McCarthy, [0025]: teaches "the computer transmits the reconstructed images <read on one or more slices of the composite image> and/or the patient information to a display 42 <read on second display output> communicatively coupled to the computer 36 and/or the image reconstructor 34"), wherein the selection of the first or second display output is dependent upon the user input (McCarthy, [0026]: teaches "the display 42 allows the operator to evaluate the imaged anatomy" and "also allow the operator to select an ROI and/or request patient information, for example, via graphical user interface (GUI) for a subsequent scan or processing"). McCarthy is analogous art with respect to Kaufman, in view of Vining because they are from the same field of endeavor, namely CT scans. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to allow the user to utilize a GUI to switch between CT scans and patient information as taught by McCarthy into the teaching of Kaufman, in view of Vining. 
The suggestion for doing so would enable operators to quickly view and switch between relevant information without requiring a restart of the machine. Therefore, it would have been obvious to combine McCarthy with Kaufman, in view of Vining. Regarding Claim 11, the combination of Kaufman, Vining, and McCarthy discloses the computer-implemented method of Claim 10. The combination of Kaufman and Vining does not expressly disclose the limitations of Claim 11; however, McCarthy discloses wherein the method comprises selectively toggling the display output between the first and second display outputs (McCarthy, [0026]: teaches the display 42 allowing the operator to evaluate the imaged anatomy, select an ROI, and/or request patient information through a GUI <read on toggling between display outputs>), wherein the toggling is triggered by the user input signal (McCarthy, [0026]: teaches the user using a GUI). McCarthy is analogous art with respect to Kaufman, in view of Vining because they are from the same field of endeavor, namely CT scans. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to utilize a GUI to control the CT machine as taught by McCarthy into the teaching of Kaufman, in view of Vining. The suggestion for doing so would allow the operator to use the CT machine to scan and view patient information without the need for a secondary external device. Therefore, it would have been obvious to combine McCarthy with Kaufman, in view of Vining.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Crowe et al. (US 20200175756 A1) discloses converting static 2D image slices into 3D images for future referencing between the two image datasets; Engel (US 20060028469 A1) discloses shading large volumetric data sets using partial derivatives computed in screen-space; and Velevski et al.
(US 20200043214 A1) discloses transmitting a series of 2D image slices of a 3D object/model between a server and user-terminal.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARL TRUONG whose telephone number is (703)756-5915. The examiner can normally be reached 10:30 AM - 7:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.D.T./Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

May 03, 2023
Application Filed
Dec 17, 2024
Non-Final Rejection — §103
Mar 28, 2025
Response Filed
Apr 01, 2025
Final Rejection — §103
Jun 13, 2025
Response after Non-Final Action
Jul 14, 2025
Request for Continued Examination
Jul 18, 2025
Response after Non-Final Action
Jul 28, 2025
Non-Final Rejection — §103
Nov 07, 2025
Response Filed
Dec 01, 2025
Final Rejection — §103
Feb 05, 2026
Response after Non-Final Action
Mar 05, 2026
Request for Continued Examination
Mar 10, 2026
Response after Non-Final Action
Mar 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573149
DATA PROCESSING METHOD AND APPARATUS, DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant • Granted Mar 10, 2026
Patent 12561875
ANIMATION FRAME DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Feb 24, 2026
Patent 12494013
AUTODECODING LATENT 3D DIFFUSION MODELS
2y 5m to grant • Granted Dec 09, 2025
Patent 12456258
SYSTEMS AND METHODS FOR GENERATING A SHADOW MESH
2y 5m to grant • Granted Oct 28, 2025
Patent 12444020
FLEXIBLE IMAGE ASPECT RATIO USING MACHINE LEARNING
2y 5m to grant • Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 52%
With Interview: 83% (+31.0%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
