Prosecution Insights
Last updated: April 19, 2026

Application No.: 18/136,737
Title: System and Method for a Patch-Loaded Multi-Planar Reconstruction (MPR)
Status: Non-Final Office Action (§103)
Filed: Apr 19, 2023
Examiner: MA, MICHELLE HAU
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Segmentron LLC
OA Round: 3 (Non-Final)

Predictions
Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 7m
Grant Probability with Interview: 99%
Examiner Intelligence

Career Allowance Rate: 81%, above average (17 granted / 21 resolved; +19.0% vs Tech Center average)
Interview Lift: +36.4% higher allowance rate among resolved cases with an interview (strong)
Typical Timeline: 2y 7m average prosecution; 35 applications currently pending
Career History: 56 total applications across all art units

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 84.2% (+44.2% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)

TC avg = Tech Center average estimate • Based on career data from 21 resolved cases
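The "vs TC avg" figures above are plain differences between the examiner's statute-specific rate and a Tech Center baseline. A minimal sketch in Python; the 40.0% baseline is inferred from the table's own deltas, not stated in the report:

```python
# Examiner statute-specific rates from the table above (percent), and the
# Tech Center baseline implied by the deltas (an inference, not a figure
# stated in the report).
examiner_rates = {"101": 3.0, "103": 84.2, "102": 6.4, "112": 5.5}
tc_average = 40.0  # assumed: every delta in the table equals rate - 40.0

deltas = {statute: round(rate - tc_average, 1)
          for statute, rate in examiner_rates.items()}
# deltas == {"101": -37.0, "103": 44.2, "102": -33.6, "112": -34.5}
```

Every delta reproduces the table, which suggests the tool compares each statute against a single estimated Tech Center baseline.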

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 26, 2025 has been entered.

Response to Amendment

The amendment filed December 26, 2025 has been entered. Claims 1-5 and 7-22 remain pending in the application. Applicant’s amendments to the Claims have overcome each and every objection previously set forth in the Final Office Action mailed August 1, 2025.

Response to Arguments

Applicant’s arguments, see Pages 6-11 of Remarks, filed December 26, 2025, with respect to the rejection(s) of claim(s) 1-5 and 7-22 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Shekhar et al. (Cine MPR: Interactive Multiplanar Reformatting of Four-Dimensional Cardiac Data Using Hardware-Accelerated Texture Mapping).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 4, 8-10, 12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Pyo et al. (US 2004070584 A1) in view of Shekhar et al. (Cine MPR: Interactive Multiplanar Reformatting of Four-Dimensional Cardiac Data Using Hardware-Accelerated Texture Mapping), hereinafter Pyo and Shekhar respectively. Regarding claim 1, Pyo teaches a method for generating a Multi-planar Reconstruction (MPR) of a targeted region of a volumetric image (Paragraph 0011 – “a three-dimensional multi-planar image reconstruction method, which is to display a multi-planar image of a region of interest in a reference image”), comprising: receiving the volumetric image (Paragraph 0040, 0042 – “The input/storing section 100 externally receives volume data containing density values of a three-dimensional structure having a predefined characteristic… the reference image processor 210 processes the volume data stored in the input/storing section 100 to display the three-dimensional reference image from the volume data”; Note: there is a 3D image of the volume data); storing the volumetric data remotely (Paragraph 0040 – “The input/storing section 100 externally receives volume data containing density values of a three-dimensional structure having a predefined characteristic, and stores the received volume data for three-dimensional multi-planar image reconstruction”; Note: the volumetric 
data is stored); receiving a user request for a particular MPR view, including a user-selected plane and a targeted region of interest (Paragraph 0042, 0055 – “the reference image processor 210 processes the volume data stored in the input/storing section 100 to display the three-dimensional reference image from the volume data, and receives a region of interest entered by the user via the input section 400 in the form of straight line, curve, or free-formed curve data… it is checked in step 110 whether or not the user selects the basic MPR. If the basic MPR is chosen, the respective points of the straight line presenting a selected plane are sampled and arrange”; Note: the user indicates a region of interest and selects a plane for an MPR); computing volumetric coordinates for pixels on the requested view (Paragraph 0055-0056, 0061, 0064 – “it is checked in step 110 whether or not the user selects the basic MPR. If the basic MPR is chosen, the respective points of the straight line presenting a selected plane are sampled and arranged, in step 112. The sample points that are the basis in the generation of the corresponding MPR image, preferably the basic MPR image, are then stored, in step 114…The sample points are contained in a straight line (or curve) drawn (or selected) on the three-dimensional reference image by the user, and they become the points that constitute the one side (the left side or the lower base according to the direction of view) of the final MPR image. In the case of the basic MPR, the storage of the sample points is achieved by sampling the sample points at intervals of unit length from the straight line presenting the plane selected by the user…To generate the MPR image directly from the three-dimensional volume data, the two-dimensional sample points obtained in the above procedures are converted to three-dimensional sample points, in step 150. 
More specifically, the conversion of the two-dimensional sample points to three-dimensional ones involves multiplying the coordinate of each point by the inverse matrix of viewing matrix A…The value corresponding to the unit voxel is then obtained using the direction vectors starting from the respective sample points. Applying this procedure to all the sample points obtains the MPR image”; Note: points are sampled for the user selected region of interest and converted to volumetric (3D) coordinates); identifying stored volumetric data having recorded bounds intersecting the user-selected plane in the targeted region of interest (Paragraph 0043, 0044 – “The converter 220 extracts three-dimensional coordinates corresponding to the individual points constituting a line, a curve, or a free-formed curve on the reference image fed into the reference image processor 210 from the two-dimensional position data of the points…The reconstructor 230 acquires image information from the three-dimensional image using the three-dimensional coordinates corresponding to the individual points received from the converter 220 and the viewing vector of a multi-planar image of interest”; Note: volume data corresponding to (intersecting) the user-selected plane and region of interest are identified); loading only the identified volumetric data from the remote storage (Paragraph 0044 – “The reconstructor 230 acquires image information from the three-dimensional image using the three-dimensional coordinates corresponding to the individual points received from the converter 220 and the viewing vector of a multi-planar image of interest, and reconstructs the image information into a three-dimensional multi-planar image corresponding to a region of interest designated by the user from the volume data”; Note: the identified volume image data is loaded); and generating the MPR, wherein at least one of the planes displays the user- requested targeted region of interest, using the loaded volumetric 
data (Paragraph 0015, 0044 – “when the shape of the displayed section is in a basic multi-planar image mode, sampling sample points at intervals of unit length from a straight line representing a plane selected by the user…The reconstructor 230 acquires image information from the three-dimensional image using the three-dimensional coordinates corresponding to the individual points received from the converter 220 and the viewing vector of a multi-planar image of interest, and reconstructs the image information into a three-dimensional multi-planar image corresponding to a region of interest designated by the user from the volume data”; Note: an MPR is generated for the region of interest using the loaded volumetric data). Pyo does not teach parsing the volumetric image into a plurality of non-overlapping patches. Thus, Pyo also does not teach the “patches” in the limitations: “storing the patches remotely with recordation of three-dimensional (3-D) spatial coordinates, defining bounds of the patch”; “computing volumetric coordinates for pixels on the requested view, based on the recorded 3-D spatial coordinates of the stored patches”; “identifying stored patches having recorded bounds intersecting the user- selected plane in the targeted region of interest”; “loading only the identified patches from the remote storage, excluding patches previously loaded for the user-selected plane”; “and generating the MPR, wherein at least one of the planes displays the user- requested targeted region of interest, using the loaded patches”. However, Shekhar teaches parsing the volumetric image into a plurality of non-overlapping patches (Fig. 1, Paragraph 5 in 1st Col. of Page 2, Paragraph 1 in 2nd Col. of Page 2 – “Volume subdivision divides a 3-D image into smaller 3-D bricks of equal size. For spatiotemporal 4-D images, each frame is subdivided individually and identically”; Note: the volumetric image is parsed/divided into bricks, which are equivalent to patches. Fig. 
1 shows the non-overlapping patches/bricks; see screenshot of Fig. 1 below); storing the patches remotely with recordation of three-dimensional (3-D) spatial coordinates, defining bounds of the patch (Paragraph 3-5 in 2nd Col. of Page 2 – “The MPR of a 3-D image is the intersection of a cutting plane, coinciding with the reformatted plane, with the image volume. In our case, the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons… When a user interacts and changes the orientation of the reformatted plane(s), the calculation of intersected bricks, polygons vertices, and texture coordinates is repeated… All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence. If the brick does not exist, it is copied to the texture memory, overwriting an existing brick. The texture memory thus acts like a cache, whose initialization and updating directly affects the performance”; Note: the bricks/patches are stored. It would be obvious to one of ordinary skill in the art that the bricks are stored along with their coordinates because location is an integral part of defining the bricks and retrieving them efficiently later on. The polygon vertices and brick edge coordinates define the bounds); computing volumetric coordinates for pixels on the requested view, based on the recorded 3-D spatial coordinates of the stored patches (Paragraph 3, 5 in 2nd Col. of Page 2 – “The MPR of a 3-D image is the intersection of a cutting plane, coinciding with the reformatted plane, with the image volume.
In our case, the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons… All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence”; Note: coordinates are computed for the reformatted plane based on the location (coordinates) of the stored bricks); identifying stored patches having recorded bounds intersecting the user- selected plane in the targeted region of interest (Fig. 1, Paragraph 3-5 in 2nd Col. of Page 2 – “the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons… The bricks needed for each reformatted plane are determined individually. Likewise, the calculation of polygon vertices and texture coordinates is repeated for each view…All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence. If the brick does not exist, it is copied to the texture memory, overwriting an existing brick”; Note: the bricks/patches that intersect with the cutting plane, which corresponds to the user-selected plane taught by Pyo, are identified. The intersection is shown in Fig. 1); loading only the identified patches from the remote storage (Paragraph 5 in 2nd Col. 
of Page 2 – “All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence. If the brick does not exist, it is copied to the texture memory, overwriting an existing brick”; Note: the needed/identified bricks are loaded from storage), excluding patches previously loaded for the user-selected plane (Paragraph 3 in 2nd Col. of Page 2, Paragraph 2 in 2nd Col. of Page 3 – “As long as the orientation stays fixed, the spatial arrangement of the required bricks within a frame does not change and the calculation of each brick does not need to be repeated. The previously calculated polygon vertices and texture coordinates are also reused… Our policy is to generate the list of needed bricks as before; however, we forgo common brick determination during interaction”; Note: bricks that were already loaded are excluded from loading); and generating the MPR, wherein at least one of the planes displays the user- requested targeted region of interest, using the loaded patches (Paragraph 2 in 2nd Col. of Page 2, Paragraph 5 in 2nd Col. of Page 2 – “The MPR of a 3-D image is the intersection of a cutting plane, coinciding with the reformatted plane, with the image volume. In our case, the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons. Fig. 2 shows a reformatted plane in which each polygon is colored differently to illustrate the underlying mosaic and the seamless tiling of the polygons. 
For cine MPR, the rendering of the reformatted plane is repeated for each frame of the sequence…All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence. If the brick does not exist, it is copied to the texture memory, overwriting an existing brick”; Note: an MPR is generated using the loaded bricks/patches). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Shekhar to parse the volumetric image into patches because “the entire volume is rarely required for most visualization tasks. This is especially true for the cine MPR, in which only those voxels either immediately in front of or behind the cutting plane are needed. Volume subdivision provides the “granularity” to reject unnecessary data, thus lowering the data requirement (see Fig. 1) and consequently improving the performance” (Shekhar: Paragraph 2 in 2nd Col. of Page 2). It also would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Shekhar to store the patches with 3D coordinates and define bounds of the patch for the benefit of being able to efficiently identify and retrieve patches for future use. 
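The brick-based loading that the rejection attributes to Shekhar (subdivide the volume into equal, non-overlapping bricks, then load only bricks whose recorded bounds intersect the cutting plane) can be sketched as follows. This is an illustrative reconstruction, not code from either reference; the brick size, the corner sign test, and all names are assumptions:

```python
from itertools import product

def subdivide(shape, brick):
    """Split a volume of the given (x, y, z) shape into non-overlapping,
    equally sized bricks; returns (lo, hi) corner coordinates per brick."""
    bricks = []
    for lo in product(range(0, shape[0], brick[0]),
                      range(0, shape[1], brick[1]),
                      range(0, shape[2], brick[2])):
        hi = tuple(min(l + b, s) for l, b, s in zip(lo, brick, shape))
        bricks.append((lo, hi))
    return bricks

def intersects_plane(lo, hi, point, normal):
    """Corner sign test: the axis-aligned brick [lo, hi] intersects the
    cutting plane through `point` with normal `normal` unless all of its
    corners lie strictly on one side of the plane."""
    signs = set()
    for corner in product(*zip(lo, hi)):  # the brick's 8 corners
        d = sum((c - p) * n for c, p, n in zip(corner, point, normal))
        if d == 0:
            return True  # plane passes exactly through a corner
        signs.add(d > 0)
    return len(signs) == 2

# Identify (and, in a real viewer, load) only the bricks whose recorded
# bounds intersect the user-selected plane.
volume_shape, brick_size = (64, 64, 64), (32, 32, 32)
plane_point, plane_normal = (16.0, 32.0, 32.0), (1.0, 0.0, 0.0)
needed = [b for b in subdivide(volume_shape, brick_size)
          if intersects_plane(*b, plane_point, plane_normal)]
```

Keying a cache by these brick bounds would let a viewer skip bricks already resident for the current plane, which is the behavior the "excluding patches previously loaded" limitation describes.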
Finally, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Shekhar to exclude patches previously loaded for the user-selected plane because “uploading the needed bricks (common or single-use) for the entire sequence is time-consuming and damaging to maintaining the necessary frame rate” (Shekhar: Paragraph 1 in 2nd Col. of Page 2). Therefore, not having to load the same patches again reduces time and power consumption and increases loading efficiency. Additionally, in general, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the stored volumetric data of Pyo could have been substituted for the stored patches/bricks of Shekhar because both the stored volumetric data and stored patches serve the purpose of being retrieved to generate an MPR. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of being loaded from storage for an MPR. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the stored volumetric data of Pyo for the stored patches of Shekhar according to known methods to yield the predictable result of being loaded from storage for an MPR.

[Screenshot of Fig. 1 (taken from Shekhar)]

Regarding claim 4, Pyo in view of Shekhar teaches the method of claim 1. Pyo further teaches wherein the loading is performed by first computing the volumetric coordinate of each pixel on view to be rendered (Paragraph 0055-0056, 0061, 0064 – “it is checked in step 110 whether or not the user selects the basic MPR.
If the basic MPR is chosen, the respective points of the straight line presenting a selected plane are sampled and arranged, in step 112. The sample points that are the basis in the generation of the corresponding MPR image, preferably the basic MPR image, are then stored, in step 114…The sample points are contained in a straight line (or curve) drawn (or selected) on the three-dimensional reference image by the user, and they become the points that constitute the one side (the left side or the lower base according to the direction of view) of the final MPR image. In the case of the basic MPR, the storage of the sample points is achieved by sampling the sample points at intervals of unit length from the straight line presenting the plane selected by the user…To generate the MPR image directly from the three-dimensional volume data, the two-dimensional sample points obtained in the above procedures are converted to three-dimensional sample points, in step 150. More specifically, the conversion of the two-dimensional sample points to three-dimensional ones involves multiplying the coordinate of each point by the inverse matrix of viewing matrix A…The value corresponding to the unit voxel is then obtained using the direction vectors starting from the respective sample points. 
Applying this procedure to all the sample points obtains the MPR image”; Note: points are sampled for the user selected region of interest and converted to volumetric (3D) coordinates) then determining which volumetric data are overlapping with the plane of the view (Paragraph 0043, 0044 – “The converter 220 extracts three-dimensional coordinates corresponding to the individual points constituting a line, a curve, or a free-formed curve on the reference image fed into the reference image processor 210 from the two-dimensional position data of the points…The reconstructor 230 acquires image information from the three-dimensional image using the three-dimensional coordinates corresponding to the individual points received from the converter 220 and the viewing vector of a multi-planar image of interest”; Note: volume data corresponding to (overlapping) the user-selected plane and region of interest are identified), then requesting those volumetric data from the remote storage (Paragraph 0044 – “The reconstructor 230 acquires image information from the three-dimensional image using the three-dimensional coordinates corresponding to the individual points received from the converter 220 and the viewing vector of a multi-planar image of interest, and reconstructs the image information into a three-dimensional multi-planar image corresponding to a region of interest designated by the user from the volume data”; Note: the identified volume image data is loaded from storage). Pyo does not teach the “patches” from the limitation: “determining which patches are overlapping with the plane of the view, then requesting those patches from the remote storage”. However, Shekhar teaches determining which patches are overlapping with the plane of the view (Fig. 1, Paragraph 3-5 in 2nd Col. of Page 2 – “the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. 
Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons… The bricks needed for each reformatted plane are determined individually. Likewise, the calculation of polygon vertices and texture coordinates is repeated for each view…All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence. If the brick does not exist, it is copied to the texture memory, overwriting an existing brick”; Note: the bricks/patches that overlap with the cutting plane, which corresponds to the user-selected plane taught by Pyo, are identified. The overlap is shown in Fig. 1; see screenshot of Fig. 1 above), then requesting those patches from the remote storage (Paragraph 5 in 2nd Col. of Page 2 – “All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence. If the brick does not exist, it is copied to the texture memory, overwriting an existing brick”; Note: the needed/identified bricks are loaded from storage). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Shekhar to have patches because “the entire volume is rarely required for most visualization tasks. This is especially true for the cine MPR, in which only those voxels either immediately in front of or behind the cutting plane are needed. 
Volume subdivision provides the “granularity” to reject unnecessary data, thus lowering the data requirement (see Fig. 1) and consequently improving the performance” (Shekhar: Paragraph 2 in 2nd Col. of Page 2). Additionally, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the stored volumetric data of Pyo could have been substituted for the stored patches/bricks of Shekhar because both the stored volumetric data and stored patches serve the purpose of being retrieved to generate an MPR. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of being loaded from storage for an MPR. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the stored volumetric data of Pyo for the stored patches of Shekhar according to known methods to yield the predictable result of being loaded from storage for an MPR.

Regarding claim 8, Pyo in view of Shekhar teaches the method of claim 1. Pyo further teaches overlaying additional imaging data, annotation, and/or measurements onto the MPR (Paragraph 0046 – “The input section 400 provides different drawing tools for the user to designate a region of interest on the corresponding reference image displayed, preferably on the three-dimensional image. Namely, the input section 400 sends a drawing request signal to the multi-planar image reconstructor 200 in response to the user's drawing request from a mouse or the like”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have extended the capability of Pyo of drawing on a 3D image to be able to draw on the MPR because an MPR is a type of 3D image so it would have been possible to do by a person of ordinary skill in the art and it would yield the same results of annotating an image.

Regarding claim 9, Pyo teaches a system for generating a patch-loaded Multi-Planar Reconstruction (MPR) (Paragraph 0010 – “a three-dimensional multi-planar image reconstruction system”), comprising: a processor (Paragraph 0009, Paragraph 0041 – “a recording medium readable by a computer storing the three-dimensional multi-planar image reconstruction method…The multi-planar image reconstructor 200, which comprises a reference image processor 210, a converter 220, and a reconstructor 230”; Note: it is implied that there is a processor in the computer in order to perform the method); a memory storing instructions that, when executed by the processor (Paragraph 0009 – “a recording medium readable by a computer storing the three-dimensional multi-planar image reconstruction method”; Note: the recording medium is equivalent to memory), cause the processor to: receive a user request for a particular MPR view, including a user-selected plane and a targeted region of interest (Paragraph 0042, 0055 – “the reference image processor 210 processes the volume data stored in the input/storing section 100 to display the three-dimensional reference image from the volume data, and receives a region of interest entered by the user via the input section 400 in the form of straight line, curve, or free-formed curve data… it is checked in step 110 whether or not the user selects the basic MPR.
If the basic MPR is chosen, the respective points of the straight line presenting a selected plane are sampled and arrange”; Note: the user indicates a region of interest and selects a plane for an MPR); compute volumetric coordinates for pixels on the requested view (Paragraph 0055-0056, 0061, 0064 – “it is checked in step 110 whether or not the user selects the basic MPR. If the basic MPR is chosen, the respective points of the straight line presenting a selected plane are sampled and arranged, in step 112. The sample points that are the basis in the generation of the corresponding MPR image, preferably the basic MPR image, are then stored, in step 114…The sample points are contained in a straight line (or curve) drawn (or selected) on the three-dimensional reference image by the user, and they become the points that constitute the one side (the left side or the lower base according to the direction of view) of the final MPR image. In the case of the basic MPR, the storage of the sample points is achieved by sampling the sample points at intervals of unit length from the straight line presenting the plane selected by the user…To generate the MPR image directly from the three-dimensional volume data, the two-dimensional sample points obtained in the above procedures are converted to three-dimensional sample points, in step 150. More specifically, the conversion of the two-dimensional sample points to three-dimensional ones involves multiplying the coordinate of each point by the inverse matrix of viewing matrix A…The value corresponding to the unit voxel is then obtained using the direction vectors starting from the respective sample points. 
Applying this procedure to all the sample points obtains the MPR image”; Note: points are sampled for the user selected region of interest and converted to volumetric (3D) coordinates); identify stored volumetric data having recorded bounds intersecting the user-selected plane in the targeted region of interest (Paragraph 0043, 0044 – “The converter 220 extracts three-dimensional coordinates corresponding to the individual points constituting a line, a curve, or a free-formed curve on the reference image fed into the reference image processor 210 from the two-dimensional position data of the points…The reconstructor 230 acquires image information from the three-dimensional image using the three-dimensional coordinates corresponding to the individual points received from the converter 220 and the viewing vector of a multi-planar image of interest”; Note: volume data corresponding to (intersecting) the user-selected plane and region of interest are identified); load only the identified volumetric data from remote storage (Paragraph 0044 – “The reconstructor 230 acquires image information from the three-dimensional image using the three-dimensional coordinates corresponding to the individual points received from the converter 220 and the viewing vector of a multi-planar image of interest, and reconstructs the image information into a three-dimensional multi-planar image corresponding to a region of interest designated by the user from the volume data”; Note: the identified volume image data is loaded); and generate the MPR for display (Paragraph 0015, 0044 – “when the shape of the displayed section is in a basic multi-planar image mode, sampling sample points at intervals of unit length from a straight line representing a plane selected by the user…The reconstructor 230 acquires image information from the three-dimensional image using the three-dimensional coordinates corresponding to the individual points received from the converter 220 and the viewing vector of a 
multi-planar image of interest, and reconstructs the image information into a three-dimensional multi-planar image corresponding to a region of interest designated by the user from the volume data”; Note: an MPR is generated for the region of interest using the loaded volumetric data). Pyo does not teach parsing a volumetric image into a plurality of non-overlapping patches with recordation of three-dimensional coordinates for storage. Thus, Pyo also does not teach the “patches” in the limitations: “compute volumetric coordinates for pixel on the requested view, based on the recorded 3-D spatial coordinates of the stored patches”; “identify stored patches having recorded bounds intersecting the user-selected plane in the targeted region of interest”; “load only the identified patches from remote storage, excluding patches previously loaded for the user-selected plane”; “and generate the patch-loaded Multi-Planar Reconstruction (MPR) for display”. However, Shekhar teaches parsing the volumetric image into a plurality of non-overlapping patches (Fig. 1, Paragraph 5 in 1st Col. of Page 2, Paragraph 1 in 2nd Col. of Page 2 – “Volume subdivision divides a 3-D image into smaller 3-D bricks of equal size. For spatiotemporal 4-D images, each frame is subdivided individually and identically”; Note: the volumetric image is parsed/divided into bricks, which are equivalent to patches. Fig. 1 shows the non-overlapping patches/bricks; see screenshot of Fig. 1 above) with recordation of three-dimensional (3-D) spatial coordinates for storage (Paragraph 3-5 in 2nd Col. of Page 2 – “The MPR of a 3-D image is the intersection of a cutting plane, coinciding with the reformatted plane, with the image volume. In our case, the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. 
Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons… When a user interacts and changes the orientation of the reformatted plane(s), the calculation of intersected bricks, polygon vertices, and texture coordinates is repeated… All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence. If the brick does not exist, it is copied to the texture memory, overwriting an existing brick. The texture memory thus acts like a cache, whose initialization and updating directly affects the performance”; Note: the bricks/patches are stored. It would be obvious to one of ordinary skill in the art that the bricks are stored along with their coordinates because location is an integral part of defining the bricks and retrieving them efficiently later on. The polygon vertices and brick edge coordinates define the bounds); computing volumetric coordinates for pixels on the requested view, based on the recorded 3-D spatial coordinates of the stored patches (Paragraph 3, 5 in 2nd Col. of Page 2 – “The MPR of a 3-D image is the intersection of a cutting plane, coinciding with the reformatted plane, with the image volume. In our case, the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons… All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. 
When a specific brick is needed, the texture memory is checked first for its existence”; Note: coordinates are computed for the reformatted plane based on the location (coordinates) of the stored bricks); identifying stored patches having recorded bounds intersecting the user- selected plane in the targeted region of interest (Fig. 1, Paragraph 3-5 in 2nd Col. of Page 2 – “the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons… The bricks needed for each reformatted plane are determined individually. Likewise, the calculation of polygon vertices and texture coordinates is repeated for each view…All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence. If the brick does not exist, it is copied to the texture memory, overwriting an existing brick”; Note: the bricks/patches that intersect with the cutting plane, which corresponds to the user-selected plane taught by Pyo, are identified. The intersection is shown in Fig. 1); loading only the identified patches from the remote storage (Paragraph 5 in 2nd Col. of Page 2 – “All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence. 
If the brick does not exist, it is copied to the texture memory, overwriting an existing brick”; Note: the needed/identified bricks are loaded from storage), excluding patches previously loaded for the user-selected plane (Paragraph 3 in 2nd Col. of Page 2, Paragraph 2 in 2nd Col. of Page 3 – “As long as the orientation stays fixed, the spatial arrangement of the required bricks within a frame does not change and the calculation of each brick does not need to be repeated. The previously calculated polygon vertices and texture coordinates are also reused… Our policy is to generate the list of needed bricks as before; however, we forgo common brick determination during interaction”; Note: bricks that were already loaded are excluded from loading); and generating the patch-loaded Multi-Planar Reconstruction (MPR) for display (Paragraph 2 in 2nd Col. of Page 2 – “The MPR of a 3-D image is the intersection of a cutting plane, coinciding with the reformatted plane, with the image volume. In our case, the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons. Fig. 2 shows a reformatted plane in which each polygon is colored differently to illustrate the underlying mosaic and the seamless tiling of the polygons. For cine MPR, the rendering of the reformatted plane is repeated for each frame of the sequence”; Note: an MPR is generated using the mosaic of polygons (patch-loading)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Shekhar to parse the volumetric image into patches because “the entire volume is rarely required for most visualization tasks. 
This is especially true for the cine MPR, in which only those voxels either immediately in front of or behind the cutting plane are needed. Volume subdivision provides the “granularity” to reject unnecessary data, thus lowering the data requirement (see Fig. 1) and consequently improving the performance” (Shekhar: Paragraph 2 in 2nd Col. of Page 2). It also would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Shekhar to store the patches with 3D coordinates and define bounds of the patch for the benefit of being able to efficiently identify and retrieve patches for future use. Finally, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Shekhar to exclude patches previously loaded for the user-selected plane because “uploading the needed bricks (common or single-use) for the entire sequence is time-consuming and damaging to maintaining the necessary frame rate” (Shekhar: Paragraph 1 in 2nd Col. of Page 2). Therefore, not having to load the same patches again reduces time and power consumption and increases loading efficiency. Additionally, in general, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the stored patches/bricks of Shekhar could have been substituted for the stored volumetric data of Pyo because both the stored volumetric data and stored patches serve the purpose of being retrieved to generate an MPR. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of being loaded from storage for an MPR. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the stored patches of Shekhar for the stored volumetric data of Pyo according to known methods to yield the predictable result of being loaded from storage for an MPR. Regarding claim 10, Pyo in view of Shekhar teaches the system of claim 9. Pyo further teaches a user interface for selecting the targeted region of interest and plane for display (Paragraph 0045-0046 – “The display 300 displays the corresponding reference image…and the three-dimensional multi-planar image corresponding to the region of interest designated by the user…the input section 400 sends a drawing request signal to the multi-planar image reconstructor 200 in response to the user's drawing request from a mouse or the like.”; Note: the display, the input section, and the mouse together act as a user interface). Regarding claim 12, Pyo in view of Shekhar teaches the system of claim 9. Pyo further teaches the processor overlays at least one of an imaging data, annotations, or measurements onto the MPR (Paragraph 0046 – “The input section 400 provides different drawing tools for the user to designate a region of interest on the corresponding reference image displayed, preferably on the three-dimensional image. Namely, the input section 400 sends a drawing request signal to the multi-planar image reconstructor 200 in response to the user's drawing request from a mouse or the like”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have extended the capability of Pyo of drawing on a 3D image to be able to draw on the MPR because an MPR is a type of 3D image, so the extension would have been within the level of ordinary skill in the art and would yield the same result of annotating an image. 
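To make the mapped Shekhar steps concrete, the following is a minimal illustrative sketch (not code from either cited reference; all names, the 64³ volume, and the 32³ brick size are hypothetical): a volume is parsed into equal, non-overlapping bricks with recorded 3-D bounds, and only the bricks whose recorded bounds intersect the cutting plane are identified for loading.

```python
from itertools import product

def subdivide(dim, brick):
    """Split a volume of size dim=(X, Y, Z) into non-overlapping bricks
    of edge length `brick`, recording each brick's 3-D bounds."""
    bricks = []
    for ix in range(0, dim[0], brick):
        for iy in range(0, dim[1], brick):
            for iz in range(0, dim[2], brick):
                lo = (ix, iy, iz)
                hi = tuple(min(c + brick, d) for c, d in zip(lo, dim))
                bricks.append((lo, hi))
    return bricks

def intersects_plane(lo, hi, p0, n):
    """A brick's recorded bounds intersect the cutting plane (point p0,
    normal n) iff its corner distances to the plane straddle zero."""
    ds = [sum((c - p) * nc for c, p, nc in zip(corner, p0, n))
          for corner in product(*zip(lo, hi))]
    return min(ds) <= 0 <= max(ds)

# 64^3 volume, 32^3 bricks -> 8 bricks; the plane x = 16 cuts 4 of them
bricks = subdivide((64, 64, 64), 32)
needed = [b for b in bricks
          if intersects_plane(*b, (16.0, 0.0, 0.0), (1.0, 0.0, 0.0))]
```

This mirrors the rejection's "granularity" rationale: only half of the bricks are needed for this cutting plane, so half of the data need never be loaded.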
Regarding claim 14, Pyo teaches a method for generating a patch-loaded Multi-Planar Reconstruction (MPR) (Paragraph 0011 – “a three-dimensional multi-planar image reconstruction method, which is to display a multi-planar image of a region of interest in a reference image”), said method comprising the steps of: loading stored volumetric data corresponding to a targeted region of interest (Paragraph 0044 – “The reconstructor 230 acquires image information from the three-dimensional image using the three-dimensional coordinates corresponding to the individual points received from the converter 220 and the viewing vector of a multi-planar image of interest, and reconstructs the image information into a three-dimensional multi-planar image corresponding to a region of interest designated by the user from the volume data”; Note: the identified volume image data is loaded) requested by a user for display in at least one of a plane (Paragraph 0042, 0055 – “the reference image processor 210 processes the volume data stored in the input/storing section 100 to display the three-dimensional reference image from the volume data, and receives a region of interest entered by the user via the input section 400 in the form of straight line, curve, or free-formed curve data… it is checked in step 110 whether or not the user selects the basic MPR. 
If the basic MPR is chosen, the respective points of the straight line presenting a selected plane are sampled and arrange”; Note: the user indicates a region of interest and selects a plane for an MPR) of the generated MPR (Paragraph 0015, 0044 – “when the shape of the displayed section is in a basic multi-planar image mode, sampling sample points at intervals of unit length from a straight line representing a plane selected by the user…The reconstructor 230 acquires image information from the three-dimensional image using the three-dimensional coordinates corresponding to the individual points received from the converter 220 and the viewing vector of a multi-planar image of interest, and reconstructs the image information into a three-dimensional multi-planar image corresponding to a region of interest designated by the user from the volume data”; Note: an MPR is generated for the region of interest using the loaded volumetric data). Pyo does not teach parsing a volumetric image into a plurality of non-overlapping patches with recordation of 3-D coordinates for storage, nor the “stored patch” in the limitation: “loading the stored patch corresponding to a targeted region of interest requested by a user for display in at least one of a plane of the generated load-patched MPR”. However, Shekhar teaches parsing a volumetric image into a plurality of non-overlapping patches (Fig. 1, Paragraph 5 in 1st Col. of Page 2, Paragraph 1 in 2nd Col. of Page 2 – “Volume subdivision divides a 3-D image into smaller 3-D bricks of equal size. For spatiotemporal 4-D images, each frame is subdivided individually and identically”; Note: the volumetric image is parsed/divided into bricks, which are equivalent to patches. Fig. 1 shows the non-overlapping patches/bricks; see screenshot of Fig. 1 above) with recordation of 3-D coordinates for storage (Paragraph 3-5 in 2nd Col. 
of Page 2 – “The MPR of a 3-D image is the intersection of a cutting plane, coinciding with the reformatted plane, with the image volume. In our case, the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons… When a user interacts and changes the orientation of the reformatted plane(s), the calculation of intersected bricks, polygon vertices, and texture coordinates is repeated… All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence. If the brick does not exist, it is copied to the texture memory, overwriting an existing brick. The texture memory thus acts like a cache, whose initialization and updating directly affects the performance”; Note: the bricks/patches are stored. It would be obvious to one of ordinary skill in the art that the bricks are stored along with their coordinates because location is an integral part of defining the bricks and retrieving them efficiently later on) and loading the stored patch corresponding to a targeted region of interest requested by a user for display in at least one of a plane of the generated load-patched MPR (Paragraph 3, 5 in 2nd Col. of Page 2 – “The MPR of a 3-D image is the intersection of a cutting plane, coinciding with the reformatted plane, with the image volume. In our case, the intersection of the cutting plane with the subdivided volume yields a mosaic of polygons. 
Following the calculation of polygon vertices and texture coordinates, the reformatted plane is rendered by texture mapping each of the polygons… All bricks needed for cine MPR must exist in the texture memory before they can be used. Since the brick size is significantly smaller than the size of the texture memory, thousands of bricks can reside in the texture memory simultaneously. When a specific brick is needed, the texture memory is checked first for its existence”; Note: the needed/identified bricks are loaded from storage and are rendered for display of the MPR). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Shekhar to parse the volumetric image into patches because “the entire volume is rarely required for most visualization tasks. This is especially true for the cine MPR, in which only those voxels either immediately in front of or behind the cutting plane are needed. Volume subdivision provides the “granularity” to reject unnecessary data, thus lowering the data requirement (see Fig. 1) and consequently improving the performance” (Shekhar: Paragraph 2 in 2nd Col. of Page 2). It also would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Shekhar to store the patches with 3D coordinates for the benefit of being able to efficiently identify and retrieve patches for future use. Additionally, in general, a person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the stored patches/bricks of Shekhar could have been substituted for the stored volumetric data of Pyo because both the stored volumetric data and stored patches serve the purpose of being retrieved to generate an MPR. 
Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of being loaded from storage for an MPR. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the stored patches of Shekhar for the stored volumetric data of Pyo according to known methods to yield the predictable result of being loaded from storage for an MPR. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar and Chen et al. (US 8849016 B2), hereinafter Chen. Regarding claim 2, Pyo in view of Shekhar teaches the method of claim 1. Pyo does not teach wherein the volumetric image is obtained from at least a cone-beam computed tomography (CBCT) scan. However, Chen teaches wherein the volumetric image is obtained from at least a cone-beam computed tomography (CBCT) scan (Col. 4 lines 18-20 – “the CBCT volume…is acquired in an image data acquisition step 102”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Chen to have a CBCT scan for the benefit of expanding MPR to dentistry, making it easier to examine teeth and make diagnoses. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar and Yanof et al. (US 5734384 A), hereinafter Yanof. Regarding claim 3, Pyo in view of Shekhar teaches the method of claim 1. Pyo further teaches wherein the MPR comprises at least one of an axial view, coronal view, sagittal view, or oblique view (Paragraph 0003 – “The 3-dimensional multi-planar image reconstruction system uses a coronal, sagittal, or axial image on the vertical plane of the whole volume as the reference image, and provides vertical, horizontal, and oblique lines as the presentation interfaces of the reconstructed image”). 
Pyo does not teach wherein the coronal view is considered an oblique view with a 0-degree rotation angle about the Z axis, and the sagittal view is considered an oblique view with a 90-degree rotation angle about the Z axis. However, Yanof teaches wherein the coronal view is considered an oblique view with a 0-degree rotation angle about the Z axis (Col. 2 lines 5-7 – “The operator could then position the cursor on the (x,y) or transverse plane to select a coronal or (x,z) plane”; Note: A coronal plane is equivalent to an (x,z) plane so there is a 0-degree rotation angle about the z-axis), and the sagittal view is considered an oblique view with a 90-degree rotation angle about the Z axis (Col. 2 lines 7-13 – “The operator would then position the cursor on the displayed coronal plane to select a sagittal or (y,z) plane”; Note: A sagittal plane is equivalent to a (y,z) plane so there is a 90-degree rotation angle about the z-axis). It is common knowledge in the art that a coronal plane is an (x,z) plane and that a sagittal plane is a (y,z) plane. Therefore, it is also common knowledge that a coronal plane would have a 0-degree rotation angle about the z-axis and a sagittal plane would have a 90-degree rotation angle about the z-axis. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar, Engel et al. (US 20070229500 A1), and Smith-Casem et al. (US 20130328874 A1), hereinafter Engel and Smith-Casem respectively. Regarding claim 5, Pyo in view of Shekhar teaches the method of claim 1. Pyo does not teach wherein the MPR plane is generated by computing the color value of each planar pixel by interpolating the color of the pixel using any kind of interpolation from those voxel color values. However, Engel teaches wherein the MPR plane is generated by computing the color value of each planar pixel (Paragraph 0036 – “the final output color for a pixel on the screen is computed”). 
Engel also teaches interpolating the color of the pixel using any kind of interpolation from those voxel color values (Paragraph 0046 – “Tri-linear interpolation is performed on the pixels neighboring the sampling point at step 303 to obtain an initial value to be associated with the sampling point, and a transfer function that maps the data value to a color and/or opacity is applied to this value at step 304 to adjust the color and brightness of the MPR plane”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Engel to compute and interpolate color for the benefit of making the visualization easier to view by reducing occlusion and smoothing the scene (Engel: Paragraph 0036, 0044). Furthermore, Pyo modified by Engel does not teach first selecting the multitude of voxels contained on the multitude of loaded patches that are neighboring the pixel's volumetric location. However, Smith-Casem teaches first selecting the multitude of voxels contained on the multitude of loaded patches that are neighboring the pixel's volumetric location (Paragraph 0035 – “For the locations on the plane (e.g., pixel locations), the data from the nearest location in the volume grid (e.g., voxel) is selected”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Smith-Casem to select voxels on the patches based on the pixel’s location for the benefit of generating a reconstruction that is the closest to the input data, making it more accurate and realistic. Claims 7 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar and RadiAnt DICOM Viewer (https://www.radiantviewer.com/dicom-viewer-manual/3dmpr-window-adjustment.html), hereinafter RadiAnt. 
Regarding claim 7, Pyo in view of Shekhar teaches the method of claim 1. Pyo does not teach adjusting the brightness and contrast of the MPR. However, RadiAnt teaches adjusting the brightness and contrast of the MPR (see modified screenshot 1 below). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of RadiAnt to adjust brightness and contrast for the benefit of giving the user more customization options to allow them to view the MPR with more or less brightness and contrast and thus, providing an overall better viewing experience for the user. Modified screenshot 1 (taken from https://www.radiantviewer.com/dicom-viewer-manual/3dmpr-window-adjustment.html) Regarding claim 11, Pyo in view of Shekhar teaches the system of claim 9. Pyo does not teach wherein the processor adjusts at least one of a brightness, or contrast, of the target region of interest. However, RadiAnt teaches adjusting at least one of a brightness, or contrast, of the target region of interest (See modified screenshot 1 above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of RadiAnt to adjust brightness and contrast for the benefit of giving the user more customization options to allow them to view the MPR with more or less brightness, contrast, and intensity. Thus, it would provide an overall better viewing experience for the user. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar, RadiAnt DICOM Viewer 2 (https://www.radiantviewer.com/dicom-viewer-manual/3dmpr-changing-zoom-level-and-position.html), and Chaudhary et al. 
(The technique of using three-dimensional and multiplanar reformatted computed tomography for preoperative planning in pediatric craniovertebral anomalies), hereinafter RadiAnt 2 and Chaudhary respectively. Regarding claim 13, Pyo in view of Shekhar teaches the system of claim 9. Pyo does not teach wherein the processor modifies a view, including crop and zoom in on the target region of interest. However, RadiAnt 2 teaches modifying a view, including zooming in on the target region of interest (See screenshot 3 below). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of RadiAnt 2 to allow zooming for the benefit of giving the user more customization options to allow them to view the target region more or less closely and thus, providing an overall better viewing experience for the user. Furthermore, Pyo modified by RadiAnt 2 does not teach cropping the target region of interest. However, Chaudhary teaches cropping the target region of interest (Page 2 – “The 3D image was cropped using the ‘scissor’ tool such that only the desired anatomy was visible”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Chaudhary to allow cropping for the benefit of giving the user more customization options to allow them to only view the region of interest or remove unwanted areas (Chaudhary: Page 11) and thus, providing an overall better viewing experience for the user. Screenshot 3 (taken from https://www.radiantviewer.com/dicom-viewer-manual/3dmpr-changing-zoom-level-and-position.html) Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar, Padmanabhan (US 20130036193 A1), and Lai et al. 
(Improving Web Browsing on Wireless PDAs Using Thin-Client Computing), hereinafter Padmanabhan and Lai respectively. Regarding claim 15, Pyo in view of Shekhar teaches the method of claim 14. Pyo does not teach wherein the loading is performed by sending HTTP requests to load patches stored as individual resources identified by URI into thin-client browser. However, Padmanabhan teaches wherein the loading is performed by sending HTTP requests to load patches stored as individual resources identified by URI (Paragraph 0046-0047 – “the dynamic image sprite is transmitted to the client device to enable the web browser to efficiently render the web page…generating the dynamic web sprite may include the web service generating Hypertext Transfer Protocol (HTTP) requests to at least one image hosting server to retrieve a plurality of images corresponding to the plurality of image URIs”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Padmanabhan for the benefit of enabling “the web browser to efficiently render the web page by reducing the number of HTTP requests issued by the web browser to retrieve images” (Padmanabhan: Paragraph 0046). Furthermore, Pyo modified by Padmanabhan does not teach a thin-client browser. However, Lai teaches a thin-client browser (Fig. 2 – See screenshot 4 below). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Lai because “thin clients provide better web browsing performance than fat clients across a wide variety of web content, including general consumer content, medical imaging content, and text-based clinical information content widely used in a major academic medical center” (Lai: Page 153, Paragraph 2 of Conclusions and Future Work). 
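The loading behavior at issue in claim 15, fetching each patch as an individual URI-addressed resource while skipping patches already loaded, can be sketched as follows (a hypothetical illustration, not code from any cited reference; the URIs are invented, and the fetch callback stands in for an actual HTTP GET, e.g. via urllib in a thin-client setting):

```python
class PatchLoader:
    """Cache-aware loader: each patch is an individual resource identified
    by a URI; patches already fetched are never requested again (compare
    Shekhar's texture memory acting as a cache)."""

    def __init__(self, fetch):
        self._fetch = fetch   # callback performing the actual HTTP GET
        self._cache = {}      # URI -> patch data already loaded

    def load(self, uris):
        """Fetch only the URIs not already cached; return all patches."""
        for uri in uris:
            if uri not in self._cache:
                self._cache[uri] = self._fetch(uri)
        return [self._cache[u] for u in uris]

# Hypothetical usage: the second request reuses the cached middle patch,
# so only three distinct fetches are issued in total.
calls = []
loader = PatchLoader(lambda uri: calls.append(uri) or "data:" + uri)
loader.load(["/patch/0_0_0", "/patch/32_0_0"])
loader.load(["/patch/32_0_0", "/patch/0_32_0"])
```

Avoiding repeat requests is precisely the rationale the rejection draws from Shekhar: re-uploading bricks for every view is "time-consuming and damaging to maintaining the necessary frame rate."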
Screenshot 4 (taken from Fig. 2 of Lai) Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar, Padmanabhan, and Robertson (Oracle Application Server Concepts), hereinafter Robertson. Regarding claim 16, Pyo in view of Shekhar teaches the method of claim 14. Pyo does not teach receiving, by a middle tier component that includes a Web server and a wireless application server, inbound HTTP requests and executing appropriate logic in response to the requests. However, Padmanabhan teaches receiving, by a middle tier component that includes a Web server and an application server, inbound HTTP requests and executing appropriate logic in response to the requests (Fig. 1 122, 120; Paragraph 0013-0014, 0047 – “an application program interface (API) server 118 and a web server 120 are coupled to, and provide programmatic and web interfaces respectively to, one or more application servers 122…the web server 120 may send and receive data to and from a toolbar or webpage on a browser application (e.g., web client 110) operating on a client machine (e.g., client machine 106). The API server 118 may send and receive data to and from an application (e.g., client application 112 or third party application 116) running on another client machine (e.g., client machine 108 or third party server 114)…generating the dynamic web sprite may include the web service generating Hypertext Transfer Protocol (HTTP) requests to at least one image hosting server to retrieve a plurality of images corresponding to the plurality of image URIs”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Padmanabhan for the benefit of handling HTTP requests more efficiently and thus enhancing the user experience (Padmanabhan: Paragraph 0003). 
Furthermore, Pyo modified by Padmanabhan does not teach a wireless application server. However, Robertson teaches a wireless application server (Page 1 of Chapter 4 – “A component of Oracle Application Server, Oracle Application Server Wireless enables enterprises and service providers to efficiently build, manage, and maintain wireless and voice applications”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Robertson to have wireless application servers for the benefit of making applications more accessible to other devices (Robertson: Page 1 of Chapter 4). Regarding claim 17, Pyo in view of Shekhar, Padmanabhan, and Robertson teaches the method of claim 16. Pyo does not teach wherein the execution includes session management, content management, and integration to a back-end system. However, Robertson teaches wherein the execution includes session management (Page 21 of Chapter 2 – “The session manager configures and manages the session as a singleton within the application”; Note: the Oracle application server has a session manager to manage sessions), content management (Page 35 of Chapter 2 – “Oracle Application Server includes the Oracle Content Management Software Development Kit”), and integration to a back-end system (Page 2 of Chapter 6 – “Oracle provides a suite of integration adapters that implement bi-directional connectivity between applications and various back-end systems”; Note: Oracle’s application server provides integration features to connect the front and back end). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Robertson for the benefit of providing support for the application (Robertson: Page 20 of Chapter 2) and enabling fast and flexible integration (Robertson: Page 2 of Chapter 6). 
Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar and Azad et al. (Combining Appearance-based and Model-based Methods for Real-Time Object Recognition and 6D Localization), hereinafter Azad.

Regarding claim 18, Pyo in view of Shekhar teaches the method of claim 14. Pyo does not teach wherein positioning of targeted regions is based on localized objects in each patch produced by an AI segmentation model. However, Azad teaches wherein positioning of targeted regions (Fig. 5, Page 5342 – “Before a segmented region can be used as input for appearance-based calculations it has to be transformed into a normalized representation…the region has to be normalized in size. This is done by resizing the region to a squared window of 64 × 64 pixels”) is based on localized objects in each patch produced by an AI segmentation model (Fig. 4, Pages 5343-5344 – “Segmented objects whose edges are visible to a sufficient extent are recognized and localized with a rate of 100%”; Note: Color segmentation model assists in localizing the objects). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Azad for the benefit of “efficient acquisition of object representations and real-time recognition and localization” (Azad: Page 5339).

Regarding claim 19, Pyo in view of Shekhar and Azad teaches the method of claim 18. Pyo does not teach using spatial information of objects in the target region of interest in any of the stored patches to generate the MPR. However, Azad teaches using spatial information of objects in the target region of interest in any of the stored patches to generate the MPR (Fig. 10 – A 3D visualization of the localized objects is shown).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Azad for the benefit of further assisting the user in learning more about the objects within the region of interest.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar, Azad, Chen, and MedDream (https://www.youtube.com/watch?v=yGcYFT8hMlA), hereinafter MedDream.

Regarding claim 20, Pyo in view of Shekhar and Azad teaches the method of claim 19. Pyo does not teach wherein the spatial information includes a panoramic reformat of a CBCT, wherein each object is localized on the reformat and selecting the object opens an MPR in the region of interest. However, Chen teaches wherein the spatial information includes a panoramic reformat of a CBCT (Col. 3 lines 34-38, Fig. 5 and 6A), wherein each object is localized on the reformat (Col. 3 lines 37-38, Fig. 6A – The object of interest is localized by a bounding box). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Chen for the benefit of providing a fuller, unfolded view of teeth with reduced distortion (Chen: Col. 5 lines 24-47). Additionally, localizing the teeth with bounding boxes “help to define starting points for algorithmic techniques that detect gaps indicating edges between teeth” (Chen: Col. 8 lines 46-58).

Furthermore, Pyo modified by Chen does not teach selecting the object opens an MPR in the region of interest. However, MedDream teaches selecting the object opens an MPR in the region of interest (Screenshots 5 and 6 below – Teaches selecting MPR button to open the MPR).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of MedDream to use a selection feature to open the MPR. Doing so would make the interface more user-friendly by increasing accessibility to an MPR view.

[Screenshot 5 (taken from https://www.youtube.com/watch?v=yGcYFT8hMlA)]

[Screenshot 6 (taken from https://www.youtube.com/watch?v=yGcYFT8hMlA)]

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar, Azad, and MedDream.

Regarding claim 21, Pyo in view of Shekhar and Azad teaches the method of claim 19. Pyo does not teach wherein the spatial information is a 3D model of the object localized and selecting the object opens an MPR in the region of interest. However, Azad teaches wherein the spatial information is a 3D model of the object localized (Fig. 10 – A 3D visualization of the localized objects is shown). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Azad for the benefit of further assisting the user in learning more about the objects within the region of interest.

Furthermore, Pyo modified by Azad does not teach selecting the object opens an MPR in the region of interest. However, MedDream teaches selecting the object opens an MPR in the region of interest (Screenshots 5 and 6 above – Teaches selecting MPR button to open the MPR). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of MedDream to use a selection feature to open the MPR. Doing so would make the interface more user-friendly by increasing accessibility to an MPR view.
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Pyo in view of Shekhar, Azad, and Chen.

Regarding claim 22, Pyo in view of Shekhar and Azad teaches the method of claim 19. Pyo does not teach wherein the object is at least one of a tooth, a landmark, or maxillofacial organ. However, Chen teaches wherein the object is at least one of a tooth, a landmark, or maxillofacial organ (Fig. 2 – The objects are teeth). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pyo to incorporate the teachings of Chen for the benefit of providing “a better solution for teeth position identification in a three dimensional dental image volume” (Chen: Col. 2 lines 43-45).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Paladini (US 7003175 B2) teaches a method of generating a multi-planar reconstruction by reordering and traversing slices based on object order and interpolating slice pairs. Yamamoto et al. (JP 5205001 B2) teaches a method of displaying MPR based on stored 3D data and generating cross-sectional images. Hadwiger et al. (Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach) teaches a method of processing volumes as image tiles and visualizing the data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE HAU MA whose telephone number is (571)272-2187. The examiner can normally be reached M-Th 7-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE HAU MA/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617

Prosecution Timeline

Apr 19, 2023
Application Filed
Feb 11, 2025
Non-Final Rejection — §103
Mar 04, 2025
Applicant Interview (Telephonic)
Mar 04, 2025
Examiner Interview Summary
Jun 23, 2025
Response Filed
Jul 29, 2025
Final Rejection — §103
Dec 26, 2025
Request for Continued Examination
Jan 17, 2026
Response after Non-Final Action
Mar 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602750
DIFFERENTIABLE EMULATION OF NON-DIFFERENTIABLE IMAGE PROCESSING FOR ADJUSTABLE AND EXPLAINABLE NON-DESTRUCTIVE IMAGE AND VIDEO EDITING
2y 5m to grant Granted Apr 14, 2026
Patent 12597208
BUILDING INFORMATION MODELING SYSTEMS AND METHODS
2y 5m to grant Granted Apr 07, 2026
Patent 12573217
SERVER, METHOD AND COMPUTER PROGRAM FOR GENERATING SPATIAL MODEL FROM PANORAMIC IMAGE
2y 5m to grant Granted Mar 10, 2026
Patent 12561851
HIGH-RESOLUTION IMAGE GENERATION USING DIFFUSION MODELS
2y 5m to grant Granted Feb 24, 2026
Patent 12536734
Dynamic Foveated Point Cloud Rendering System
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+36.4%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
