Prosecution Insights
Last updated: April 19, 2026
Application No. 18/203,106

IMAGE CLIPPING METHOD AND IMAGE CLIPPING SYSTEM

Final Rejection (§103)

Filed: May 30, 2023
Examiner: TRUONG, KARL DUC
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Shanghai United Imaging Healthcare Co. Ltd.
OA Round: 4 (Final)

Grant Probability: 52% (Moderate)
OA Rounds: 5-6
To Grant: 2y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 52% (grants 52% of resolved cases; 15 granted / 29 resolved; -10.3% vs TC avg)
Interview Lift: +31.0% (strong), based on resolved cases with interview vs. without
Avg Prosecution (typical timeline): 2y 7m; 45 currently pending
Total Applications (career history): 74, across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§103: 85.3% (+45.3% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)

Based on career data from 29 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to the amendment filed on March 9, 2026. Claims 1, 4, 6, 10, 13, 15-16, and 19 have been amended. Claim 20 has been cancelled. Claims 1-19 and 21 remain rejected in the application.

Response to Arguments

Applicant's arguments with respect to Claims 1 and 15, filed on March 9, 2026, concern the rejection under 35 U.S.C. § 103 and assert that the prior art does not teach "dividing the original volume data into a plurality of data blocks", "labeling a region of interest in the original volume data by assigning a region label to each voxel in each data block", and "selectively clipping the original volume data to obtain clipped volume data". The proposed amended claim limitations have been fully considered, but they are not persuasive.

In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., "Chino does not involve block-level segmentation of the original volume data", "Chino does not disclose pre-dividing the volume data into a plurality of fixed-size, independently traversable sub-blocks", "Chino cannot realize the global effect of the non-clipping attribute for any arbitrary ROI", "determine the clipping attribute", and "the function of triggering global rules") are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Additionally, "block-level segmentation", "pre-dividing volume data into fixed-size, independently traversable sub-blocks", "global effect", "clipping attribute", and "a function to trigger global rules" are never mentioned in the specification. Therefore, applicant's remark cannot be considered persuasive.

In response to applicant's argument that "the mask regions (MR1/MR2) in Chino are independent of each other, which requires separate settings for the clipping strategy of each region", a recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim.

In response to applicant's argument that the prior art does not teach "dividing the original volume data into a plurality of data blocks", "labeling a region of interest in the original volume data by assigning a region label to each voxel in each data block", and "selectively clipping the original volume data to obtain clipped volume data" as recited in Claim 1, these limitations are taught by Seko and Chino.

In particular, Chino teaches the following. Paragraph [0084] discloses a slab setting screen G4 for slab selection <read on selectively clipping> that includes "a name MN of each region (an example of identification information) in volume data <read on original volume data>, a thumbnail image GS indicating display contents for each region, and buttons B1, B2, and B3 are displayed," where "in the settings using the buttons B1 to B3, a user can arbitrarily perform selection through the UI 120," which results in different cut slabs <read on data blocks> being shown and/or hidden as shown in FIG. 14. Paragraph [0084] further discloses "execution or non-execution of a slab process <read on dividing original volume data> for each region, display or non-display, and a rendering color can be set to be in a user's desired state through the setting using the buttons B1 to B3"; it is being interpreted that the slabs are cut from the original volume data, and data blocks are being interpreted broadly as it is unclear if data blocks are a form of segmented memory. Paragraph [0046] discloses a region processing unit 161 creating a mask region <read on labeling ROI> for 3D volume data based on user input through UI 120, where "each mask region may be colored in a different color <read on assigning region label>, or an opacity value corresponding to a voxel value may be set," which also corresponds to respective slabs <read on data block>.

Additionally, Seko teaches the following. Paragraph [0014] discloses the 3D-ROI including "a clipping plane which functions as a separating surface or a boundary surface," where "the clipping plane is in particular a rendering start surface, but may be any of other surfaces" of the ultrasonic volume data. Therefore, applicant's remark cannot be considered persuasive.

Applicant's arguments with respect to Claims 1 and 15, filed on March 9, 2026, with respect to the rejection under 35 U.S.C.
§ 103, regarding that the prior art does not teach the limitations "dividing the original volume data into a plurality of data blocks", "labeling a region of interest in the original volume data by assigning a region label to each voxel in each data block", "selectively clipping the original volume data to obtain clipped volume data", and "generating a rendered image from the clipped volume data based on at least one of color information and transparency information, wherein the color information and the transparency information are determined by grayscale values of the voxels during rendering", have been fully considered, but are moot in view of the new grounds of rejection. These limitations are now taught by the combination of Seko, Chino, and Kovtun.

Regarding the arguments to Claims 2-14, 17-19, and 21, these claims directly or indirectly depend on independent Claims 1 and 15-16, respectively. Applicant does not argue anything other than independent Claims 1 and 15-16. The limitations in those claims, in conjunction with the cited combination, were previously established as explained above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over Seko et al. (US 20110091086 A1, previously cited), hereinafter referenced as Seko, in view of Chino et al.
(US 20200320696 A1, previously cited), hereinafter referenced as Chino.

Regarding Claim 1, Seko discloses an image clipping method (Seko, [0078]: teaches a creation method of cross-sectional shapes of the three-dimensional region of interest (3D-ROI) as shown in FIG. 2), comprising:

obtaining original volume data (Seko, [0014]: teaches an ultrasonic volume data <read on original volume data> being obtained by transmission and reception of ultrasound to and from a three-dimensional space in a living body; Note: it should be noted that although the prior art focuses on ultrasonic scanning, the applicant states in paragraph [0042]: "the scanning device obtains the original volume data of the object. The scanning device may be, but is not limited to, various imaging devices used in the medical field, such as a computed tomography (CT) device, a magnetic resonance (MR) device, a positron emission computed tomography (PET) device, an ultrasonic imaging device, an X-ray machine, etc.");

[[dividing the original volume data into a plurality of data blocks;]]

[[labeling a region of interest in the original volume data by assigning a region label to each voxel in each data block;]]

[[selectively]] clipping the original volume data [[to obtain clipped volume data]] (Seko, [0014]: teaches the 3D-ROI including "a clipping plane which functions as a separating surface or a boundary surface," where "the clipping plane is in particular a rendering start surface, but may be any of other surfaces" of the ultrasonic volume data), [[the labeled region of interest being defined as a non-clipping object such that only volume data other than the region of interest is clipped; and]]

generating a rendered image based on the [[clipped]] volume data (Seko, [0076]: teaches rendering a 3D image <read on rendered image> on a display based on the 3D ROI, which includes a clipping plane as shown in FIG. 9).
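Purely as an illustrative aid, and not part of the prosecution record, the claim-1 steps quoted above (dividing the volume into data blocks, labeling ROI voxels, and selectively clipping with the ROI as a non-clipping object) can be sketched as a minimal NumPy routine. All names, the block size, and the toy data are assumptions chosen for illustration.

```python
import numpy as np

BLOCK = 4  # hypothetical fixed block edge length, in voxels

def divide_into_blocks(volume, block=BLOCK):
    """Divide the original volume data into (offset, sub-volume) pairs."""
    blocks = []
    for x in range(0, volume.shape[0], block):
        for y in range(0, volume.shape[1], block):
            for z in range(0, volume.shape[2], block):
                blocks.append(((x, y, z),
                               volume[x:x + block, y:y + block, z:z + block]))
    return blocks

def label_roi(volume, roi_mask):
    """Assign a region label to each voxel: 1 inside the ROI, 0 elsewhere."""
    labels = np.zeros(volume.shape, dtype=np.uint8)
    labels[roi_mask] = 1
    return labels

def selectively_clip(volume, labels, clip_mask):
    """Clip only voxels outside the ROI: the labeled region of interest is a
    non-clipping object, so its voxels survive even inside the clip region."""
    clipped = volume.copy()
    clipped[clip_mask & (labels == 0)] = 0.0
    return clipped

# Toy data: an 8x8x8 volume, a small ROI, and a clip region covering half of it.
vol = np.ones((8, 8, 8), dtype=np.float32)
roi = np.zeros(vol.shape, dtype=bool); roi[2:4, 2:4, 2:4] = True
clip = np.zeros(vol.shape, dtype=bool); clip[:, :, :4] = True

blocks = divide_into_blocks(vol)           # 8 blocks of 4x4x4 voxels
labels = label_roi(vol, roi)
out = selectively_clip(vol, labels, clip)
assert out[3, 3, 3] == 1.0  # ROI voxel inside the clip region is preserved
assert out[0, 0, 0] == 0.0  # non-ROI voxel inside the clip region is clipped
```

The rendered image would then be produced from `out`; the dispute above is whether Seko and Chino together reach this block division and label-gated clipping.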
However, Seko does not expressly disclose dividing the original volume data into a plurality of data blocks; labeling a region of interest in the original volume data by assigning a region label to each voxel in each data block; selectively clipping the original volume data to obtain clipped volume data, the labeled region of interest being defined as a non-clipping object such that only volume data other than the region of interest is clipped; and generating a rendered image based on the clipped volume data.

Chino discloses dividing the original volume data into a plurality of data blocks (Chino, [0084]: teaches a slab setting screen G4 that includes "a name MN of each region (an example of identification information) in volume data <read on original volume data>, a thumbnail image GS indicating display contents for each region, and buttons B1, B2, and B3 are displayed," where "in the settings using the buttons B1 to B3, a user can arbitrarily perform selection through the UI 120," which results in different cut slabs <read on data blocks> being shown and/or hidden as shown in FIG. 14; [0084]: further teaches "execution or non-execution of a slab process <read on dividing original volume data> for each region, display or non-display, and a rendering color can be set to be in a user's desired state through the setting using the buttons B1 to B3"; Note: it is being interpreted that the slabs are cut from the original volume data; additionally, data blocks are being interpreted broadly as it is unclear if data blocks are a form of segmented memory);

labeling a region of interest in the original volume data by assigning a region label to each voxel in each data block (Chino, [0046]: teaches a region processing unit 161 creating a mask region <read on labeling ROI> for 3D volume data based on user input through UI 120, where "each mask region may be colored in a different color <read on assigning region label>, or an opacity value corresponding to a voxel value may be set," which also corresponds to respective slabs <read on data block>);

selectively clipping the original volume data to obtain clipped volume data (Chino, [0084]: teaches a user performing slab selection <read on selectively clipping> through UI 120 for the slab process on a plurality of regions; [0089]: teaches the slab process being performed on a liver region ML and tumor region MT, where the portions of the regions <read on obtained clipped volume data> are displayed), the labeled region of interest being defined as a non-clipping object such that only volume data other than the region of interest is clipped (Chino, [0065]: teaches the system performing a slab process in a mask region (i.e., MR1) of a volume <read on volume data>, where the corresponding voxel data is deleted during the slab process; [0065]: further teaches a separate mask region (i.e., MR2) <read on labeled ROI> on which the slab process is not performed, where the corresponding voxel data is not deleted <read on non-clipping object> as shown in FIG. 5); and

generating a rendered image based on the clipped volume data (Chino, FIG. 15 teaches a rendering process on portions <read on clipped volume data> of a liver region and tumor region obtained through cutting an entirety of the liver artery region, portal vein region, and vein region, where the rendered image is displayed).

Chino is analogous art with respect to Seko because they are from the same field of endeavor, namely generating clipping planes for volumetric data of anatomical regions. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a user interface that utilizes both a region processing unit and a slab control unit for masking and slab processes, respectively, as taught by Chino into the teaching of Seko. The suggestion for doing so is that it would allow the anatomical 3D model to have multiple identified areas that allow for user-specific clipping, resulting in one or more regions being clipped or not, thereby yielding improved results. Therefore, it would have been obvious to combine Chino with Seko.

Regarding Claim 15, it recites limitations that are similar in scope to those of Claim 1, but in a non-transitory computer readable storage medium. As shown in the rejection, the combination of Seko and Chino discloses the limitations of Claim 1.
Additionally, Seko discloses a non-transitory computer readable storage medium storing one or more programs (Seko, [0057]: teaches a controller 36 being formed from a CPU and an operation program, where "a storage unit 38 <read on non-transitory computer readable storage medium> is connected to the controller 36, and the figure image forming units 30, 32, and 34 are realized as functions of software"), the one or more programs comprising instructions, which when executed by one or more processors, cause the one or more processors to (Seko, [0057]: teaches a CPU and an operation program forming controller 36, where "the volume rendering unit 20, the tomographic image forming units 24, 26, and 28, and the figure image forming units 30, 32, and 34 are realized as functions <read on instructions> of software"):…

Thus, Claim 15 is met by Seko according to the mapping presented in the rejection of Claim 1, given that the image clipping method corresponds to a non-transitory computer readable storage medium.

Regarding Claim 2, the combination of Seko and Chino discloses the image clipping method of Claim 1. Seko does not expressly disclose the limitations of Claim 2; however, Chino discloses that the labeling the region of interest in the original volume data comprises assigning a region label to each voxel of volume data of the region of interest (Chino, [0046]: teaches a region processing unit 161 creating a mask region <read on labeling ROI> for 3D volume data based on user input through UI 120, where "each mask region may be colored in a different color <read on assigning region label>, or an opacity value corresponding to a voxel value may be set").

Chino is analogous art with respect to Seko because they are from the same field of endeavor, namely generating clipping planes for volumetric data of anatomical regions.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a user interface that utilizes both a region processing unit and a slab control unit for masking and slab processes, respectively, as taught by Chino into the teaching of Seko. The suggestion for doing so is that it would allow the anatomical 3D model to have multiple identified areas that allow for user-specific clipping, resulting in one or more regions being clipped or not, thereby yielding improved results. Therefore, it would have been obvious to combine Chino with Seko.

Regarding Claim 3, the combination of Seko and Chino discloses the image clipping method of Claim 2. Additionally, Seko further discloses wherein the original volume data includes volume data of at least one type of tissue (Seko, [0014]: teaches "the clipping plane is a surface targeted to spatially separate the target tissue <read on type of tissue> for which an image <read on volume data> is to be formed and a non-target tissue for which the image is not to be formed"), [[the labeling the region of interest in the original volume data comprising labeling a tissue of interest in the at least one type of tissue,]] [[the labeling the tissue of interest comprising assigning a tissue label to each voxel of volume data of the tissue of interest.]]

However, Seko does not expressly disclose the labeling the region of interest in the original volume data comprising labeling a tissue of interest in the at least one type of tissue, the labeling the tissue of interest comprising assigning a tissue label to each voxel of volume data of the tissue of interest.
Chino discloses the labeling the region of interest in the original volume data comprising labeling a tissue of interest in the at least one type of tissue (Chino, [0046]: teaches a region processing unit 161 creating a mask region <read on labeling tissue of interest> for 3D volume data based on user input through UI 120, where "each mask region <read on types of tissue> may be colored in a different color, or an opacity value corresponding to a voxel value may be set"), the labeling the tissue of interest comprising assigning a tissue label to each voxel of volume data of the tissue of interest (Chino, [0046]: teaches a region processing unit 161 creating a mask region for 3D volume data based on user input through UI 120, where "each mask region may be colored in a different color <read on assigning tissue label>, or an opacity value corresponding to a voxel value may be set").

Chino is analogous art with respect to Seko because they are from the same field of endeavor, namely generating clipping planes for volumetric data of anatomical regions. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a user interface that utilizes both a region processing unit and a slab control unit for masking and slab processes, respectively, as taught by Chino into the teaching of Seko. The suggestion for doing so is that it would allow the anatomical 3D model to have multiple identified areas that allow for user-specific clipping, resulting in one or more regions being clipped or not, thereby yielding improved results. Therefore, it would have been obvious to combine Chino with Seko.

Regarding Claim 4, the combination of Seko and Chino discloses the image clipping method of Claim 2.
Seko does not expressly disclose the limitations of Claim 4; however, Chino discloses wherein the selectively clipping the original volume data comprises clipping the original volume data using a clipping tool (Chino, [0045]: teaches a user setting a slab region/surface through UI 120 <read on clipping tool> to cut along a volume data slab surface).

Chino is analogous art with respect to Seko because they are from the same field of endeavor, namely generating clipping planes for volumetric data of anatomical regions. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a user interface that utilizes both a region processing unit and a slab control unit for masking and slab processes, respectively, as taught by Chino into the teaching of Seko. The suggestion for doing so is that it would allow the anatomical 3D model to have multiple identified areas that allow for user-specific clipping, resulting in one or more regions being clipped or not, thereby yielding improved results. Therefore, it would have been obvious to combine Chino with Seko.

Regarding Claim 5, the combination of Seko and Chino discloses the image clipping method of Claim 4. Additionally, Seko further discloses wherein the clipping tool includes a clipping box having at least one parallel clipping surface parallel to an outer surface of the original volume data or at least one bevel clipping surface intersecting at least three outer surfaces of the original volume data (Seko, [0062]: teaches "a concave clipping plane 60 <read on bevel clipping surface intersection> in the three-dimensional region of interest V can be deformed and inclined," where "the clipping plane 60 corresponds to the rendering start surface" and "the surface on the side opposite the clipping plane 60 is an end surface 62" as shown in FIG. 3; FIG. 8 teaches clipping plane 116 after inclination, where the clipping plane intersects three surfaces <read on three outer surfaces>).

Regarding Claim 6, the combination of Seko and Chino discloses the image clipping method of Claim 4. Additionally, Seko further discloses wherein the generating the rendered image based on the clipped volume data comprises: determining a front surface and a back surface of the clipped volume data based on a configuration of the clipping tool and the region labels of the region of interest (Seko, [0062]: teaches "the clipping plane 60 corresponds to the rendering start surface <read on front surface>," where "the surface on the side opposite the clipping plane 60 is an end surface 62 <read on back surface>, which is represented as a bottom plane in FIG. 3"; [0058]: teaches "reference numeral 54 represents a tissue for which an image is to be formed <read on region label>" as shown in FIG. 2); and generating the rendered image based on the front surface and the back surface (Seko, [0062]: teaches a 3D image being constructed on screen 68, which includes a clipping plane, a start plane, and an end plane).

Regarding Claim 7, the combination of Seko and Chino discloses the image clipping method of Claim 6. Additionally, Seko further discloses wherein the determining the front surface and the back surface of the clipped volume data comprises: obtaining the first intersection point of the clipped volume data and each ray projected to the original volume data (Seko, [0062]: teaches "a plurality of rays 64 are set in parallel to each other along the Y direction" of the data volume; FIG. 3 teaches a starting clipping point <read on first intersection point> for the plurality of rays 64) and defining the front surface based on multiple first intersection points corresponding to multiple rays (Seko, FIG. 3 teaches a plurality of starting clipping points <read on multiple first intersection points> for each ray 64, which corresponds to the clipping plane, which itself corresponds to the starting surface); and obtaining the last intersection point of the clipped volume data and each ray projected to the original volume data (Seko, FIG. 3 teaches the ending points of the end surface, where the end points correspond to rays 64) and defining the back surface based on multiple last intersection points corresponding to the multiple rays (Seko, FIG. 3 teaches the ending points of the end surface for each corresponding ray 64, which shows both the start and end surfaces).

Claims 8-9 and 11-12 are rejected under 35 U.S.C. § 103 as being unpatentable over Seko et al. (US 20110091086 A1, previously cited), hereinafter referenced as Seko, in view of Chino et al. (US 20200320696 A1, previously cited), hereinafter referenced as Chino, as applied to Claim 7 above, and further in view of Petkov (US 20180225862 A1, previously cited), hereinafter referenced as Petkov.

Regarding Claim 8, the combination of Seko and Chino discloses the image clipping method of Claim 7. Additionally, Seko further discloses wherein the obtaining the first intersection point of the clipped volume data and each ray projected to the clipped volume data comprises: obtaining an incident point of the ray projected to the original volume data (Seko, [0062]: teaches "a plurality of rays 64 are set in parallel to each other along the Y direction" of the data volume; FIG. 3 teaches a starting clipping point <read on incident point> for the plurality of rays 64; Note: it should be noted that the terms "first intersection point" and "incident point" are being interpreted as the same), querying data information in the original volume data from the incident point until the first region label of the region of interest is encountered (Seko, [0061]: teaches the 3D image 52 is formed "using data <read on querying data information> belonging to the three-dimensional region of interest," where "in the rendering process, a plurality of voxel data sets are sampled on each ray" and "each voxel data set is formed by referring to a plurality of actual data sets existing around sample points and through interpolation"), and recording a voxel coordinate corresponding to the first region label as a coordinate of the first intersection point corresponding to the ray (Seko, [0063]: teaches "a plurality of parameter sets stored <read on recording voxel coordinate> in the storage unit shown in FIG. 1," where "when one of the parameter sets, parameter set 74, is considered, the parameter set 74 includes a coordinate (X0, Y0, Z0) of the origin C of the three-dimensional region of interest, a size (XW, YW, ZW) of the three-dimensional region of interest, an amount of offset in the Y direction (Y+), a height h of the clipping plane, and inclination angles θX and θZ of the clipping plane" as shown in FIG. 4; FIG. 3 teaches a starting clipping point <read on first intersection point> for the plurality of rays 64, where the starting point corresponds to ray 64); and [[defining an intersection point of the ray and a clipping surface of the clipped volume data as the first intersection point corresponding to the ray in response that no region label of the region of interest is encountered while traversing data information between the incident point and the intersection point of the clipping surface.]]

However, the combination of Seko and Chino does not expressly disclose defining an intersection point of the ray and a clipping surface of the clipped volume data as the first intersection point corresponding to the ray in response that no region label of the region of interest is encountered while traversing data information between the incident point and the intersection point of the clipping surface.

Petkov discloses defining an intersection point of the ray and a clipping surface of the clipped volume data as the first intersection point corresponding to the ray in response that no region label of the region of interest is encountered while traversing data information between the incident point and the intersection point of the clipping surface (Petkov, [0048]: teaches checking for an intersection <read on defining intersection point as first intersection point> along ray 26 <read on data information between incident and intersection points> as shown in FIG. 2; [0069]: teaches positioning "a clipping plane <read on defining clipping surface> or planes so that different parts of the patient volume are masked," where "the rendering shows the internal region from the scan data not clipped or that intersects the clipping plane").

Petkov is analogous art with respect to Seko, in view of Chino because they are from the same field of endeavor, namely scanning body tissue.
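As an illustrative aside, and not part of the Office Action itself, the claim-8 logic mapped above (march each ray from its incident point; fall back to the clipping-surface intersection when no region label is met first) might be sketched as follows. The function name and the 1-D sampling of labels along a single ray are assumptions for illustration.

```python
def first_intersection(labels_along_ray, clip_index):
    """Return the step index of the first intersection point for one ray.

    labels_along_ray: region labels sampled at integer steps along the ray,
    starting at the incident point (step 0).
    clip_index: the step at which the ray meets the clipping surface.
    """
    for step in range(clip_index):
        if labels_along_ray[step] != 0:  # first ROI region label encountered
            return step
    # No region label met before the clipping surface: the clipping-surface
    # intersection itself becomes the first intersection point.
    return clip_index

assert first_intersection([0, 0, 1, 1, 0], clip_index=4) == 2
assert first_intersection([0, 0, 0, 0, 0], clip_index=4) == 4
```

The last-intersection variant argued for Claim 11 is symmetric, marching from the exit point instead of the incident point.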
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to check for an intersection along a ray as taught by Petkov into the teaching of Seko, in view of Chino. The suggestion for doing so is that it would allow for detecting internal objects that intersect the ray, thereby allowing for segmenting internal objects of different types. Therefore, it would have been obvious to combine Petkov with Seko, in view of Chino.

Regarding Claim 9, the combination of Seko, Chino, and Petkov discloses the image clipping method of Claim 8. Additionally, Seko further discloses wherein obtaining the first intersection point of the clipped volume data and each ray projected to the clipped volume data comprises: defining the incident point as the first intersection point corresponding to the ray in response that a surface of the original volume data corresponding to the incident point is an unclipping surface (Seko, FIG. 3 teaches a plurality of starting clipping points <read on defining incident point as first intersection point> for each ray 64, which corresponds to the clipping plane, which itself corresponds to the starting surface; [0014]: teaches "the clipping plane is a surface targeted to spatially separate the target tissue <read on unclipping surface> for which an image is to be formed and a non-target tissue for which the image is not to be formed").

Regarding Claim 11, the combination of Seko and Chino discloses the image clipping method of Claim 7. Additionally, Seko further discloses wherein the obtaining the last intersection point of the clipped volume data and each ray projected to the original volume data comprises: obtaining an exit point of the ray projected to the original volume data (Seko, [0062]: teaches "a plurality of rays 64 are set in parallel to each other along the Y direction" of the data volume; FIG. 3 teaches an ending clipping point <read on exit point> for the plurality of rays 64), querying data information in the original volume data from the exit point until the first region label of the region of interest is encountered (Seko, [0061]: teaches the 3D image 52 is formed "using data <read on querying data information> belonging to the three-dimensional region of interest," where "in the rendering process, a plurality of voxel data sets are sampled on each ray" and "each voxel data set is formed by referring to a plurality of actual data sets existing around sample points and through interpolation"), and recording a voxel coordinate corresponding to the first region label as a coordinate of the last intersection point corresponding to the ray (Seko, [0063]: teaches "a plurality of parameter sets stored <read on recording voxel coordinate> in the storage unit shown in FIG. 1," where "when one of the parameter sets, parameter set 74, is considered, the parameter set 74 includes a coordinate (X0, Y0, Z0) of the origin C of the three-dimensional region of interest, a size (XW, YW, ZW) of the three-dimensional region of interest, an amount of offset in the Y direction (Y+), a height h of the clipping plane, and inclination angles θX and θZ of the clipping plane" as shown in FIG. 4; FIG. 3 teaches an ending clipping point <read on last intersection point> for the plurality of rays 64, where the ending point corresponds to ray 64); and [[defining an intersection point of the ray and a clipping surface of the clipped volume data as the last intersection point corresponding to the ray in response that no region label of the region of interest is encountered while traversing data information between the exit point and the intersection point of the clipping surface.]]

However, the combination of Seko and Chino does not expressly disclose defining an intersection point of the ray and a clipping surface of the clipped volume data as the last intersection point corresponding to the ray in response that no region label of the region of interest is encountered while traversing data information between the exit point and the intersection point of the clipping surface.

Petkov discloses defining an intersection point of the ray and a clipping surface of the clipped volume data as the last intersection point corresponding to the ray in response that no region label of the region of interest is encountered while traversing data information between the exit point and the intersection point of the clipping surface (Petkov, [0048]: teaches checking for an intersection <read on defining intersection point as last intersection point> along ray 26 <read on data information between incident and intersection points> as shown in FIG. 2; [0069]: teaches positioning "a clipping plane <read on defining clipping surface> or planes so that different parts of the patient volume are masked," where "the rendering shows the internal region from the scan data not clipped or that intersects the clipping plane").

Petkov is analogous art with respect to Seko, in view of Chino because they are from the same field of endeavor, namely scanning body tissue.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to check for an intersection along a ray as taught by Petkov into the teaching of Seko, in view of Chino. The suggestion for doing so would allow for detecting internal objects that intersect the ray, thereby allowing for segmenting internal objects of different types. Therefore, it would have been obvious to combine Petkov with Seko, in view of Chino. Regarding Claim 12, the combination of Seko, Chino, and Petkov discloses the image clipping method of Claim 11. Additionally, Seko further discloses wherein the obtaining the last intersection point of the clipped volume data and each ray projected to the original volume data comprises: defining the exit point as the last intersection point corresponding to the ray in response that a surface of the original volume data corresponding to the exit point of the ray is an unclipping surface (Seko, [0071]: teaches clipping plane 112, where "when the representative point P is determined by the parameter h as described above, a curve 104 is created as a basic line similar to a backbone, through a spline interpolation calculation based on the representative point P and two endpoints P1 and P2 <read on defining exit point as last intersection point>" and "the end point P1 is the point a12 described above and the end point P2 is the point a34 described above" as shown in FIG. 6; [0014]: teaches "the clipping plane is a surface targeted to spatially separate the target tissue <read on unclipping surface> for which an image is to be formed and a non-target tissue for which the image is not to be formed"). Claims 10 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Seko et al. (US 20110091086 A1, previously cited), hereinafter referenced as Seko, in view of Chino et al. 
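To make the disputed Claim 11/12 limitation concrete, the traversal the claims recite — querying samples along a ray from the exit point until the first ROI label is found, and otherwise falling back to the ray's intersection with the clipping surface — might be sketched as follows. This is a 1-D simplification with illustrative names, not an implementation from Seko, Chino, or Petkov:

```python
def last_intersection(labels, exit_index, clip_index, roi_label):
    """Walk voxel samples along a ray from the exit point toward the
    clipping surface.  Return the sample index of the first voxel that
    carries the ROI label (the claimed "last intersection point"), or
    the clipping-surface intersection if no ROI label is encountered.

    labels     -- per-voxel region labels along the ray (1-D simplification)
    exit_index -- sample index where the ray leaves the original volume
    clip_index -- sample index where the ray meets the clipping surface
    roi_label  -- the label assigned to the region of interest
    """
    step = -1 if clip_index < exit_index else 1
    for i in range(exit_index, clip_index + step, step):
        if labels[i] == roi_label:
            return i          # first ROI label: record as last intersection
    return clip_index         # no ROI label found: clipping-surface fallback
```

For example, with labels `[0, 0, 2, 2, 0, 0]`, an exit point at index 5, a clipping surface at index 0, and ROI label `2`, the walk stops at index 3; with an all-zero label run it returns the clipping-surface index 0.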
(US 20200320696 A1, previously cited), hereinafter referenced as Chino, and further in view of Petkov (US 20180225862 A1, previously cited), hereinafter referenced as Petkov as applied to Claims 8 and 11 above respectively, and further in view of Sirohey et al. (US 20070014480 A1, previously cited), hereinafter referenced as Sirohey. Regarding Claim 10, the combination of Seko, Chino, and Petkov discloses the image clipping method of Claim 8. Additionally, Seko further discloses wherein the querying data information in the original volume data from the incident point until the first region label of the region of interest is encountered (Seko, [0061]: teaches the 3D image 52 is formed "using data <read on querying data information> belonging to the three-dimensional region of interest," where "in the rendering process, a plurality of voxel data sets are sampled on each ray" and "each voxel data set is formed by referring to a plurality of actual data sets existing around sample points and through interpolation") comprises: [[traversing the data blocks of the original volume data from the incident point until the first data block comprising any of the region labels of the region of interest is encountered; and]] [[traversing voxels in the first data block comprising any of the region labels until the first region label of the region of interest is encountered.]] However, the combination of Seko, Chino, and Petkov does not expressly disclose traversing the data blocks of the original volume data from the incident point until the first data block comprising any of the region labels of the region of interest is encountered; and traversing voxels in the first data block comprising any of the region labels until the first region label of the region of interest is encountered. 
Sirohey discloses traversing the data blocks of the original volume data from the incident point until the first data block comprising any of the region labels of the region of interest is encountered (Sirohey, [0028]: teaches volume data 100 being stored "as a series of values representative of voxels in data blocks <read on traversing data blocks of original volume data>," where "the volume 100 is logically divided into eight subsets of data, indicated by letters a-h" as shown in FIG. 3); and traversing voxels in the first data block comprising any of the region labels until the first region label of the region of interest is encountered (Sirohey, [0049]: teaches data being stored as volume blocks of data "which are concatenated with one another to form a multi-resolution data stream, progressively getting to full resolution"; [0050]: teaches an example data structure of a data block, where "after the file header, the first data element of the file may be the first voxel of the first slice, row ordered to the last voxel of the first slice <read on traversing voxels in first data block>"). Sirohey is analogous art with respect to the combination of Seko, Chino, and Petkov because they are from the same field of endeavor, namely scanning tissue to form a tomographic image. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement data blocks to store subsets of voxels as taught by Sirohey into the combined teaching of Seko, Chino, and Petkov. The suggestion for doing so would allow for more efficient data storage. Therefore, it would have been obvious to combine Sirohey with the combination of Seko, Chino, and Petkov. Regarding Claim 13, the combination of Seko, Chino, and Petkov discloses the image clipping method of Claim 11.
Additionally, Seko further discloses wherein the querying data information in the original volume data from the exit point until the first region label of the region of interest is encountered (Seko, [0061]: teaches the 3D image 52 is formed "using data <read on querying data information> belonging to the three-dimensional region of interest," where "in the rendering process, a plurality of voxel data sets are sampled on each ray" and "each voxel data set is formed by referring to a plurality of actual data sets existing around sample points and through interpolation") comprises: [[traversing the data blocks of the original volume data from the exit point until the first data block comprising any of the region labels of the region of interest is encountered; and]] [[traversing voxels in the first data block comprising any of the region labels until the first region label of the region of interest is encountered.]] However, the combination of Seko, Chino, and Petkov does not expressly disclose traversing the data blocks of the original volume data from the exit point until the first data block comprising any of the region labels of the region of interest is encountered; and traversing voxels in the first data block comprising any of the region labels until the first region label of the region of interest is encountered. Sirohey discloses traversing the data blocks of the original volume data from the exit point until the first data block comprising any of the region labels of the region of interest is encountered (Sirohey, [0028]: teaches volume data 100 being stored "as a series of values representative of voxels in data blocks <read on traversing data blocks of original volume data>," where "the volume 100 is logically divided into eight subsets of data, indicated by letters a-h" as shown in FIG. 
3); and traversing voxels in the first data block comprising any of the region labels until the first region label of the region of interest is encountered (Sirohey, [0049]: teaches data being stored as volume blocks of data "which are concatenated with one another to form a multi-resolution data stream, progressively getting to full resolution"; [0050]: teaches an example data structure of a data block, where "after the file header, the first data element of the file may be the first voxel of the first slice, row ordered to the last voxel of the first slice <read on traversing voxels in first data block>"; [0077]: teaches acquiring tracer datasets of specific ROIs <read on traversing voxels until first region label of region of interest is encountered>). Sirohey is analogous art with respect to the combination of Seko, Chino, and Petkov because they are from the same field of endeavor, namely scanning tissue to form a tomographic image. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement data blocks to store subsets of voxels as taught by Sirohey into the combined teaching of Seko, Chino, and Petkov. The suggestion for doing so would allow for more efficient data storage. Therefore, it would have been obvious to combine Sirohey with the combination of Seko, Chino, and Petkov. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Seko et al. (US 20110091086 A1, previously cited), hereinafter referenced as Seko, in view of Chino et al. (US 20200320696 A1, previously cited), hereinafter referenced as Chino as applied to Claim 6 above respectively, and further in view of Salomie (US 20060290695 A1, previously cited). Regarding Claim 14, the combination of Seko and Chino discloses the image clipping method of Claim 6. 
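The two-level traversal recited in Claims 10 and 13 — skipping whole data blocks until one containing an ROI label is reached, then scanning voxels only inside that block — can be illustrated with a small sketch. The function name and the on-the-fly block test are hypothetical; a real implementation would likely precompute a per-block label summary so the block-level test is O(1):

```python
def find_first_roi_voxel(blocks, roi_labels):
    """Two-level traversal: skip data blocks whose labels contain no ROI
    label, then scan voxels only inside the first qualifying block.

    blocks     -- data blocks, each a list of voxel labels, in the order
                  the ray visits them
    roi_labels -- set of labels belonging to the region of interest
    Returns (block_index, voxel_index) of the first ROI voxel, or None.
    """
    for b, voxels in enumerate(blocks):
        if roi_labels.isdisjoint(voxels):
            continue                      # whole block skipped in one test
        for v, label in enumerate(voxels):
            if label in roi_labels:
                return b, v               # first ROI label encountered
    return None                           # no ROI label along this ray
```

The point of the block level is exactly the efficiency rationale the rejection attributes to Sirohey: most blocks are rejected without visiting any of their voxels.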
Additionally, Seko further discloses wherein the generating the rendered image based on the front surface and the back surface comprises: traversing, along a ray path, voxels on the ray path from the front surface (Seko, [0062]: teaches "the amount of output light at the time of completion of the voxel calculation <read on voxels on the ray path> becomes the pixel value," where "the pixel value of a pixel 70 corresponding to a ray 64 <read on ray path> on a virtual screen 68 is determined as a final amount of output light determined for the ray 64" as shown in FIG. 3; FIG. 3 teaches rays 64 intersecting the starting surface 60), and [[setting eligible voxels to be visible; and]] [[generating the rendered image based on at least one of color information and transparency information of the visible voxels on the ray path.]] However, the combination of Seko and Chino does not expressly disclose setting eligible voxels to be visible; and generating the rendered image based on at least one of color information and transparency information of the visible voxels on the ray path. Salomie discloses setting eligible voxels to be visible (Salomie, [0451]: teaches "the voxels <read on eligible voxels> are displayed as squares with their centers marked by dots 47," where "the path followed by the border-tracking algorithm is visible as a dotted line 48 with arrows showing the tracking direction" as shown in FIGS. 39A-39D); and generating the rendered image based on at least one of color information and transparency information of the visible voxels on the ray path (Salomie, [0638]: teaches cross-section 59 being filled with "gray color <read on color information> and in transparency mode <read on transparency information>," which "is overlaid on voxel layer 60"). Salomie is analogous art with respect to Seko, in view of Chino because they are from the same field of endeavor, namely scanning tissue for tomography.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to identify voxels of cross-sections with transparency mode as taught by Salomie into the teaching of Seko, in view of Chino. The suggestion for doing so would allow for viewing select internal objects with relation to other internal objects in a scanned body, thereby simplifying identification and classification of tissue/organs. Therefore, it would have been obvious to combine Salomie with Seko, in view of Chino. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Seko et al. (US 20110091086 A1, previously cited), hereinafter referenced as Seko, in view of Chino et al. (US 20200320696 A1, previously cited), hereinafter referenced as Chino as applied to Claim 1 above respectively, and further in view of McDermott et al. (US 20130329978 A1, previously cited), hereinafter referenced as McDermott. Regarding Claim 21, the combination of Seko and Chino discloses the image clipping method of Claim 1. The combination of Seko and Chino does not expressly disclose the limitations of Claim 21; however, McDermott discloses wherein the clipping the original volume data comprises clipping the original volume data using a clipping box having at least one planar surface (McDermott, [0044]: teaches a volumetric clipping shape defining a sub-volume, where a volumetric clipping shape, such as a rectangular prism, is represented by boxes 40 in the planar images <read on planar surface> indicating a sub-volume as shown in FIG. 3; FIG. 3 teaches a cropping tool to apply a volumetric clipping shape to a scanned volume <read on clipping original volume data> to indicate a sub-volume). McDermott is analogous art with respect to Seko, in view of Chino because they are from the same field of endeavor, namely clipping volumetric image data.
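For reference, the Claim 21 limitation mapped to McDermott — clipping with a box having planar surfaces — reduces, for an axis-aligned rectangular prism, to a per-voxel bounds test against the box's two corner points. The sketch below is illustrative only (names and the axis-aligned assumption are not from McDermott):

```python
def clip_with_box(voxels, box_min, box_max):
    """Keep only voxels whose coordinates fall inside an axis-aligned
    clipping box; each face of the box is one planar clipping surface.

    voxels  -- iterable of (x, y, z) voxel coordinates
    box_min -- (x, y, z) minimum corner of the box
    box_max -- (x, y, z) maximum corner of the box
    """
    return [
        p for p in voxels
        if all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))
    ]
```

An arbitrarily oriented box would replace the per-axis comparison with a signed-distance test against each face's plane, but the sub-volume selection is the same idea.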
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a cropping tool for volumetric clipping shape application of a scanned volume as taught by McDermott into the teaching of Seko, in view of Chino. The suggestion for doing so would offer the user multiple choices regarding where to identify a sub-volume, thereby offering a flexible image viewing system. Therefore, it would have been obvious to combine McDermott with Seko, in view of Chino. Claims 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Seko et al. (US 20110091086 A1, previously cited), hereinafter referenced as Seko, in view of Chino et al. (US 20200320696 A1, previously cited), hereinafter referenced as Chino, and further in view of Kovtun et al. (US 20200054398 A1), hereinafter referenced as Kovtun. Regarding Claim 16, Seko discloses a computer system (Seko, [0049]: teaches an ultrasonic diagnostic apparatus <read on computer system>, which "is used in the medical field, and has a function to form a three-dimensional image of a tissue within a living body by transmitting and receiving ultrasound") comprising a memory having instructions stored thereon (Seko, [0057]: teaches a controller 36 being formed from a CPU and an operation program, where "a storage unit 38 <read on memory> is connected to the controller 36, and the figure image forming units 30, 32, and 34 are realized as functions <read on instructions> of software") and a processor, wherein when executing the instructions, the processor is configured to perform an image clipping method (Seko, [0057]: teaches a controller 36 being formed from a CPU <read on processor> and an operation program, where "a storage unit 38 is connected to the controller 36, and the figure image forming units 30, 32, and 34 are realized as functions <read on instructions> of software <read on image clipping method>"), the method comprising: obtaining original volume 
data (Seko, [0014]: teaches an ultrasonic volume data <read on original volume data> being obtained by transmission and reception of ultrasound to and from a three-dimensional space in a living body); [[dividing the original volume data into a plurality of data blocks;]] [[labeling a region of interest in the original volume data by assigning a region label to each voxel in each data block;]] [[selectively]] clipping the original volume data [[to obtain clipped volume data]] (Seko, [0014]: teaches the 3D-ROI including "a clipping plane which functions as a separating surface or a boundary surface," where "the clipping plane is in particular a rendering start surface, but may be any of other surfaces" of the ultrasonic volume data), [[the labeled region of interest being defined as a non-clipping object such that only volume data other than the region of interest is clipped; and]] generating a rendered image from the [[clipped]] volume data [[based on at least one of color information and transparency information]] (Seko, [0076]: teaches rendering a 3D image <read on rendered image> on a display based on the 3D ROI, which includes a clipping plane as shown in FIG. 
9), wherein [[the color information and the transparency information are determined by grayscale values of the voxels during rendering.]] However, Seko does not expressly disclose dividing the original volume data into a plurality of data blocks; labeling a region of interest in the original volume data by assigning a region label to each voxel in each data block; selectively clipping the original volume data to obtain clipped volume data, the labeled region of interest being defined as a non-clipping object such that only volume data other than the region of interest is clipped; and generating a rendered image from the clipped volume data based on at least one of color information and transparency information, wherein the color information and the transparency information are determined by grayscale values of the voxels during rendering. Chino discloses dividing the original volume data into a plurality of data blocks (Chino, [0084]: teaches a slab setting screen G4 that includes "a name MN of each region (an example of identification information) in volume data <read on original volume data>, a thumbnail image GS indicating display contents for each region, and buttons B1, B2, and B3 are displayed," where "in the settings using the buttons B1 to B3, a user can arbitrarily perform selection through the UI 120," which results in different cut slabs <read on data blocks> being shown and/or hidden as shown in FIG. 
14; [0084]: further teaches "execution or non-execution of a slab process <read on dividing original volume data> for each region, display or non-display, and a rendering color can be set to be in a user's desired state through the setting using the buttons B1 to B3"; Note: it is being interpreted that the slabs are cut from the original volume data; additionally, data blocks are being interpreted broadly as it is unclear if data blocks are a form of segmented memory); labeling a region of interest in the original volume data by assigning a region label to each voxel in each data block (Chino, [0046]: teaches a region processing unit 161 creating a mask region <read on labeling ROI> for 3D volume data based on user input through UI 120, where "each mask region may be colored in a different color <read on assigning region label>, or an opacity value corresponding to a voxel value may be set," which also corresponds to respective slabs <read on data block>); selectively clipping the original volume data to obtain clipped volume data (Chino, [0084]: teaches a user performing slab selection <read on selectively clipping> through UI 120 for the slab process on a plurality of regions; [0089]: teaches the slab process being performed on a liver region ML and tumor region MT, where the portions of the regions <read on obtained clipped volume data> are displayed), the labeled region of interest being defined as a non-clipping object such that only volume data other than the region of interest is clipped (Chino, [0065]: teaches the system performing a slab process in a mask region (i.e., MR1) of a volume <read on volume data>, where the corresponding voxel data is deleted during the slab process; [0065]: further teaches a separate mask region (i.e., MR2) <read on labeled ROI> in which the slab process is not being performed on, where the corresponding voxel data is not deleted <read on non-clipping object> as shown in FIG. 
5); and generating a rendered image from the clipped volume data based on at least one of color information and transparency information (Chino, FIG. 15 teaches a rendering process on portions <read on clipped volume data> of a liver region and tumor region obtained through cutting an entirety of the liver artery region, portal vein region, and vein region, where the rendered image is displayed; [0046]: teaches the region processing unit 161 setting a mask region, where "each region may be colored in a different color <read on color information>, or an opacity value <read on transparency information> corresponding to a voxel value may be set"), wherein [[the color information and the transparency information are determined by grayscale values of the voxels during rendering.]] Chino is analogous art with respect to Seko because they are from the same field of endeavor, namely generating clipping planes for volumetric data of anatomical regions. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a user interface that utilizes both a region processing unit and slab control unit for masking and slab processes respectively as taught by Chino into the teaching of Seko. The suggestion for doing so would allow the anatomical 3D model to have multiple identified areas that allows for user-specific clipping, resulting in one or more regions to be clipped or not, thereby yielding improved results. Therefore, it would have been obvious to combine Chino with Seko. However, the combination of Seko and Chino does not expressly disclose the color information and the transparency information are determined by grayscale values of the voxels during rendering. 
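The "non-clipping object" limitation, as mapped to Chino's mask regions, amounts to exempting ROI-labelled voxels from deletion when the clip region is applied: a voxel is removed only if it lies in the clip region and carries no ROI label. A minimal sketch, with illustrative names and a dict-based volume purely for readability:

```python
def selectively_clip(volume, labels, in_clip_region, roi_labels):
    """Clip the volume while treating the labelled ROI as a non-clipping
    object: only volume data other than the region of interest is clipped.

    volume         -- dict mapping voxel coordinate -> grayscale value
    labels         -- dict mapping voxel coordinate -> region label
    in_clip_region -- predicate: coordinate -> True if the voxel would
                      ordinarily be clipped away
    roi_labels     -- set of region labels exempt from clipping
    """
    return {
        coord: value
        for coord, value in volume.items()
        if labels.get(coord) in roi_labels or not in_clip_region(coord)
    }
```

Note the order of the test: the ROI exemption is checked first, so an ROI voxel survives even when it falls inside the clip region, which is the global effect the applicant argues over Chino.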
Kovtun discloses the color information and the transparency information are determined by grayscale values of the voxels during rendering (Kovtun, [0144]: teaches process 700 presenting multiple user interfaces to independently control filtering based on different criteria, where the user interface can facilitate control (based on the underlying value of the voxel) of the range of voxel values to include and/or exclude from rendering, where it facilitates mapping of voxel values to grayscale values to determine how each voxel is rendered <read on during rendering>; Note: it should be noted that although the voxel values are selected prior to rendering, the values are used). Kovtun is analogous art with respect to Seko, in view of Chino because they are from the same field of endeavor, namely processing 3D medical imaging data. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a user interface that allows control over voxel values as taught by Kovtun into the teaching of Seko, in view of Chino. The suggestion for doing so would allow the user to include and/or exclude grayscale values during rendering, thereby offering further refinement over what type of content to be displayed and improving overall usability. Therefore, it would have been obvious to combine Kovtun with Seko, in view of Chino. Regarding Claim 17, the combination of Seko, Chino, and Kovtun discloses the computer system of Claim 16. 
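The Kovtun mapping — grayscale voxel values determining both color and transparency during rendering — is conventionally expressed as a transfer-function lookup applied per sample. The linear ramp below is an illustrative assumption, not Kovtun's actual mapping; the in/out-of-range exclusion mirrors the include/exclude filtering the rejection cites:

```python
def transfer_function(gray, lo=0, hi=255):
    """Map a voxel's grayscale value to (r, g, b, opacity) during
    rendering.  Values outside [lo, hi] are excluded (opacity 0);
    in-range values get a linear gray ramp with proportional opacity.
    NOTE: the linear ramp is an assumed example mapping, not Kovtun's.
    """
    if gray < lo or gray > hi:
        return (0.0, 0.0, 0.0, 0.0)   # filtered out of the rendering
    t = (gray - lo) / (hi - lo)       # normalise into [0, 1]
    return (t, t, t, t)               # gray color, opacity from value
```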
The combination of Seko and Kovtun does not expressly disclose the limitations of Claim 17; however, Chino discloses wherein the labeling the region of interest in the original volume data comprises assigning a region label to each voxel of volume data of the region of interest (Chino, [0046]: teaches a region processing unit 161 creating a mask region <read on labeling ROI> for 3D volume data based on user input through UI 120, where "each mask region may be colored in a different color <read on assigning region label>, or an opacity value corresponding to a voxel value may be set"). Chino is analogous art with respect to Seko, in view of Kovtun because they are from the same field of endeavor, namely generating clipping planes for volumetric data of anatomical regions. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a user interface that utilizes both a region processing unit and slab control unit for masking and slab processes respectively as taught by Chino into the teaching of Seko, in view of Kovtun. The suggestion for doing so would allow the anatomical 3D model to have multiple identified areas that allows for user-specific clipping, resulting in one or more regions to be clipped or not, thereby yielding improved results. Therefore, it would have been obvious to combine Chino with Seko, in view of Kovtun. Regarding Claim 18, the combination of Seko, Chino, and Kovtun discloses the computer system of Claim 16. 
Additionally, Seko further discloses wherein the labeling the region of interest in the original volume data comprises [[labeling the region of interest using a clipping tool, and]] the clipping tool includes a clipping box having at least one parallel clipping surface parallel to an outer surface of the original volume data (Seko, [0062]: teaches "for a three-dimensional region of interest V, a plurality of rays 64 are set in parallel to each other along the Y direction," where "a clipping plane 60 in the three-dimensional region of interest V can be deformed and inclined," in which "the clipping plane 60 corresponds to the rendering start surface <read on parallel clipping surface being parallel to outer surface>" and "the surface on the side opposite the clipping plane 60 is an end surface 62" as shown in FIG. 3). However, the combination of Seko and Kovtun does not expressly disclose labeling the region of interest using a clipping tool. Chino discloses labeling the region of interest using a clipping tool (Chino, [0045]: teaches a user setting a slab region/surface through UI 120 <read on clipping tool> to cut along a volume data slab surface). Chino is analogous art with respect to Seko, in view of Kovtun because they are from the same field of endeavor, namely generating clipping planes for volumetric data of anatomical regions. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement both a region processing unit and slab control unit for masking and slab processes respectively as taught by Chino into the teaching of Seko, in view of Kovtun. The suggestion for doing so would allow the anatomical 3D model to have multiple identified areas that allows for user-specific clipping, resulting in one or more regions to be clipped or not, thereby yielding improved results. Therefore, it would have been obvious to combine Chino with Seko, in view of Kovtun. 
Regarding Claim 19, the combination of Seko, Chino, and Kovtun discloses the computer system of Claim 18. Additionally, Seko further discloses wherein the generating the rendered image based on the clipped volume data comprises: determining a front surface and a back surface of the clipped volume data based on a configuration of the clipping tool and the region labels of the region of interest (Seko, [0062]: teaches "the clipping plane 60 corresponds to the rendering start surface <read on front surface>," where "the surface on the side opposite the clipping plane 60 is an end surface 62 <read on back surface>, which is represented as a bottom plane in FIG. 3"; [0058]: teaches "reference numeral 54 represents a tissue for which an image is to be formed <read on region label>" as shown in FIG. 2); and generating the rendered image based on the front surface and the back surface (Seko, [0062]: teaches a 3D image being constructed on screen 68, which includes a clipping plane, a start plane, and an end plane). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Petkov et al. (US 20230252629 A1) discloses a layout of labels for annotating locations of a plurality of regions of interest in a rendered image; Reynolds et al. (US 20160180525 A1) discloses an image data processing apparatus that receives medical images representative of a region of subject, where grayscale values are used; Smith-Casem et al. (US 20130328874 A1) discloses volume rendering with a clipping surface; and Xiang et al. (US 20180349724 A1) discloses clipping volume data of a target anatomy of interest. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARL TRUONG whose telephone number is (703)756-5915. The examiner can normally be reached 10:30 AM - 7:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /K.D.T./Examiner, Art Unit 2614 /KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

May 30, 2023
Application Filed
Mar 24, 2025
Non-Final Rejection — §103
Jul 03, 2025
Response Filed
Jul 14, 2025
Final Rejection — §103
Oct 24, 2025
Request for Continued Examination
Nov 03, 2025
Response after Non-Final Action
Dec 02, 2025
Non-Final Rejection — §103
Mar 09, 2026
Response Filed
Mar 23, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573149
DATA PROCESSING METHOD AND APPARATUS, DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 10, 2026
Patent 12561875
ANIMATION FRAME DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12494013
AUTODECODING LATENT 3D DIFFUSION MODELS
2y 5m to grant Granted Dec 09, 2025
Patent 12456258
SYSTEMS AND METHODS FOR GENERATING A SHADOW MESH
2y 5m to grant Granted Oct 28, 2025
Patent 12444020
FLEXIBLE IMAGE ASPECT RATIO USING MACHINE LEARNING
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
52%
Grant Probability
83%
With Interview (+31.0%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
