DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
1. Applicant’s election without traverse of Species I, namely claims 1-2, 4-5, and 8, drawn to a method of rendering an image, in the reply filed on 12/19/2025 is acknowledged. In addition, the examiner has considered applicant's argument and agrees that claims 3, 6, and 7 are consonant with Species I and will be examined. Accordingly, claims 1-8 are currently under consideration, and claims 9-22 are withdrawn.
Information Disclosure Statement
2. The information disclosure statement (IDS) submitted on 6/18/2024 has been fully considered by the examiner. A signed, initialed, and dated copy is included with the present action. The NPL references "Projected size of the global autonomous car market from 2019 to 2023 (in billion U.S. dollars)" and "Problems: Algorithms, Software and Applications in Petascale Computing" have been lined through because no corresponding NPL documents were submitted.
Drawings
3. The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 204 in Fig. 2 and 802 in Fig. 8. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 103
4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1-2 are rejected under 35 U.S.C. 103 as being unpatentable over Persson (US-2018/0252657-A1) in view of Gao et al. (CN-112085066-A, hereinafter "Gao").
6. As per claim 1, Persson discloses: A method of rendering an image, the method comprising:
performing a convolution of a kernel function located at each of a plurality of source locations and [[weighted by input weights;]] (Persson, [0137], “In the notation employed here, it is the averaged ordinary basis coefficient line integrals Āi which are sought, since these form basis sinograms which are the result of a convolution of the original sinograms with a blurring kernel. When reconstructed, these yield basis images which are linear convolutions of the true basis images with a blurring kernel, meaning that there are no artifacts stemming from the nonlinearity of the partial volume effect in such an image.” and [0107], “The result of this decomposition is a set of estimated line integrals Â1 and ÂNLPV for the projection lines from the source to every detector element.”)
storing, as a result of the convolution, for each voxel in a three dimensional (3D) space, coefficients of a series expansion; (Persson, [0137], “In the notation employed here, it is the averaged ordinary basis coefficient line integrals Āi which are sought, since these form basis sinograms which are the result of a convolution of the original sinograms with a blurring kernel. When reconstructed, these yield basis images which are linear convolutions of the true basis images with a blurring kernel, meaning that there are no artifacts stemming from the nonlinearity of the partial volume effect in such an image.” and [0121], “In another embodiment of the invention, CT image reconstruction and basis material decomposition may be carried out simultaneously by letting the ordinary and NLPV basis coefficients ai and aNLPV in every voxel in the image volume be the unknown parameters and estimating these from the measurements in all energy bins for all projection lines by a statistical estimation method …”)
calculating line integrals along a ray in the 3D space using the coefficients of the series expansion in voxels along at least a portion of the ray; and (Persson, [0147]-[0148], “Step S33: The third step is to estimate the basis images a1, . . . , aN iteratively from the decomposed basis coefficient line integrals. To this end, the image volume is discretized and each the beam from the source to each detector element is approximated by a number of subrays from different points on the source to different points on the detector element. ... The input to the mapping constructed in the first step above does not have to include first moments but can in general be a representation of the spatially variant basis coefficient line integrals A1(x,y), . . . , AN(x,y), by which is meant a collection of data which allows inferring information about the spatial distribution of the basis coefficient line integrals. This representation could be a set of basis coefficient line integrals along selected subrays, or a set of coefficients in a series expansion of the spatial distribution of a basis coefficient line integral as linear combination of some sort of spatial basis functions, such as sinusoidal functions or orthogonal polynomials.” and [0121], “In another embodiment of the invention, CT image reconstruction and basis material decomposition may be carried out simultaneously by letting the ordinary and NLPV basis coefficients ai and aNLPV in every voxel in the image volume be the unknown parameters and estimating these from the measurements in all energy bins for all projection lines by a statistical estimation method …”)
rendering the image based, at least in part, on the line integrals. (Persson, [0012], “When the resulting estimated basis coefficient line integral  for each projection line is arranged into an image matrix, the result is a material specific projection image, also called a basis image, for each basis i. This basis image can either be viewed directly (in projection x-ray imaging) or taken as input to a reconstruction algorithm to form maps of ai inside the object (in CT).” and [0150], “It is desirable to do this iterative reconstruction on a voxel grid which is finer than the ones typically used for the same source and detector element size, since this method uses the information in the NLPV basis coefficient line integrals to improve spatial resolution. One way of constructing a finer grid is by simply subdividing each voxel into subvoxels using a cartesian grid. Since a very fine voxel grid may cause image reconstruction to become prohibitively time-consuming, it may be desirable to restrict this subsampling to an ROI consisting of the immediate surroundings of the interface that is to be located, which can be identified by first reconstructing a normal CT image from the data.”)
7. Persson does not explicitly disclose, but Gao discloses: [[performing a convolution of a kernel function located at each of a plurality of source locations and]] weighted by input weights; (Gao, page 3, [0006]-[0011], “The voxelized 3D point cloud scene classification method based on graph convolutional neural network of the present invention includes the following steps: (1)Transform the three-dimensional space coordinates of the point cloud obtained by the vision sensor with the T-net network in PointNet, and then voxelize the point cloud transformed by the T-net network; (2)Weight the information of the neighboring points of each point in each voxel to the point, and obtain the feature vector of each point; ... In step (2), the spatial position information of multiple points adjacent to each point in each voxel is weighted multiple times to the point by fusing PointAtrousNet and local spectral convolution kernel.”)
8. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of Persson to include the disclosure of performing a convolution of a kernel function with source locations weighted by input weights, of Gao. The motivation for this modification could have been to use the weights to prioritize locations occupied by an object, focusing the convolution on those areas and potentially saving processing resources by limiting computation to space that contains an object.
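For purposes of illustration only, and forming no part of the grounds of rejection, the sequence of steps recited in claim 1 may be sketched in Python/NumPy as follows. All function names are hypothetical, and the Gaussian kernel, the value-plus-gradient "series expansion," the cubic grid, and the Riemann-sum line integral are assumptions of this sketch, not teachings of Persson or Gao:

import numpy as np

def splat_weighted_kernels(grid_shape, sources, weights, sigma=1.5):
    """Convolve a kernel located at each source location, weighted by input
    weights, accumulating per-voxel series-expansion coefficients
    (here: value plus analytic Gaussian partial derivatives)."""
    coeffs = np.zeros(grid_shape + (4,))          # [f, df/dx, df/dy, df/dz] per voxel
    zz, yy, xx = np.indices(grid_shape)
    for (sx, sy, sz), w in zip(sources, weights):
        d2 = (xx - sx) ** 2 + (yy - sy) ** 2 + (zz - sz) ** 2
        g = w * np.exp(-d2 / (2.0 * sigma ** 2))  # weighted kernel value
        coeffs[..., 0] += g
        coeffs[..., 1] += g * (sx - xx) / sigma ** 2
        coeffs[..., 2] += g * (sy - yy) / sigma ** 2
        coeffs[..., 3] += g * (sz - zz) / sigma ** 2
    return coeffs

def line_integral(coeffs, origin, direction, t_max, n_samples=128):
    """Accumulate the zeroth-order coefficient along a ray (Riemann sum,
    nearest-voxel lookup; grid is assumed cubic for simplicity)."""
    t = np.linspace(0.0, t_max, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]   # columns: x, y, z
    idx = np.clip(np.round(pts).astype(int), 0, np.array(coeffs.shape[:3]) - 1)
    samples = coeffs[idx[:, 2], idx[:, 1], idx[:, 0], 0]
    return float(np.sum(samples) * (t[1] - t[0]))

coeffs = splat_weighted_kernels((32, 32, 32),
                                sources=[(16, 16, 16), (8, 20, 12)],
                                weights=[1.0, 0.5])
ray = line_integral(coeffs, np.array([0.0, 16.0, 16.0]),
                    np.array([1.0, 0.0, 0.0]), t_max=31.0)
print(f"line integral along ray: {ray:.3f}")  # one pixel of a rendered image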
9. As per claim 2, Persson in view of Gao discloses: The method of claim 1, wherein the performing the convolution is based on one or more images of an object in the image. (Persson, [0137], “In the notation employed here, it is the averaged ordinary basis coefficient line integrals Āi which are sought, since these form basis sinograms which are the result of a convolution of the original sinograms with a blurring kernel. When reconstructed, these yield basis images which are linear convolutions of the true basis images with a blurring kernel, meaning that there are no artifacts stemming from the nonlinearity of the partial volume effect in such an image.” and [0059], “As illustrated in the example of FIG. 4, an x-ray imaging system comprises an x-ray source, which emits x-rays; a detector, which detects the x-rays after they have passed through the object; analog processing circuitry, which processes the raw electrical signal from the detector and digitizes it …” and [0057], “In the following, it will be described how this example framework can be used in practice to identify interfaces in the imaged volume.”)
10. Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Persson (US-2018/0252657-A1) in view of Gao et al. (CN-112085066-A, hereinafter "Gao"), and further in view of Slabaugh et al. (US-2010/0220913-A1, hereinafter "Slabaugh").
11. As per claim 3, Persson in view of Gao discloses: The method of claim 1, further comprising: (See rejection for claim 1.)
12. Persson in view of Gao does not explicitly disclose, but Slabaugh discloses: providing an array configured to allow a process to query function values and partial derivatives of the functions of the kernel function. (Slabaugh, [0060]-[0061], “A discrete Gaussian kernel and its derivatives can be calculated by directly sampling them from their continuous counterparts. Once the partial derivatives of the image data are computed by convolution with the approximate Gaussian kernel and its derivatives, the invariant/semi-invariant measures can be easily calculated as described below. Alternatively, the derivatives of the image can be calculated by a known numerical differentiation method. For example, the central difference method can be used to calculate derivatives of the image based upon the intensity values at voxels neighbouring the voxel of interest.”)
13. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Persson in view of Gao to include the disclosure of providing an array configured to allow a process to query function values and partial derivatives of the functions of the kernel function, of Slabaugh. The motivation for this modification could have been to use the array as a precomputed lookup structure from which function values and partial derivatives can be retrieved quickly when determining details about the object.
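Purely as a non-limiting illustration of the numerical-differentiation alternative quoted from Slabaugh at [0061], the following Python/NumPy sketch shows an array that can be queried for function values and central-difference partial derivatives. The class and function names are hypothetical, and the Gaussian example is an assumption of this sketch:

import numpy as np

class KernelSampleArray:
    """Dense samples of a kernel function plus central-difference partials."""
    def __init__(self, values, spacing=1.0):
        self.values = np.asarray(values, dtype=float)
        self.spacing = spacing

    def value(self, i, j, k):
        return self.values[i, j, k]

    def partials(self, i, j, k):
        """Central differences along each axis at interior voxel (i, j, k)."""
        v, h = self.values, 2.0 * self.spacing
        return np.array([(v[i + 1, j, k] - v[i - 1, j, k]) / h,
                         (v[i, j + 1, k] - v[i, j - 1, k]) / h,
                         (v[i, j, k + 1] - v[i, j, k - 1]) / h])

# Example: sample a Gaussian kernel on a grid and query value and partials.
ax = np.arange(-8, 9)
xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
gauss = np.exp(-(xx**2 + yy**2 + zz**2) / (2 * 2.0**2))
arr = KernelSampleArray(gauss)
print(arr.value(8, 8, 8), arr.partials(9, 8, 8))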
14. As per claim 4, Persson in view of Gao, and further in view of Slabaugh discloses: The method of claim 3, wherein the array includes one or more spatial dimensions configured to allow a process to query data of the image at continuous locations. (Slabaugh, [0060]-[0061], “By way of explanation, the image intensity function I is a discrete, noisy data set obtained by sampling the underlying continuous data. ... A discrete Gaussian kernel and its derivatives can be calculated by directly sampling them from their continuous counterparts. Once the partial derivatives of the image data are computed by convolution with the approximate Gaussian kernel and its derivatives, the invariant/semi-invariant measures can be easily calculated as described below. Alternatively, the derivatives of the image can be calculated by a known numerical differentiation method. For example, the central difference method can be used to calculate derivatives of the image based upon the intensity values at voxels neighbouring the voxel of interest.” and [0013], “Preferably the term “derivatives of an image” is to be understood to refer to the spatial derivatives (that is, the derivatives with respect to a particular direction) of the intensity values within the image. Preferably, the invariant features characterise variations in local curvature at the point in question.”)
15. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 3 of Persson in view of Gao, and further in view of Slabaugh to include the further disclosure of the array including one or more spatial dimensions configured to allow a process to query data of the image at continuous locations, of Slabaugh. The motivation for this modification could have been to achieve a higher-quality representation of the object by querying at continuous locations rather than only at discrete grid points.
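As a further non-limiting illustration (forming no part of the rejection), trilinear interpolation is one conventional way to let a process query discretely stored volume data at continuous locations. The sketch below assumes a NumPy volume; the function name is hypothetical:

import numpy as np

def sample_continuous(volume, x, y, z):
    """Trilinear interpolation: query a discretely stored volume at a
    continuous (x, y, z) location inside the grid."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0
    acc = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                acc += w * volume[x0 + dx, y0 + dy, z0 + dz]
    return acc

vol = np.random.default_rng(0).random((16, 16, 16))
print(sample_continuous(vol, 7.25, 3.5, 9.75))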
16. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Persson (US-2018/0252657-A1) in view of Gao et al. (CN-112085066-A, hereinafter "Gao"), further in view of Slabaugh et al. (US-2010/0220913-A1, hereinafter "Slabaugh"), and further in view of Chen (US-2015/0071554-A1).
17. As per claim 5, Persson in view of Gao, and further in view of Slabaugh discloses: The method of claim 4, further comprising [[forming a meshgrid of individual inputs]] to the one or more spatial dimensions of a volume of the data. (Persson, [0148], “The input to the mapping constructed in the first step above does not have to include first moments but can in general be a representation of the spatially variant basis coefficient line integrals A1(x,y), . . . , AN(x,y), by which is meant a collection of data which allows inferring information about the spatial distribution of the basis coefficient line integrals. This representation could be a set of basis coefficient line integrals along selected subrays, or a set of coefficients in a series expansion of the spatial distribution of a basis coefficient line integral as linear combination of some sort of spatial basis functions, such as sinusoidal functions or orthogonal polynomials.”)
18. Persson in view of Gao, and further in view of Slabaugh does not explicitly disclose, but Chen discloses: [[The method of claim 4, further comprising]] forming a meshgrid of individual inputs [[to the one or more spatial dimensions of a volume of the data.]] (Chen, [0013], “The input receiving module 10 can receive parameters and a conversion equation for converting the parameters to vectors input by the user via the input device 103. In least one embodiment, the parameters are space coordinates of a number of pixels of an image to be processed, for example, the parameters are X, Y, Z coordinates of a number of pixels of an image. In the embodiment, the conversion equation can be a function named as “meshgrid” of the MATLAB software.”)
19. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 4 of Persson in view of Gao, and further in view of Slabaugh to include the disclosure of forming a meshgrid of individual inputs, of Chen. The motivation for this modification could have been to combine the individual coordinate inputs into a single structure that, when processed, accounts for all spatial dimensions of the volume data at once.
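The examiner notes, purely for illustration and not as a teaching of Chen, that the MATLAB "meshgrid" function quoted from Chen at [0013] has a direct NumPy analog. A meshgrid of individual coordinate inputs over the spatial dimensions of a volume can be formed as follows:

import numpy as np

# Form a meshgrid of individual inputs to the spatial dimensions of a
# 4x3x2 volume (cf. MATLAB's "meshgrid" cited from Chen [0013]).
xs, ys, zs = np.arange(4), np.arange(3), np.arange(2)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")   # each has shape (4, 3, 2)
coords = np.stack([X, Y, Z], axis=-1)              # per-voxel (x, y, z) inputs
print(coords.shape)        # (4, 3, 2, 3)
print(coords[1, 2, 0])     # [1 2 0]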
20. Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Persson (US-2018/0252657-A1) in view of Gao et al. (CN-112085066-A, hereinafter "Gao"), further in view of Hansson Soederlund et al. (US-2023/0060308-A1, hereinafter "Hansson Soederlund"), and further in view of Liu et al. (CN-111476888-A, hereinafter "Liu").
21. As per claim 6, Persson in view of Gao discloses: The method of claim 1, further comprising: (See rejection for claim 1.)
22. Persson in view of Gao does not explicitly disclose, but Hansson Soederlund discloses: finding a root along the ray intersecting the voxels; and (Hansson Soederlund, [0078], “As described, after determining the intersection between the ray and the voxel, the PPU 202 can compute the distance, t, of an intersection of the ray with the surface of an object that is defined within the voxel using equation (7) and the constants in equations (3) and (8)-(9). Assuming the object is solid (i.e., not semi-transparent and not a volumetric object such as a cloud), the only solution that is required is the first real root of the cubic function of equation (7) inside the voxel, i.e., the first real root with t ∈ [0,tfar]. In some embodiments, the cubic function can be solved for the first real root in any technically feasible manner.”)
23. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Persson in view of Gao to include the disclosure of finding a root along the ray intersecting the voxels, of Hansson Soederlund. The motivation for this modification could have been to use the root to help find the surface and surface normals of an object.
24. Persson in view of Gao, and further in view of Hansson Soederlund does not explicitly disclose, but Liu discloses: converting 3D Taylor expansions represented by the voxels into univariate polynomials. (Liu, page 12, [0116], “Since F(x,y,z) can be any polynomial function, it is difficult to accurately construct F(x,y,z) only by relying on the original image sequence, and only an approximation of the original three-dimensional space body fi,j,k(x,y,z). According to numerical approximation theory, any continuous function ∂(x, y, z) can be expanded into a polynomial Taylor series at a point (xa, ya, za) in a continuous three-dimensional space, and the Taylor series is at the point (xa, ya, za) Approximation of ∂(x, y, z) in the spatial neighborhood. Assume that F(x,y,z) can be approximated by the first three terms of Taylor series in the spatial neighborhood of each voxel point, that is, the spatial volume has the accuracy of quadratic polynomial approximation.”)
25. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Persson in view of Gao, and further in view of Hansson Soederlund to include the disclosure of converting 3D Taylor expansions represented by the voxels into univariate polynomials, of Liu. The motivation for this modification could have been to reduce the 3D Taylor expansions along a ray to univariate polynomials whose roots can be computed directly, simplifying the surface-intersection computation.
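For illustration only, and forming no part of the grounds of rejection, the combination described in paragraphs 22-25 may be sketched as follows: substituting a ray p(t) = o + t·d into a per-voxel quadratic Taylor model (the quadratic order follows the Liu passage; all names are hypothetical) yields a univariate polynomial in t, whose first real root in [0, t_far] locates the surface as in the Hansson Soederlund passage:

import numpy as np

def ray_polynomial(c0, g, H, o, d):
    """Substitute the ray p(t) = o + t*d into the per-voxel quadratic Taylor
    model f(p) = c0 + g.p + 0.5 p^T H p, giving univariate coefficients
    [a2, a1, a0] of a2*t^2 + a1*t + a0 (H assumed symmetric)."""
    a0 = c0 + g @ o + 0.5 * o @ H @ o
    a1 = g @ d + o @ H @ d
    a2 = 0.5 * d @ H @ d
    return np.array([a2, a1, a0])

def first_root(poly, t_far):
    """First real root in [0, t_far], cf. the Hansson Soederlund passage."""
    roots = np.roots(poly)
    real = sorted(r.real for r in roots
                  if abs(r.imag) < 1e-9 and 0.0 <= r.real <= t_far)
    return real[0] if real else None

# Example: a sphere-like level set f(p) = p.p - 1 hit by a ray from x = -2.
c0, g, H = -1.0, np.zeros(3), 2.0 * np.eye(3)
o, d = np.array([-2.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
t = first_root(ray_polynomial(c0, g, H, o, d), t_far=10.0)
print(t)  # 1.0: the ray enters the unit sphere at x = -1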
26. As per claim 7, Persson in view of Gao, further in view of Hansson Soederlund, and further in view of Liu discloses: The method of claim 6, further comprising:
providing a surface gradient that is a scalar-multiple of a surface normal at each root, wherein the roots are configured to define a surface of the object. (Hansson Soederlund, [0083]-[0084], “By interpolating normals to the surfaces of neighboring voxels that are computed analytically, the PPU 202 can generate (interpolated) normals that are continuous across voxels. Such normals can then be used to render images with lighting that change relatively smoothly on the surfaces of objects. To compute a surface normal analytically, a normal vector n can be computed as the gradient of an implicit function ƒ defining the surface of an object within a voxel, i.e. n = (∂f/∂x, ∂f/∂y, ∂f/∂z).”)
27. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 6 of Persson in view of Gao, further in view of Hansson Soederlund, and further in view of Liu to include the further disclosure of providing a surface gradient that is a scalar-multiple of a surface normal at each root, wherein the roots are configured to define a surface of the object, of Hansson Soederlund. The motivation for this modification could have been to use the surface gradient as a representation of the object’s surface; once the surface of the object is defined, it allows for proper visual shading of the object.
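Continuing the same illustrative quadratic model from the sketch above (hypothetical names; forming no part of the rejection), the surface gradient at a root is the gradient of the implicit function, which is a scalar multiple of the unit surface normal:

import numpy as np

def surface_gradient(g, H, p):
    """Gradient of f(p) = c0 + g.p + 0.5 p^T H p; a scalar multiple of the
    surface normal at a root p on the level set f = 0."""
    return g + H @ p

# At the root found above (p = [-1, 0, 0] on the unit sphere):
g, H = np.zeros(3), 2.0 * np.eye(3)
p = np.array([-1.0, 0.0, 0.0])
grad = surface_gradient(g, H, p)
normal = grad / np.linalg.norm(grad)
print(grad, normal)   # [-2, 0, 0] -> unit normal [-1, 0, 0]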
28. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Persson (US-2018/0252657-A1) in view of Gao et al. (CN-112085066-A, hereinafter "Gao"), further in view of Hansson Soederlund et al. (US-2023/0060308-A1, hereinafter "Hansson Soederlund"), further in view of Liu et al. (CN-111476888-A, hereinafter "Liu"), and further in view of Fu et al. (US-2017/0064305-A1, hereinafter "Fu").
29. As per claim 8, Persson in view of Gao, further in view of Hansson Soederlund, and further in view of Liu discloses: The method of claim 7, further comprising: (See rejection for claim 7.)
30. Persson in view of Gao, further in view of Hansson Soederlund, and further in view of Liu does not explicitly disclose, but Fu discloses: extracting a coherent 3D representation from images collected with a depth camera, said extracting comprising: (Fu, [0015], “This disclosure describes, in part, techniques for improving filtering and compression of depth images. As illustrated in FIG. 1 at 102, a depth camera may capture depth images and corresponding texture images of a location, the depth images representing a three-dimensional description of the location, including depth values of objects and a background of the location.” and [0051], “As a typical representation of range data, measured depth can be utilized to reconstruct 3D object surfaces. Since the 3D surfaces are generated upon tremendous data accumulation, the random error of data can be corrected during the reconstruction. Among various surface reconstruction techniques, the volumetric integration is widely applied for surface reconstruction upon range data.”)
collecting, using the depth camera, a distance to at least one object; and (Fu, [0051], “Since the depth is the distance between the object and the baseline instead of depth camera 208, the upper assumption can be satisfied.” and [0074], “A location is determined to be associated with a stable depth region when the depth value associated with the location is within a threshold distance of the depth reference model, the threshold distance determined based on the depth error model. At 610b, generating the reduced representations comprises performing a Boolean subtraction of one or more stable depth regions from the sequence of depth images.”)
normalizing the distance to fit into a domain of the expansion. (Fu, [0018], “In various embodiments, the computing device may then normalize the depth values of the pixels of the depth image utilizing bilateral filtering.” and [0051], “Since the depth is the distance between the object and the baseline instead of depth camera 208, the upper assumption can be satisfied.”)
31. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 7 of Persson in view of Gao, further in view of Hansson Soederlund, and further in view of Liu to include the disclosure of extracting a coherent 3D representation from images collected with a depth camera, collecting a distance to at least one object, and normalizing the distance to fit into a domain of the expansion, of Fu. The motivation for this modification could have been to use the depth camera to reconstruct the 3D representation of an object. In addition, knowing the distance between the depth camera and the object can assist with properly adjusting the volumetric space to fully represent the object.
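Purely as a non-limiting illustration of the normalization step attributed to Fu, measured distances can be mapped into a fixed domain such as [-1, 1], a common domain for polynomial series expansions (e.g., Chebyshev). The function name and distance bounds below are assumptions of this sketch:

import numpy as np

def normalize_depth(depth_m, d_min, d_max):
    """Map measured distances (meters) into [-1, 1] so they fit the domain
    of a series expansion; values outside [d_min, d_max] are clipped."""
    depth = np.clip(depth_m, d_min, d_max)
    return 2.0 * (depth - d_min) / (d_max - d_min) - 1.0

# Example: distances from a depth camera, normalized to the expansion domain.
distances = np.array([0.6, 1.2, 2.5, 4.0])
print(normalize_depth(distances, d_min=0.5, d_max=4.0))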
Conclusion
32. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW CLOTHIER whose telephone number is (571)272-4667. The examiner can normally be reached Mon-Fri 8:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW CLOTHIER/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614