DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-5, 8, 11, 13, 16-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US Patent Publication 20200209009, “Zhang”) in view of Leist et al. (US Patent Publication 20210378748, “Leist”).
Regarding claim 13, Zhang teaches a system (Zhang, [0101]: “According to another embodiment, a computer system may include a memory, and a hardware processor in communication with the memory and configured with processor-executable instructions to perform specific operations.”) comprising:
one or more hardware processors (Zhang, [0101]: “According to another embodiment, a computer system may include a memory, and a hardware processor in communication with the memory and configured with processor-executable instructions to perform specific operations.”) configured to:
receive a three-dimensional ("3D") image comprising a plurality of 3D image data that is distributed across a 3D space (Zhang, [0101]: “…The operations may include obtaining a first set of point cloud data and a second set of point cloud data, wherein the first and second sets of point cloud data are each based at least in part on a plurality of light detection and ranging (LiDAR) scans of a geographic area…”);
detect an edit that is applied to at least one 3D image data instance from the plurality of 3D image data (Zhang, [0101]: “…wherein the user interface includes display of the 3D rendering, and displaying, within the user interface, a plurality of suggested commands for altering positioning of the first set of point cloud data in 3D virtual space in order to better match at least the first subset of points with the second subset of points.”);
determine a set of 3D image data from the plurality of 3D image data that is a threshold distance from the at least one 3D image data instance (Zhang, [0085]: “…For example, the system may identify two or more point clouds that have edges that are misaligned by less than a threshold distance (e.g., 0.1, 0.5, 1, etc. in the x axis, y axis, and/or z axis) and/or threshold angle (e.g., 1°, 5°, 10°, etc. in the x axis, y axis, and/or z axis).” Here, one of the point clouds corresponds to the claimed set of 3D image data, and it lies within a threshold distance of the other point cloud, which corresponds to the at least one 3D image data instance.);
Zhang does not expressly teach: adjust one or more visual characteristics of the at least one 3D image data instance according to the one or more visual characteristics of each 3D image data instance in the set of 3D image data and a distance between each 3D image data instance in the set of 3D image data and the at least one 3D image data instance.
However, Leist teaches adjusting one or more visual characteristics of at least one first image data instance according to the one or more visual characteristics of each image data instance in second image data and a distance between each image data instance in the second image data and the at least one first image data instance
(Leist, [0060]: “In certain examples, display facility 304 may be configured to determine a display parameter such as a color for a pixel of an image based on a defined color blending function. For example, using the color blending function and based on a determined distance between a point on surface anatomy 404 and a point on embedded anatomy 406, display facility 304 may blend a color associated with the point on embedded anatomy 406 with a color associated with the point on surface anatomy 404 to determine the color for the pixel of the image. The color blending function may be defined to give the color associated with the point on embedded anatomy 406 more weight when the determined distance is relatively shorter and less weight when the determined distance is relatively longer. Thus, for a relatively shorter distance, the color associated with the point on embedded anatomy 406 may be emphasized more in the determined color for the pixel of the image than for a relatively longer distance. To this end, the color blending function may specify how the weight given to the color associated with the point on embedded anatomy 406 changes for different values of the determined distance.” The claimed visual characteristic is the color.).
Leist and Zhang are analogous art, as both are from the field of image processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang to include adjusting one or more visual characteristics of the at least one 3D image data instance according to the one or more visual characteristics of each 3D image data instance in the set of 3D image data and a distance between each 3D image data instance in the set of 3D image data and the at least one 3D image data instance, in the manner that Leist teaches adjusting the visual characteristics of a first image data instance according to the visual characteristics of, and the distances to, the image data instances in second image data.
The motivation to include the modification is to bring the visual properties of one set of points to a nearby set of points so as to make them appear similar.
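For illustration only, and not as code from either reference, the distance-weighted color blending that Leist describes at [0060] can be sketched as follows (the function name, the inverse-distance weighting, and the RGB color representation are assumptions made for the example):

    def blend_color(edited_color, neighbors):
        """Blend an edited point's color with nearby points' colors,
        weighting each neighbor by the inverse of its distance so that
        closer points contribute more, per the weighting in Leist [0060]."""
        weights = [1.0]           # the edited point's own color keeps weight 1.0
        colors = [edited_color]
        for color, distance in neighbors:
            # Shorter distance -> larger weight (inverse-distance assumption).
            weights.append(1.0 / max(distance, 1e-6))
            colors.append(color)
        total = sum(weights)
        # Weighted average of each RGB channel.
        return tuple(sum(w * c[i] for w, c in zip(weights, colors)) / total
                     for i in range(3))

    # Example: a red edited point is pulled toward a near blue point more
    # strongly than toward a far green point.
    print(blend_color((255, 0, 0), [((0, 0, 255), 0.5), ((0, 255, 0), 2.0)]))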
Claim 1 is directed to a method, and its steps are similar in scope and function to the elements of device claim 13; therefore, claim 1 is rejected with the same rationale as specified in the rejection of claim 13.
Claim 20 is directed to a non-transitory computer-readable medium (Zhang, [0104]: “According to another embodiment, a non-transitory computer readable medium stores computer executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform specific operations.”), and its elements are similar in scope and function to the elements of device claim 13; therefore, claim 20 is rejected with the same rationale as specified in the rejection of claim 13.
Regarding claims 4 and 16, Zhang as modified by Leist teaches decreasing an impact that the one or more visual characteristics of each particular 3D image data instance in the set of 3D image data have on the one or more visual characteristics of the at least one 3D image data instance as the distance between the particular 3D image data instance in the set of 3D image data and the at least one 3D image data instance increases (Leist, [0060]: “The color blending function may be defined to give the color associated with the point on embedded anatomy 406 more weight when the determined distance is relatively shorter and less weight when the determined distance is relatively longer.”).
Regarding claims 5 and 17, Zhang as modified by Leist teaches determining that a first 3D image data instance in the set of 3D image data is a first distance away from the at least one 3D image data instance (Zhang, [0082]: “…This distance may be measured by the computing system (either the map editor device 202 or the server 130) using (x, y, z) coordinates of each point in 3D virtual space.”);
determining that a second 3D image data instance in the set of 3D image data is a second distance away from the at least one 3D image data instance (Zhang, [0082], as quoted above); and
computing a first amount by which the one or more visual characteristics of the first 3D image data instance adjust the one or more visual characteristics of the at least one 3D image data instance, and a lesser second amount by which the one or more visual characteristics of the second 3D image data instance adjust the one or more visual characteristics of the at least one 3D image data instance based on the first distance being less than the second distance (Leist, [0060]: “In certain examples, display facility 304 may be configured to determine a display parameter such as a color for a pixel of an image based on a defined color blending function. For example, using the color blending function and based on a determined distance between a point on surface anatomy 404 and a point on embedded anatomy 406, display facility 304 may blend a color associated with the point on embedded anatomy 406 with a color associated with the point on surface anatomy 404 to determine the color for the pixel of the image. The color blending function may be defined to give the color associated with the point on embedded anatomy 406 more weight when the determined distance is relatively shorter and less weight when the determined distance is relatively longer. Thus, for a relatively shorter distance, the color associated with the point on embedded anatomy 406 may be emphasized more in the determined color for the pixel of the image than for a relatively longer distance. To this end, the color blending function may specify how the weight given to the color associated with the point on embedded anatomy 406 changes for different values of the determined distance.”).
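As a purely illustrative numeric example, not drawn from either reference: under an inverse-distance weighting w = 1/d of the kind Leist's blending function describes, a first 3D image data instance at distance d1 = 1 would receive weight 1.0 while a second instance at distance d2 = 2 would receive weight 0.5, so the first instance adjusts the visual characteristics of the edited instance by a greater amount and the second by a lesser amount, tracking the claimed first and lesser second amounts.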
Regarding claim 8, Zhang as modified by Leist teaches computing the distance between a particular 3D image data instance in the set of 3D image data and the at least one 3D image data instance based on a difference in x, y, and z coordinate positions defined for the particular 3D image data instance and x, y, and z coordinate positions defined for the at least one 3D image data instance (Zhang, [0082]: “…This distance may be measured by the computing system (either the map editor device 202 or the server 130) using (x, y, z) coordinates of each point in 3D virtual space.”).
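For reference, measuring a distance from (x, y, z) coordinates as Zhang describes at [0082] is conventionally the Euclidean distance between the two coordinate positions:

    d = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2)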
Regarding claim 11, Zhang as modified by Leist teaches wherein the plurality of 3D image data comprises a plurality of points of a point cloud that are non-uniformly distributed across the 3D space (Zhang, Fig. 3: the left window shows a plurality of points of a point cloud non-uniformly distributed across the 3D space).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang as modified by Leist, and further in view of Yokoyama (US Patent Publication 20220334644, “Yokoyama”).
Regarding claim 12, Zhang as modified by Leist fails to expressly teach wherein the plurality of 3D image data comprises a connected set of meshes or polygons that form a 3D object.
However, Yokoyama teaches wherein a plurality of 3D image data comprises a connected set of meshes or polygons that form a 3D object (Yokoyama, [0162]: “Examples of an image data format of a 3D object include a first format in which two images for the left and right eyes and a depth image indicating a depth direction are transmitted, a second format in which a three-dimensional position of the object is expressed by a set of points (point cloud) and color information of the object is held corresponding to each point, and a third format in which a three-dimensional position of the object is expressed by connection between vertices called polygon mesh and color information of the object is held corresponding to each polygon mesh as a texture image of a UV coordinate system.”).
Yokoyama and Zhang as modified by Leist are analogous art, as they are from the field of image processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Zhang as modified by Leist so that the plurality of 3D image data comprises a connected set of meshes or polygons that form a 3D object, as taught by Yokoyama.
The motivation to include the modification is to generate a 3D model of an object from a point cloud.
Allowable Subject Matter
Claims 2-3, 6-7, 9-10, 14-15 and 18-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claims 2 and 14 are objected to because the combination of the best available prior art fails to expressly teach determining a distance between the at least one 3D image data instance and a position in the 3D space from which the plurality of 3D image data is rendered; and
wherein adjusting the one or more visual characteristics comprises modifying an amount with which the one or more visual characteristics of the at least one 3D image data instance are adjusted based on the distance between the at least one 3D image data instance and the position in the 3D space.
Claims 3 and 15 are objected to because the combination of the best available prior art fails to expressly teach determining a distance between the at least one 3D image data instance and a position in the 3D space from which the plurality of 3D image data is rendered; and decreasing an adjustment that is made to the one or more visual characteristics of the at least one 3D image data instance as the distance between the at least one 3D image data instance and the position increases.
Claims 6 and 18 are objected to because the combination of the best available prior art fails to expressly teach wherein adjusting the one or more visual characteristics of the at least one 3D image data instance comprises:
blending, by a first amount, the one or more visual characteristics of a first subset of the set of 3D image data with the one or more visual characteristics of the at least one 3D image data instance; and
blending, by a second amount, the one or more visual characteristics of a second subset of the set of 3D image data with the one or more visual characteristics of the at least one 3D image data instance, wherein the first subset of 3D image data is closer to the at least one 3D image data instance than the second subset of 3D image data.
Claim 9 is objected to because the combination of the best available prior art fails to expressly teach computing an adjustment to apply to the one or more visual characteristics of the at least one 3D image data instance based on a degree with which a surface normal of each 3D image data instance from the set of 3D image data aligns with a surface normal of the at least one 3D image data instance.
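For illustration only (this limitation is recited from the claim; no cited reference is quoted for it, and the function name is hypothetical), the degree of alignment between two surface normals is conventionally measured by the cosine of the angle between them, i.e., the dot product of the unit normals:

    import math

    def normal_alignment(n1, n2):
        """Return the cosine of the angle between surface normals n1 and n2:
        1.0 for parallel (fully aligned), 0.0 for perpendicular, and -1.0
        for opposite. n1 and n2 are (x, y, z) vectors."""
        dot = sum(a * b for a, b in zip(n1, n2))
        mag1 = math.sqrt(sum(a * a for a in n1))
        mag2 = math.sqrt(sum(b * b for b in n2))
        return dot / (mag1 * mag2)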
Claim 10 is objected to because the combination of the best available prior art fails to expressly teach wherein the distance between each 3D image data instance in the set of 3D image data and the at least one 3D image data instance changes an amount by which the one or more visual characteristics of that 3D image data instance affect the one or more visual characteristics of the at least one 3D image data instance.
Claims 7 and 19 are objected to because the combination of the best available prior art fails to expressly teach wherein adjusting the one or more visual characteristics of the at least one 3D image data instance comprises:
smudging the one or more visual characteristics of the at least one 3D image data instance non-uniformly based on a lessening contribution from the one or more visual characteristics of 3D data instances in the set of 3D image data that are separated by a greater distance from the at least one 3D image data instance.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAPTARSHI MAZUMDER whose telephone number is (571)270-3454. The examiner can normally be reached 8 am-4 pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at (571)272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAPTARSHI MAZUMDER/ Primary Examiner, Art Unit 2612