Prosecution Insights
Last updated: April 19, 2026
Application No. 18/658,623

METHOD FOR GENERATING A 3D MODEL HAVING INNER STRUCTURES

Non-Final OA: §103, §112
Filed: May 08, 2024
Examiner: DANG, PHILIP
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: Hyperforge Holdings Pte. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (above average; 363 granted / 470 resolved; +19.2% vs TC avg)
Interview Lift: +33.2% (strong; resolved cases with vs. without interview)
Typical Timeline: 2y 10m avg prosecution; 49 applications currently pending
Career History: 519 total applications across all art units

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 25.5% (-14.5% vs TC avg)

Note: Tech Center averages are estimates; based on career data from 470 resolved cases.
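The examiner figures quoted above are simple ratios over the career counts and can be sanity-checked directly. A minimal sketch (the 58.0% Tech Center baseline is an assumption back-derived from the "+19.2% vs TC avg" delta, not a figure from the report):

```python
# Sanity-check the report's headline examiner statistics from its raw counts.
granted = 363          # applications allowed among resolved cases
resolved = 470         # total resolved cases
tc_avg = 0.580         # ASSUMED baseline implied by the "+19.2% vs TC avg" delta

allow_rate = granted / resolved        # career allow rate
delta_vs_tc = allow_rate - tc_avg      # spread over the Tech Center average

print(f"Career allow rate: {allow_rate:.1%}")  # 77.2%, reported as 77%
print(f"vs TC average: {delta_vs_tc:+.1%}")    # +19.2%
```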

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 5/17/2024 is being considered by the examiner.

Objections

Claim 1 is objected to. The claim limitation "the portion" should read "the predetermined portion". An appropriate correction is required.

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, "a 3D printer" and "material properties" must be shown, or the feature(s) must be canceled from claims 1-17. No new matter should be entered. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Interpretation

The following is a quotation of 35 U.S.C.
112(f):

(f) ELEMENT IN CLAIM FOR A COMBINATION.—An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step"), such as "means of a number of voxels", "means of a calculated structure", and "means of at least one logical operator", are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
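The three-prong test quoted above is effectively a small decision procedure and can be sketched as one. This is a hypothetical illustration only (the function and argument names are my own labels; the prong logic follows MPEP § 2181 as quoted):

```python
def invokes_112f(uses_means_or_nonce_term: bool,
                 has_functional_language: bool,
                 recites_sufficient_structure: bool) -> bool:
    """Sketch of the MPEP 2181 three-prong test for 35 U.S.C. 112(f).

    Prong (A): the limitation uses "means"/"step" or a generic placeholder.
    Prong (B): that term is modified by functional language ("for ...",
               "configured to ...", "so that ...").
    Prong (C): the term is NOT modified by sufficient structure, material,
               or acts for performing the claimed function.
    All three prongs must be met for 112(f) to be invoked.
    """
    return (uses_means_or_nonce_term
            and has_functional_language
            and not recites_sufficient_structure)

# "means of a number of voxels", as characterized in this Office action:
print(invokes_112f(True, True, False))  # True -> interpreted under 112(f)

# Reciting sufficient structure rebuts the presumption:
print(invokes_112f(True, True, True))   # False
```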
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 U.S.C. § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of pre-AIA 35 U.S.C. 112, second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for pre-AIA, the applicant) regards as the invention. Claim 1 recites "the properties". There is insufficient antecedent basis for this limitation in the claim. Therefore, claim 1 and its dependent claims are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.

Claims 1-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for pre-AIA, the applicant) regards as the invention. Claim 1 recites "the structure". It is noted that claim 1 previously recites "a first tree structure" and "a structure". Hence, it is not clear which structure "the structure" refers to. Therefore, claim 1 and its dependent claims are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.

Claims 5-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for pre-AIA, the applicant) regards as the invention. Claim 5 recites "the group". There is insufficient antecedent basis for this limitation in the claim. Therefore, claim 5 and its dependent claims are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.

Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for pre-AIA, the applicant) regards as the invention. Claim 7 recites "the attribute values". There is insufficient antecedent basis for this limitation in the claim. Therefore, claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. It is noted that claim 1 previously recites "an attribute value". Hence, an amendment to "the attribute value" will address the issue.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Young (US Patent 9,600,929 B1) ("Young") in view of Inziello et al. (US Patent 11,155,041 B2) ("Inziello").

Regarding claim 1, Young meets the claim limitations as follows. A method (a computer-implemented method) [Young: col. 4, line 45-46] for generating a digital model (generate a 3D voxel model) [Young: col. 3, line 41; Fig. 2] of a 3D object (volume elements (i.e., "voxels") are the base data used to represent 3D objects) [Young: col. 1, line 38-40; Fig.
1, 5A-5B] for subsequent printing by means of a 3D printer (a 3D printer readable file is loaded into a 3D printer to 3D-print a physical representation of the content of the model) [Young: col. 3, line 45-47], wherein the 3D object has an inner volume (I) ((3D models represent an internal structure of an object) [Young: col. 5, line 39-41]; (volume elements (i.e., "voxels") are the base data used to represent 3D objects) [Young: col. 1, line 38-40; Fig. 1, 5A-5B]), wherein the inner volume (I) is bounded by a surface (O) (volume models can contain all the surface and internal characteristics of a real object) [Young: col. 3, line 52-54], and wherein

- a first voxel model (VM) is generated (generate a 3D voxel model) [Young: col. 3, line 40-41], wherein

- the first voxel model (VM) (a 3D voxel model) [Young: col. 3, line 40-41] represents the 3D object by means of a number of voxels (VX) (As used herein, a "3D voxel model," a "set of 3D voxel data" and a "3D voxel data set" are used synonymously and interchangeably to refer to a group of voxels that collectively model a discrete object) [Young: col. 6, line 3-6; Figs. 1, 5B],

- a first number of voxels represent the inner volume (I) and a second number of voxels represent the surface (O) (Briefly, NGRAIN® technology permits 3D modeling of an object, wherein each of multiple parts or layers forming the object is represented as a voxel set each consisting of one or more voxels. According to NGRAIN® technology, it is possible to render a large number of voxel sets representing multiple parts or layers together, while allowing a user to manipulate each part or layer separately in 3D space. For example, the user may break up the parts or layers to display an exploded view of an object, or may peel off an outer layer of the object to reveal its inner layer) [Young: col. 2, line 1-10] – Note: Young discloses that his 3D model has an inner layer (i.e. inner volume) and an outer layer (i.e. the surface).
Each layer is represented by multiple voxels), and

- the number of voxels (VX) (a large number of voxel sets representing multiple parts or layers) [Young: col. 2, line 5-6] are stored (Typically the polar coordinates are then converted to 3D Cartesian coordinates and stored along with a corresponding intensity or color value for the data point collected by the scanner) [Young: col. 1, line 23-26] in a first tree structure (An octree=(2x2x2=8) is a tree data structure in which each internal node has eight children nodes, while a 64-tree=(4x4x4=64) is a tree data structure in which each internal node has sixty-four children nodes) [Young: col. 3, line 57-60], and

- a structure (S) is defined for a predetermined portion (A) of the inner volume (I) (An octree=(2x2x2=8) is a tree data structure in which each internal node has eight children nodes, while a 64-tree=(4x4x4=64) is a tree data structure in which each internal node has sixty-four children nodes.) [Young: col. 3, line 57-60], wherein a property is assigned to a number of volume regions of the portion (A) by means of the structure (S) (wherein each of multiple parts or layers forming the object is represented as a voxel set each consisting of one or more voxels. According to NGRAIN® technology, it is possible to render a large number of voxel sets representing multiple parts or layers together, while allowing a user to manipulate each part or layer separately in 3D space. For example, the user may break up the parts or layers to display an exploded view of an object, or may peel off an outer layer of the object to reveal its inner layer) [Young: col. 2, line 2-4; col. 3, line 57-60] – Note: The parts or layers in Young's invention disclose the property of the application),

- wherein the properties assigned to the volume regions of the portion (A) are stored ((wherein each of multiple parts or layers forming the object is represented as a voxel set each consisting of one or more voxels) [Young: col.
2, line 2-4]; (Typically the polar coordinates are then converted to 3D Cartesian coordinates and stored along with a corresponding intensity or color value for the data point collected by the scanner) [Young: col. 1, line 23-26]) as an attribute value of the voxels corresponding to the volume regions of the portion (A) (attributes information such as an RGB color value, a normal vector (which indicates a direction of illumination to define shine), an intensity value (which defines brightness and darkness), weight, density, temperature, etc. of each occupied voxel) [Young: col. 11, line 43-47], in the first tree structure (An octree=(2x2x2=8) is a tree data structure in which each internal node has eight children nodes) [Young: col. 3, line 57-58].

Young does not explicitly disclose the following claim limitations (emphasis added): wherein the inner volume (I) is bounded by a surface (O). However, in the same field of endeavor Inziello further discloses the deficient claim limitations as follows: wherein the inner volume (I) is bounded by a surface (O) (a plurality of subsurface voxels beneath the surface of the model) [Inziello: col. 2, line 61-62] – Note: The inner volumes are bounded beneath the surface normal; see Fig. 8). wherein a property is assigned to a number of volume regions of the portion (a method of generating optical properties for a voxel data structure via texture mapping and by using a combination of texture types (such as albedo maps, roughness maps, metalness maps, subsurface luminance maps, and other similar textures) to accurately reflect desired texture characteristics in a manufactured object) [Inziello: col. 1, line 21-26]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Young with Inziello to implement Inziello's method.
Therefore, the combination of Young with Inziello will enable the system to improve the accuracy of 3D printed objects [Inziello: col. 3, line 61-64].

Regarding claim 2, Young meets the claim limitations as set forth in claim 1. Young further meets the claim limitations as follows. calculating the structure (S) according to a calculation rule ((In some embodiments, a similarity value may be calculated as the ratio of a number of voxels that are the same between the two sets of 3D voxel data relative to a total number of voxels included in one of the two sets) [Young: col. 6, line 63-66; Fig. 1] - Note: The calculation rule in this case is the similarity).

In the same field of endeavor, Inziello also discloses the claim limitations as follows. calculating the structure (S) according to a calculation rule (The method may include a step of calculating a subsurface scattering map for each of the plurality of surface voxels and for each of the plurality of subsurface voxels. Moreover, the step of projecting the calculated texture map from the surface of the three-dimensional model to the center point of the model may include a step of assigning texture percentages to each of the plurality of subsurface voxels to create the texture gradient. The selected at least one texture and the selected at least one material may be calibrated for at least one of the plurality of voxels by comparing the selected at least one texture and the selected at least one material to the virtual model) [Inziello: col. 3, line 29-40]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Young with Inziello to implement Inziello's method. Therefore, the combination of Young with Inziello will enable the system to improve the accuracy of 3D printed objects [Inziello: col. 3, line 61-64].

Regarding claim 3, Young meets the claim limitations as set forth in claim 1.
Young further meets the claim limitations as follows. wherein a plurality of calculation rules is selected such that, with each calculation rule, the property is assigned to a volume region of the number of volume regions of the portion (A) (A dissimilarity value between the total normal (2, 4, 6) of the first set and the total normal (3, 3, 4) of the second set can be calculated as the Euclidian distance between the total normals, i.e., D^2=(2-3)^2+(4-3)^2+(6-4)^2=1+1+4=6.) [Young: col. 17, line 22-26; Fig. 6D] – Note: The property in this case is the Euclidian distance).

In the same field of endeavor, Inziello also discloses the claim limitations as follows. wherein a plurality of calculation rules is selected such that, with each calculation rule, the property is assigned to a volume region of the number of volume regions of the portion (A) (The generated voxel data structures are produced during the voxel slicing process, where the volumetric pixels are defined by their proximities to a pixel on the mapped texture and their distance from the surface normal of the 3D mesh) [Inziello: col. 5, line 47-50]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Young with Inziello to implement Inziello's method. Therefore, the combination of Young with Inziello will enable the system to improve the accuracy of 3D printed objects [Inziello: col. 3, line 61-64].

Regarding claim 4, Young meets the claim limitations as set forth in claim 3. Young further meets the claim limitations as follows.
wherein the volume regions are disjoint volume regions, each of which is assigned the property by means of a calculated structure (S), wherein the structure is calculated for at least two volume regions according to different calculation rules (A dissimilarity value between the total normal (2, 4, 6) of the first set and the total normal (3, 3, 4) of the second set can be calculated as the Euclidian distance between the total normals, i.e., D^2=(2-3)^2+(4-3)^2+(6-4)^2=1+1+4=6.) [Young: col. 17, line 22-26; Fig. 6D] – Note: The property in this case is the Euclidian distance).

In the same field of endeavor, Inziello also discloses the claim limitations as follows. wherein the volume regions are disjoint volume regions, each of which is assigned the property by means of a calculated structure (S) (FIG. 8 depicts a voxel location determination process in accordance with an embodiment of the present invention. As noted in FIG. 3B above, a cross-section of a voxel model includes a volume defined by a surface normal and a central point of the model. The surface normal position of the voxel model is assigned a value of 0, and the central point of the model is assigned a value of 1, such that the scale of values from the surface normal to the central point of the model is calculated from 0 to 1. The voxel locations (also referred to as voxel addresses) allow for small scale, intricate customization of the voxels based on the selected position. For example, as shown in FIG. 8, a subsurface scattering texture map is located at a surface normal position of 0.0. Addressable control texture maps at position 0.2 and position 0.35 are also shown, including a waveguide used to illuminate the object.) [Inziello: Fig. 8] - Note: Fig.
8 shows several volume regions, which are disjoint, distinguished by different values on the scale from the surface normal to the central point of the model), wherein the structure is calculated for at least two volume regions according to different calculation rules (Finally, the bottom-right portion of FIG. 6 depicts a color map for the subsurface of the post-luminance model, with the color map showing differences in the color at the surface of the model as compared with the subsurface of the model. The differences in color applied to the subsurface of the model, in accordance with the methods described above, contribute to the increased accuracy of the models generated through the methods described herein) [Inziello: col. 7, line 44-52]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Young with Inziello to implement Inziello's method. Therefore, the combination of Young with Inziello will enable the system to improve the accuracy of 3D printed objects [Inziello: col. 3, line 61-64].

Regarding claim 5, Young meets the claim limitations as set forth in claim 1. Young further meets the claim limitations as follows. wherein the property (the differences may be) [Young: col. 18, line 2; Figs. 6A-6D] is selected from the group at least comprising (to determine any differences therebetween) [Young: col. 3, line 37]:

- the volume regions of the portion (A) are volume regions that do not belong to the 3D object ((the octagon object occupies each of 32 voxels 31C but does not occupy four corner voxels 31CC in the first 3D voxel model) [Young: col. 9, line 27-29; Figs. 6A-6D]; (A voxel may be in A but not B, representing a voxel that has been removed in B relative to A. A voxel may be in B but not A, representing a voxel that has been added in B relative to A) [Young: col.
14, line 47 – 47]), so that the volume regions of the portion (A) that belong to the object (On the other hand, the circle object 32 occupies each of all 36 voxels 33C in the second 3D voxel model) [Young: col. 9, line 29-31] form a lattice-like structure (in which a mesh of polygons is used to represent only the surfaces of a 3D object) [Young: col. 1, line 55-56],

- material properties (Color diffing may be advantageous, for example, when change in the expected color might indicate material deterioration, rusting, etc., of an object. In such case, color diffing may be used to provide advance warning signs for potential "problem" areas of the object.) [Young: col. 15, line 24-29],

- the material to be used for 3D printing (the 3D-printed material) [Young: col. 18, line 25], and

- combinations thereof (Alternatively, as in the color diffing operation described above, the normal diffing operation may compare a combination of the two normals) [Young: col. 17, line 13-15].

In the same field of endeavor, Inziello also discloses the claim limitations as follows.

- the volume regions of the portion (A) that belong to the object form a lattice-like structure (FIG. 1, which depicts a surface of a voxel mesh colored in RGB and converted to a CMYK and white mixture to texture the surface of the mesh, with white material used on underlying voxels) [Inziello: col. 1, line 51-55],

- material properties (a variety of materials, including plastic (also referred to as an insulator component); pure metal; and transition or otherwise partial metal components) [Inziello: col. 7, line 62-64],

- the material to be used for 3D printing (Turning now to FIGS. 7A-7B, the methods of the present invention can be used to map metalness on a model to thereby more accurately depict variations in metallic qualities in a model or a final 3D printed object.
By providing accurate renderings of the metallic characteristics of an object, the methods described herein can provide an accurate texture map of a model of the object, and ultimately an accurate texture on a printed object. Accordingly, as shown in FIG. 7A, an initial model (shown on the left-side of FIG. 7A) includes a variety of materials, including plastic (also referred to as an insulator component); pure metal; and transition or otherwise partial metal components) [Inziello: col. 7, line 53-64; Figs. 7A-7B], and

- combinations thereof (The invention accordingly comprises the features of construction, combination of elements, and arrangement of parts that will be exemplified in the disclosure set forth hereinafter and the scope of the invention will be indicated in the claims) [Inziello: col. 4, line 1-4]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Young with Inziello to implement Inziello's method. Therefore, the combination of Young with Inziello will enable the system to improve the accuracy of 3D printed objects [Inziello: col. 3, line 61-64].

Regarding claim 6, Young meets the claim limitations as set forth in claim 5. Young further meets the claim limitations as follows. wherein the volume regions of the portion (A) are volume regions that do not belong to the 3D object ((the octagon object occupies each of 32 voxels 31C but does not occupy four corner voxels 31CC in the first 3D voxel model) [Young: col. 9, line 27-29; Figs. 6A-6D]; (A voxel may be in A but not B, representing a voxel that has been removed in B relative to A. A voxel may be in B but not A, representing a voxel that has been added in B relative to A) [Young: col.
14, line 47 – 47]), the volume regions of the portion (A) that belong to the object have a common boundary surface (On the other hand, the circle object 32 occupies each of all 36 voxels 33C in the second 3D voxel model) [Young: col. 9, line 29-31], wherein a distance (d) of the voxels to the boundary surface is stored (Typically the polar coordinates are then converted to 3D Cartesian coordinates and stored along with a corresponding intensity or color value for the data point collected by the scanner) [Young: col. 1, line 23-26] as a property in the respective voxel (A dissimilarity value between the total normal (2, 4, 6) of the first set and the total normal (3, 3, 4) of the second set can be calculated as the Euclidian distance between the total normals, i.e., D^2=(2-3)^2+(4-3)^2+(6-4)^2=1+1+4=6.) [Young: col. 17, line 22-26; Fig. 6D] – Note: The property in this case is the Euclidian distance).

In the same field of endeavor, Inziello also discloses the distance as follows. wherein a distance (d) of the voxels to the boundary surface is stored as a property in the respective voxel ((The generated voxel data structures are produced during the voxel slicing process, where the volumetric pixels are defined by their proximities to a pixel on the mapped texture and their distance from the surface normal of the 3D mesh) [Inziello: col. 5, line 47-50]; (As noted in FIG. 3B above, a cross-section of a voxel model includes a volume defined by a surface normal and a central point of the model. The surface normal position of the voxel model is assigned a value of 0, and the central point of the model is assigned a value of 1, such that the scale of values from the surface normal to the central point of the model is calculated from 0 to 1. The voxel locations (also referred to as voxel addresses) allow for small scale, intricate customization of the voxels based on the selected position. For example, as shown in FIG.
8, a subsurface scattering texture map is located at a surface normal position of 0.0. Addressable control texture maps at position 0.2 and position 0.35 are also shown, including a waveguide used to illuminate the object.) [Inziello: Fig. 3B, 8]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Young with Inziello to implement Inziello's method. Therefore, the combination of Young with Inziello will enable the system to improve the accuracy of 3D printed objects [Inziello: col. 3, line 61-64].

Regarding claim 7, Young meets the claim limitations as set forth in claim 1. Young further meets the claim limitations as follows. wherein control instructions (computer-executable instructions embodying a diffing algorithm may be loaded onto the computing device 70 (or the memory 72). The I/O interface 74 includes the necessary circuitry for such a connection and is constructed for use with the necessary protocols, as will be appreciated by those skilled in the art) [Young: col. 18, line 51-56; Fig. 7] for the 3D printer ((the object at which the physical representation printed by the 3D printer) [Young: col. 22, line 55-56]; (a 3D printer readable file is loaded into a 3D printer to 3D-print a physical representation of the content of the model) [Young: col. 3, line 45-47]) are derived from the attribute values (wherein the attribute value is selected from a group consisting of a color value, a normal value, and an intensity value) [Young: col. 21, line 34-36], stored in the first tree structure (Typically the polar coordinates are then converted to 3D Cartesian coordinates and stored along with a corresponding intensity or color value for the data point collected by the scanner) [Young: col.
1, line 23-26], of the voxels corresponding to the volume regions of the portion (A) (An octree (2x2x2=8) is a tree data structure in which each internal node has eight children nodes, while a 64-tree (4x4x4=64) is a tree data structure in which each internal node has sixty-four children nodes) [Young: col. 3, line 57-60]. Regarding claim 8, Young meets the claim limitations as set forth in claim 1. Young further meets the claim limitations as follows. wherein control instructions (computer-executable instructions embodying a diffing algorithm may be loaded onto the computing device 70 (or the memory 72). The I/O interface 74 includes the necessary circuitry for such a connection and is constructed for use with the necessary protocols, as will be appreciated by those skilled in the art) [Young: col. 18, line 51-56; Fig. 7] for the 3D printer ((the object at which the physical representation printed by the 3D printer) [Young: col. 22, line 55-56]; (a 3D printer readable file is loaded into a 3D printer to 3D-print a physical representation of the content of the model) [Young: col. 3, line 45-47]) are derived from the attribute values (wherein the attribute value is selected from a group consisting of a color value, a normal value, and an intensity value) [Young: col. 21, line 34-36], stored (Typically the polar coordinates are then converted to 3D Cartesian coordinates and stored along with a corresponding intensity or color value for the data point collected by the scanner) [Young: col. 1, line 23-26] in the first tree structure of the voxels corresponding to the volume regions of the portion (A) (An octree (2x2x2=8) is a tree data structure in which each internal node has eight children nodes, while a 64-tree (4x4x4=64) is a tree data structure in which each internal node has sixty-four children nodes) [Young: col. 3, line 57-60]. Regarding claim 9, Young meets the claim limitations as set forth in claim 8. Young further meets the claim limitations as follows.
wherein combining the first voxel model and the second voxel model comprises merging by means of at least one logical operator ((The user may then visually observe the differences, may choose one 3D voxel model out of plural 3D voxel models based on evaluation of the differences, or may combine the 3D voxel models to create one optimal 3D voxel model by, for example, taking an average of the differences (averaging out the differences)) [Young: col. 8, line 41-46]. Regarding claim 10, Young meets the claim limitations as set forth in claim 8. Young further meets the claim limitations as follows. wherein the resultant voxel model is the first voxel model or the second voxel model (The user may then visually observe the differences, may choose one 3D voxel model out of plural 3D voxel models based on evaluation of the differences, or may combine the 3D voxel models to create one optimal 3D voxel model) [Young: col. 8, line 41-45]. Regarding claim 11, Young meets the claim limitations as set forth in claim 8. Young further meets the claim limitations as follows. wherein a first spatial reference point is defined for the voxels of the first voxel model, and wherein a second spatial reference point is defined for the voxels of the second voxel model (In either case, the location of each point scanned is represented as a polar coordinate since the angle between the scanner and the object and distance from the scanner to the object are known. Typically the polar coordinates are then converted to 3D Cartesian coordinates and stored along with a corresponding intensity or color value for the data point collected by the scanner) [Young: col.
1, line 13-26], wherein the relative position of the two spatial reference points to one another is included in the combination of the two voxel models (The user may then visually observe the differences, may choose one 3D voxel model out of plural 3D voxel models based on evaluation of the differences, or may combine the 3D voxel models to create one optimal 3D voxel model) [Young: col. 8, line 41-45]. In the same field of endeavor, Inziello also discloses this limitation as follows. wherein the relative position of the two spatial reference points to one another is included in the combination of the two voxel models (To produce the desired optical properties under the methods represented in subsections (b)-(d), the methods described herein include a step of building a voxel counterpart to the model, thereby creating sections of the model of equal size, shape, and area. The voxel counterpart may be referred to as a mesh, and during the voxelization of the mesh, the method includes a step of measuring the mesh's texture set, roughness map, color map, subsurface scattering, transparency map, and other useful metrics to determine accurate optical properties. The method then builds a projection of voxel cells from the surface of the mesh to a center point of the model, thereby creating a subsurface spectrum of optical properties. The percentage and type of materials deposited in each voxel cell of the model are calibrated to approximate the same light transmission found in a real-time PBR renderer. As such, the methods of the present invention result in highly accurate 3D printed objects after the slices of the mesh and model are rendered as .png images and transmitted to a 3D printer for the creation of a physical object) [Inziello: col.
6, line 6-26]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Young with Inziello to program the system to implement Inziello’s method. Therefore, the combination of Young with Inziello will enable the system to improve the accuracy of 3D printed objects [Inziello: col. 3, line 61-64]. Regarding claim 12, Young meets the claim limitations as set forth in claim 8. Young further meets the claim limitations as follows. wherein the first voxel model has a first spatial resolution (1/8 voxel resolution) [Young: Fig. 5B] and the second voxel model has a second spatial resolution (1/4 voxel resolution) [Young: Fig. 5B], wherein the first spatial resolution is different from the second spatial resolution (FIG. 5B illustrates a process of converting a first set of 3D voxel data 55A having a higher LOD to a modified first set of 3D voxel data 55B having a lower LOD that matches the LOD of a second set of 3D voxel data 56. In the illustrated example, the LOD of the first set of 3D voxel data 55A is 1/8 (inch/voxel) and the LOD of the second set of 3D voxel data 56 is 1/4 (inch/voxel). The first set of 3D voxel data 55A having LOD of 1/8 is converted down to the modified first set of 3D voxel data 55B having LOD of 1/4, which is the same as the LOD of the second set of 3D voxel data 56) [Young: col. 12, line 8-17; Fig. 5B]. Regarding claim 13, Young meets the claim limitations as set forth in claim 12. Young further meets the claim limitations as follows. wherein the second voxel model has a third spatial resolution in a volume corresponding to the predetermined portion of the inner volume (I) (In typical applications according to various embodiments of the present invention, 3D voxel models can be created to reflect a LOD of 1/2-1/10 (inch/voxel) resolution.
For example, an object that is 5 inches long may be mapped to voxels along a 3D voxel axis, at the LOD of 5/25=1/5 (inch/voxel)) [Young: col. 10, line 4-9; Fig. 5B]. Regarding claim 14, Young meets the claim limitations as set forth in claim 2. Young further meets the claim limitations as follows. wherein the structures (S) are calculated at least partially in parallel (According to various embodiments of the present invention an occupied or empty state of each voxel is indicated by 1 bit (e.g., 1 for an occupied voxel and 0 for an empty voxel) and the two sets of 3D voxel data are arranged in a 64-tree structure. Corresponding to the use of the 64-tree structure, according to various embodiments of the present invention, a 64-bit architecture processor is used to rapidly make a one-to-one comparison of bits indicative of occupied or empty states of voxels arranged in the 64-tree data structure. Briefly, a 64-bit architecture processor, which has become prevalent in recent years, has an arithmetic logical unit (ALU) data width of 64 bits and also a memory address width of 64 bits. The 64-bit ALU has a bus size of 64 bits such that it can fetch 64 bits of data at a time and can add/subtract/multiply two 64-bit numbers in one instruction) [Young: col. 14, line 19-33; col. 4, line 8-17] – Note: Young describes that the 64-bit processor can compare two sets of data arranged in two 64-tree structures. Hence, 64 one-to-one bit comparisons can be done in parallel in a single operation). Regarding claim 15, Young meets the claim limitations as set forth in claim 8. Young further meets the claim limitations as follows. wherein several second voxel models are generated for several structures ((In general, voxel models are rarely the starting point for diffing, but by first converting non-voxel models to voxel models as needed, one could perform rapid diffing of any two types of 3D models, such as a polygon model vs.
a point cloud model, between two polygon models, between two point cloud models, etc., according to various embodiments of the present invention. As used herein, a "3D voxel model," a "set of 3D voxel data" and a "3D voxel data set" are used synonymously and interchangeably to refer to a group of voxels that collectively model a discrete object) [Young: col. 5, line 63 – col. 6, line 6]; (When multiple 3D voxel models are created to represent a single object, differencing may be performed to identify any differences that may exist among the multiple 3D voxel models. As a specific example, a 3D voxel model of an object may be created by 3D scanning the object to generate a point cloud model and transforming the point cloud model to the 3D voxel model. Another 3D voxel model of the same object may be created by transforming a polygon-mesh model, a spline-based model, or a CAD model of the object to the 3D voxel model. Though these 3D voxel models represent the same object and are supposed to be identical, discrepancy may be introduced due to imperfection in the original modeling or in methods used to transform the original models to the 3D voxel models. Even when the same 3D scanner is used to scan the same object multiple times, resulting point cloud models may be different depending on a particular arrangement used in each scanning session such as the orientation of the object relative to the 3D scanner. The user may then visually observe the differences, may choose one 3D voxel model out of plural 3D voxel models based on evaluation of the differences, or may combine the 3D voxel models to create one optimal 3D voxel model by, for example, taking an average of the differences (averaging out the differences) [Young: col.
8, line 18 – 46]), wherein several second voxel models are combined with the first voxel model in parallel (According to various embodiments of the present invention an occupied or empty state of each voxel is indicated by 1 bit (e.g., 1 for an occupied voxel and 0 for an empty voxel) and the two sets of 3D voxel data are arranged in a 64-tree structure. Corresponding to the use of the 64-tree structure, according to various embodiments of the present invention, a 64-bit architecture processor is used to rapidly make a one-to-one comparison of bits indicative of occupied or empty states of voxels arranged in the 64-tree data structure. Briefly, a 64-bit architecture processor, which has become prevalent in recent years, has an arithmetic logical unit (ALU) data width of 64 bits and also a memory address width of 64 bits. The 64-bit ALU has a bus size of 64 bits such that it can fetch 64 bits of data at a time and can add/subtract/multiply two 64-bit numbers in one instruction) [Young: col. 14, line 19-33; col. 4, line 8-17] – Note: Young describes that the 64-bit processor can compare two sets of data arranged in two 64-tree structures. Hence, 64 one-to-one bit comparisons can be done in parallel in a single operation). Regarding claim 16, Young meets the claim limitations as set forth in claim 8. Young further meets the claim limitations as follows. wherein the second voxel model is combined with the first voxel model multiple times (Such visualization and quantification of differences between two sets of 3D voxel data resulting from diffing the two sets are highly useful in various applications and settings. For example, when multiple 3D voxel models are created to represent a single object, differencing may be performed to identify any differences that may exist among the multiple 3D voxel models.
As a specific example, a 3D voxel model of an object may be created by 3D scanning the object to generate a point cloud model and transforming the point cloud model to the 3D voxel model. Another 3D voxel model of the same object may be created by transforming a polygon-mesh model, a spline-based model, or a CAD model of the object to the 3D voxel model. Though these 3D voxel models represent the same object and are supposed to be identical, discrepancy may be introduced due to imperfection in the original modeling or in methods used to transform the original models to the 3D voxel models. Even when the same 3D scanner is used to scan the same object multiple times, resulting point cloud models may be different depending on a particular arrangement used in each scanning session such as the orientation of the object relative to the 3D scanner. The user may then visually observe the differences, may choose one 3D voxel model out of plural 3D voxel models based on evaluation of the differences, or may combine the 3D voxel models to create one optimal 3D voxel model by, for example, taking an average of the differences (averaging out the differences)) [Young: col. 8, line 15 – 46], preferably in parallel (According to various embodiments of the present invention an occupied or empty state of each voxel is indicated by 1 bit (e.g., 1 for an occupied voxel and 0 for an empty voxel) and the two sets of 3D voxel data are arranged in a 64-tree structure. Corresponding to the use of the 64-tree structure, according to various embodiments of the present invention, a 64-bit architecture processor is used to rapidly make a one-to-one comparison of bits indicative of occupied or empty states of voxels arranged in the 64-tree data structure. Briefly, a 64-bit architecture processor, which has become prevalent in recent years, has an arithmetic logical unit (ALU) data width of 64 bits and also a memory address width of 64 bits.
The 64-bit ALU has a bus size of 64 bits such that it can fetch 64 bits of data at a time and can add/subtract/multiply two 64-bit numbers in one instruction) [Young: col. 14, line 19-33; col. 4, line 8-17] – Note: Young describes that the 64-bit processor can compare two sets of data arranged in two 64-tree structures. Hence, 64 one-to-one bit comparisons can be done in parallel in a single operation). Regarding claim 17, Young meets the claim limitations as set forth in claim 2. Young further meets the claim limitations as follows. wherein the calculation rule comprises at least one implicit function (When a routine compares the bits representing occupancy in two arbitrary models A and B, there are four (4) possibilities. A voxel may be in both A and B, representing similarity. A voxel may be in neither A nor B, representing similarity in empty space. A voxel may be in A but not B, representing a voxel that has been removed in B relative to A. A voxel may be in B but not A, representing a voxel that has been added in B relative to A. Note that the latter two cases can be performed by the single fast XOR 'exclusive or' function. The former two cases can be performed by the single fast equality function. Each of the four cases may be important to detect and identify different information) [Young: col. 14, line 40 – 51]. Reference Notice Additional prior art, included in the Notice of References Cited, made of record and not relied upon, is considered pertinent to applicant's disclosure. Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip Dang whose telephone number is (408) 918-7529. The examiner can normally be reached on Monday-Thursday between 8:30 am - 5:00 pm (PST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath Perungavoor, can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Philip P. Dang/
Primary Examiner, Art Unit 2488
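Editor's note on the claim 1 mapping above: the Inziello passage the examiner cites (Fig. 3B) assigns each voxel a normalized depth property, 0.0 at the boundary surface and 1.0 at the model's central point. A minimal Python sketch of that 0-to-1 scale, with class and function names that are illustrative only (not from either reference):

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    position: tuple        # voxel address within the model
    depth: float = 0.0     # 0.0 at the surface, 1.0 at the central point

def assign_depth(dist_to_surface, surface_to_center):
    """Normalize a voxel's distance from the boundary surface onto the
    0-to-1 scale Inziello describes, stored per voxel as a property."""
    return dist_to_surface / surface_to_center

v = Voxel(position=(3, 1, 4))
v.depth = assign_depth(2.0, 10.0)  # 2 units below a surface 10 units from center
print(v.depth)  # 0.2
```

This is a sketch of the cited 0-to-1 convention only; Inziello additionally keys texture maps (e.g. at positions 0.2 and 0.35) to these depth addresses.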
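The dissimilarity value the examiner quotes from Young (col. 17, Fig. 6D) is an ordinary squared Euclidean distance between two "total normal" vectors. A short sketch reproducing the cited arithmetic (the function name is illustrative, not Young's):

```python
def squared_distance(a, b):
    """Squared Euclidean distance between two vectors:
    D^2 = (2-3)^2 + (4-3)^2 + (6-4)^2 = 6 in Young's example."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

total_normal_first = (2, 4, 6)   # total normal of the first voxel set
total_normal_second = (3, 3, 4)  # total normal of the second voxel set
print(squared_distance(total_normal_first, total_normal_second))  # 6
```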
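The claim 17 passage (Young col. 14) distinguishes added and removed voxels with word-wide XOR and identical blocks with equality. Under the assumption that a 64-tree node's occupancy bits are packed into a single 64-bit integer, as Young's 64-bit ALU discussion suggests, the four cases can be sketched as:

```python
def diff_nodes(node_a: int, node_b: int):
    """Diff two occupancy words (1 = occupied, 0 = empty).
    XOR yields voxels present in exactly one model in a single
    word-wide operation; masking splits them into added/removed."""
    changed = node_a ^ node_b    # differs between A and B
    added = changed & node_b     # occupied in B but not A
    removed = changed & node_a   # occupied in A but not B
    return added, removed

a = 0b1100  # voxels 2 and 3 occupied in model A (low bits of a 64-bit word)
b = 0b1010  # voxels 1 and 3 occupied in model B
added, removed = diff_nodes(a, b)
print(bin(added), bin(removed))  # 0b10 0b100
```

Identical nodes (the "similarity" cases) are detected simply by `node_a == node_b`, Young's "single fast equality function."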
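For the claim 12 resolution limitation, Young Fig. 5B converts a 1/8 inch/voxel model down to 1/4 inch/voxel so the two models share an LOD, i.e. each 2x2x2 block of fine voxels collapses into one coarse voxel. A sketch assuming a dense boolean grid and an any-child-occupied pooling rule (Young does not specify the rule, so this is an assumption):

```python
def downsample(grid):
    """Halve the resolution of a dense boolean voxel grid
    (e.g. 1/8 -> 1/4 inch/voxel): each 2x2x2 block of fine voxels
    becomes one coarse voxel, occupied if any of its 8 children is."""
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    return [[[any(grid[2 * x + dx][2 * y + dy][2 * z + dz]
                  for dx in range(2) for dy in range(2) for dz in range(2))
              for z in range(nz // 2)]
             for y in range(ny // 2)]
            for x in range(nx // 2)]

# A 2x2x2 grid with one occupied voxel collapses to a single occupied voxel.
fine = [[[False, False], [False, True]], [[False, False], [False, False]]]
print(downsample(fine))  # [[[True]]]
```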

Prosecution Timeline

May 08, 2024
Application Filed
Nov 12, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602837
ON SUB-DIVISION OF MESH SEQUENCES
2y 5m to grant Granted Apr 14, 2026
Patent 12593116
IMAGING MEASUREMENT DEVICE USING GAS ABSORPTION IN THE MID-INFRARED BAND AND OPERATING METHOD OF IMAGING MEASUREMENT DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12581069
METHOD FOR ENCODING/DECODING VIDEO SIGNAL, AND APPARATUS THEREFOR
2y 5m to grant Granted Mar 17, 2026
Patent 12581106
IMAGE DECODING METHOD AND DEVICE THEREFOR
2y 5m to grant Granted Mar 17, 2026
Patent 12574557
SCALABLE VIDEO CODING USING BASE-LAYER HINTS FOR ENHANCEMENT LAYER MOTION PARAMETERS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+33.2%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 470 resolved cases by this examiner. Grant probability derived from career allow rate.
