DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending.
Claim Objections
Claim 16 is objected to because of the following informalities:
Claim 16 recites “the metal of claim 12” which appears to contain a typographical error. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method, which falls within a statutory category.
Step 2A, Prong One: Claim 1 recites the steps of “modifying a plurality of digital materials of a lighting-modified 3D digital model to generate a materials-modified 3D digital model based on a plurality of simulated characteristics,” “modifying one or more model parameters of the materials-modified 3D digital model to generate a parameters-modified 3D digital model,” and “transforming an interaction between a modified digital lighting and a modified digital material in the parameters-modified 3D digital model to generate a transformed 3D digital model.” As is evident from the background, these model-generating simulations fall into the “mental processes” grouping of abstract ideas, because the recited simulations can be practically performed in the human mind. Note that even if most humans would use a physical aid (e.g., pen and paper, a slide rule, or a calculator) to help them complete the recited determinations, the use of such a physical aid does not negate the mental nature of these limitations. For example, modifying parameters of a 3D model to determine a modified 3D model based on a plurality of simulated characteristics can be practically performed in the human mind with the aid of pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “mental processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A, Prong Two: Beyond the abstract idea, the claim lacks any additional element that would integrate the abstract idea into a practical application. The claim is therefore directed to an abstract idea.
Step 2B: The claim as a whole does not amount to significantly more than the recited exception. The claim is not eligible.
Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Dependent claim 2 recites the additional element “rasterizing the transformed 3D digital model into a parameter space of a 3D model file to generate a rasterized 3D digital model.” This judicial exception is not integrated into a practical application because the additional element is recited at a high level of generality (i.e., as a generic computing system performing a generic computing function), such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim as a whole does not amount to significantly more than the recited exception; mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Even when considered in combination, the additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible.
Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Dependent claim 17 recites the limitation “an ambient occlusion parameter of the materials-modified 3D digital model is an occlusion integral”; this limitation covers performance in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “mental processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. The claim lacks any additional elements that would integrate it into a practical application or amount to significantly more than the abstract idea itself.
Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Dependent claim 18 recites the step of “scaling the ambient occlusion parameter of the materials-modified 3D digital model to a lower non-zero value”; this step covers performance in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “mental processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. The claim lacks any additional elements that would integrate it into a practical application or amount to significantly more than the abstract idea itself.
Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Dependent claim 19 recites the step of “modifying the plurality of digital materials of the lighting-modified 3D digital model is implemented by using a linear weighted superposition of one or more image parameters and one or more conjugate parameters”; this step covers performance in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “mental processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. The claim lacks any additional elements that would integrate it into a practical application or amount to significantly more than the abstract idea itself.
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Dependent claim 20 recites the limitation “the transformed 3D digital model is view-independent”; this limitation covers performance in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “mental processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. The claim lacks any additional elements that would integrate it into a practical application or amount to significantly more than the abstract idea itself.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by DELGADO et al. (hereinafter “DELGADO”).
As to claim 1, DELGADO teaches a method for modifying a three-dimensional (3D) model, comprising:
modifying a plurality of digital materials of a lighting-modified 3D digital model [providing an unlit 3D model of the object by deleting or removing at least a portion of the lighting] to generate a materials-modified 3D digital model based on a plurality of simulated characteristics [modifying the texture map of the 3D model to include material models that define the materials of the object] [0104-0105, 0139-0140, 0145];
modifying one or more model parameters of the materials-modified 3D digital model to generate a parameters-modified 3D digital model [add an environment map to the 3D model for modification] [0108-0109, 0135, 0144]; and
transforming an interaction between a modified digital lighting and a modified digital material in the parameters-modified 3D digital model to generate a transformed 3D digital model [0146-0147].
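For orientation, the three steps of claim 1 as mapped above can be sketched as a minimal pipeline. All function names and the dict-based model representation below are hypothetical illustrations; they are not drawn from DELGADO or from the claims as filed.

```python
# Hypothetical sketch of the three steps recited in claim 1.
# The dict-based model and all names are illustrative only.

def modify_materials(lighting_modified_model, simulated_characteristics):
    """Step 1: modify digital materials based on simulated characteristics."""
    model = dict(lighting_modified_model)
    model["materials"] = [
        {**m, "characteristics": simulated_characteristics}
        for m in model["materials"]
    ]
    return model  # materials-modified 3D digital model

def modify_parameters(materials_modified_model, **params):
    """Step 2: modify one or more model parameters (e.g., an environment map)."""
    model = dict(materials_modified_model)
    model["parameters"] = {**model.get("parameters", {}), **params}
    return model  # parameters-modified 3D digital model

def transform_light_material_interaction(parameters_modified_model):
    """Step 3: transform the lighting/material interaction (placeholder rule)."""
    model = dict(parameters_modified_model)
    model["interaction"] = "baked"  # stand-in for an actual shading transform
    return model  # transformed 3D digital model

m0 = {"materials": [{"name": "paint"}], "lighting": "unlit"}
m1 = modify_materials(m0, simulated_characteristics=["gloss"])
m2 = modify_parameters(m1, environment_map="studio")
m3 = transform_light_material_interaction(m2)
```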
As to claim 17, DELGADO teaches an ambient occlusion parameter of the materials-modified 3D digital model is an occlusion integral [0105-0108, 0127-0128].
As to claim 18, DELGADO teaches scaling the ambient occlusion parameter of the materials-modified 3D digital model to a lower non-zero value [0105-0108, 0127-0128].
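The occlusion-integral and scaling limitations of claims 17-18 can be illustrated numerically. The cosine-weighted occlusion integral below and its Monte Carlo estimator are the standard textbook formulation, assumed here for illustration; they are not a reproduction of DELGADO's disclosure, and `scale_ambient_occlusion` is a hypothetical helper.

```python
import math
import random

def occlusion_integral(visibility, n_samples=10000, seed=0):
    """Monte Carlo estimate of AO(p) = (1/pi) * integral over the hemisphere of
    V(w) * (n . w) dw; with cosine-weighted sampling this reduces to mean(V)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Cosine-weighted hemisphere sample (Malley's method on the unit disk).
        u1, u2 = rng.random(), rng.random()
        r, phi = math.sqrt(u1), 2.0 * math.pi * u2
        x, y = r * math.cos(phi), r * math.sin(phi)
        z = math.sqrt(max(0.0, 1.0 - u1))  # cosine term folded into sampling
        total += visibility((x, y, z))
    return total / n_samples

def scale_ambient_occlusion(ao, factor=0.5, floor=1e-3):
    """Claim-18-style scaling: reduce AO to a lower, strictly non-zero value."""
    return max(floor, ao * factor)

# A fully unoccluded point integrates to 1.0.
ao_open = occlusion_integral(lambda w: 1.0)
# Hypothetical occluder blocking all directions below 45 degrees elevation.
ao_blocked = occlusion_integral(lambda w: 1.0 if w[2] > math.sqrt(0.5) else 0.0)
```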
As to claim 19, DELGADO teaches the modifying the plurality of digital materials of the lighting-modified 3D digital model is implemented by using a linear weighted superposition of one or more image parameters and one or more conjugate parameters [0104-0109, 0135-0140, 0144-0145].
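The linear weighted superposition recited in claim 19 can be sketched generically as a weighted sum over two parameter families. The parameter values and weights below are illustrative assumptions only; DELGADO's cited paragraphs are not reproduced here.

```python
def linear_weighted_superposition(image_params, conjugate_params, weights):
    """Blend parameters as sum_i w_i * p_i + sum_j v_j * q_j, where `weights`
    is a pair (w, v): one weight list per parameter family. Illustrative only."""
    w, v = weights
    if len(w) != len(image_params) or len(v) != len(conjugate_params):
        raise ValueError("one weight per parameter is required")
    return (sum(wi * pi for wi, pi in zip(w, image_params))
            + sum(vj * qj for vj, qj in zip(v, conjugate_params)))

# Hypothetical image parameters blended with two conjugate terms.
blended = linear_weighted_superposition(
    image_params=[0.8, 0.2],      # e.g., albedo and roughness samples
    conjugate_params=[0.5, 0.1],  # e.g., complementary (conjugate) terms
    weights=([0.6, 0.4], [0.3, 0.7]),
)
```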
As to claim 20, DELGADO teaches the transformed 3D digital model is view-independent [0055-0058, 0073-0074, 0147-0148].
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-16 are rejected under 35 U.S.C. 103 as being unpatentable over DELGADO in view of Hillier et al. (US 20230041333 A1) (hereinafter “Hillier”).
As to claim 2, DELGADO teaches modifying the parameters of the 3D digital model to generate a transformed 3D digital model [0146-0147]. DELGADO does not explicitly teach rasterizing the transformed 3D digital model into a parameter space of a 3D model file to generate a rasterized 3D digital model.
However, Hillier teaches a method and system for generating a printed three-dimensional (3D) object that includes converting a 3D print file representing the 3D object to at least one vector file representing the 3D object. In particular, Hillier teaches rasterizing the transformed 3D digital model into a parameter space of a 3D model file to generate a rasterized 3D digital model [Abstract, 0040, 0048-0051].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teachings of Hillier with the teachings of DELGADO for the purpose of processing and converting a 3D print file to produce at least one rasterized vector file for 3D printing.
As to claim 3, Hillier teaches providing the rasterized 3D digital model to a 3D printer to create a 3D print [Abstract, 0040, 0048-0051].
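For context on the rasterization addressed in claims 2-3, writing per-vertex parameter values into a texel grid indexed by UV coordinates is a common way to rasterize a model into a parameter space. The sketch below is a generic, hypothetical UV-baking illustration with nearest-texel writes; it is not a reproduction of Hillier's vector-file conversion.

```python
def rasterize_to_parameter_space(vertices, width, height):
    """Splat per-vertex parameter values into a (height x width) texel grid
    indexed by UV coordinates in [0, 1). Generic sketch, illustrative only."""
    grid = [[0.0] * width for _ in range(height)]
    for (u, v, value) in vertices:
        col = min(width - 1, int(u * width))
        row = min(height - 1, int(v * height))
        grid[row][col] = value  # nearest-texel write; no filtering or dilation
    return grid

# Three hypothetical vertices with UVs and a scalar parameter (e.g., roughness).
tex = rasterize_to_parameter_space(
    [(0.0, 0.0, 0.9), (0.5, 0.5, 0.4), (0.99, 0.99, 0.1)],
    width=4, height=4,
)
```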
As to claim 4, DELGADO teaches modifying a digital lighting of a three-dimensional (3D) digital model to generate the lighting-modified 3D digital model [0104-0109, 0135-0140, 0144-0145].
As to claim 5, DELGADO teaches configuring the lighting-modified 3D digital model to attenuate one or more directional light paths [0104-0109, 0125-0128, 0134-0140].
As to claim 6, DELGADO teaches attenuating the one or more directional light paths by dimming the digital lighting from a specular reflection [0104-0109, 0125-0128, 0134-0140].
As to claim 7, DELGADO teaches configuring the 3D digital model to use a physically based rendering (PBR) to store one or more image parameters for each point of a surface of the 3D digital model to represent one or more simulated characteristics [0105-0106, 0138-0147, 0157-0158].
As to claim 8, DELGADO teaches using the one or more image parameters as inputs to a shading calculation [0105-0106, 0117-0120].
As to claim 9, DELGADO teaches using the physically based rendering (PBR) to approximate a bidirectional reflectance distribution function (BRDF) and a rendering equation [0105-0106, 0138-0147, 0157-0158].
As to claim 10, DELGADO teaches the bidirectional reflectance distribution function (BRDF) describes one or more reflectance properties of the surface as a function of lighting geometry and observation geometry [0105-0106, 0138-0147, 0157-0158].
As to claim 11, DELGADO teaches the rendering equation defines a relationship between an incident illumination function and a reflected illumination function using the bidirectional reflectance distribution function (BRDF) [0105-0106, 0138-0147, 0157-0158].
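For reference, the bidirectional reflectance distribution function and rendering equation discussed in claims 9-11 have the following standard textbook forms; this is the generic formulation, not DELGADO's notation:

```latex
% BRDF: ratio of reflected radiance to incident irradiance at surface point p
f_r(\omega_i, \omega_o) =
  \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,(\mathbf{n}\cdot\omega_i)\,\mathrm{d}\omega_i}

% Rendering equation: reflected illumination from incident illumination via the BRDF
L_o(\mathbf{p}, \omega_o) = L_e(\mathbf{p}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{p}, \omega_i, \omega_o)\,
    L_i(\mathbf{p}, \omega_i)\,(\mathbf{n}\cdot\omega_i)\,\mathrm{d}\omega_i
```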
As to claim 12, DELGADO teaches the one or more image parameters include at least one of the following: an albedo color metric, a roughness metric, a metalness metric, or a transparency metric [0102-0110, 0123-0128].
As to claim 13, DELGADO teaches the albedo color metric is a numeric representation of relative reflectance versus wavelength λ over a portion of an electromagnetic spectrum in a propagation medium for the plurality of digital materials [0102-0110, 0123-0128].
As to claim 14, DELGADO teaches the roughness metric is a numeric representation of a variation of a surface height relative to a reference surface for the plurality of digital materials [0102-0110, 0123-0128].
As to claim 15, DELGADO teaches the metalness metric is a numeric representation of metal proportion for the plurality of digital materials [0102-0110, 0123-0128].
As to claim 16, DELGADO teaches the transparency metric is a numeric representation of transmissivity through a surface of the plurality of digital materials [0102-0110, 0123-0128].
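The four image parameters characterized in claims 12-16 can be gathered into a small per-point container. The class below is a hypothetical sketch; the [0, 1] normalization is a common PBR convention assumed here, not a requirement stated in the claims or in DELGADO.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PBRImageParameters:
    """Per-point PBR image parameters as characterized in claims 12-16.
    The [0, 1] normalization is an assumed convention, illustrative only."""
    albedo: float        # relative reflectance vs. wavelength (claim 13)
    roughness: float     # surface-height variation vs. a reference (claim 14)
    metalness: float     # metal proportion of the material (claim 15)
    transparency: float  # transmissivity through the surface (claim 16)

    def __post_init__(self):
        for name in ("albedo", "roughness", "metalness", "transparency"):
            value = getattr(self, name)
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{name} must lie in [0, 1], got {value}")

# Hypothetical values for an opaque brushed-metal material.
brushed_metal = PBRImageParameters(albedo=0.56, roughness=0.35,
                                   metalness=1.0, transparency=0.0)
```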
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIPENG WANG whose telephone number is (571)272-5437. The examiner can normally be reached Monday-Friday 10-7.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Lee, can be reached at 571-272-3667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZHIPENG WANG/Primary Examiner, Art Unit 2115
1 [0104] The texture map 416 may be a 2D image or other data structure representing the texture of the object 450. The texture map 416 may be mapped to the mesh 414 to provide an appearance (e.g., a color and/or a pattern) for the virtual surfaces defined by the mesh 414. Each pixel of the texture map 416 may correspond to, and provide detail for, a respective location on the virtual surfaces. At least a portion of the texture map 416 may be derived from measurements of the object 450. For example, the texture map 416 may include or be based on photographs and/or 3D scans of the object 450. The texture map 416 may also or instead be at least partially computer-generated through user input at a user device. In some implementations, the texture map 416 may be generated based on one or more material models that simulate the materials forming the surfaces of the object 450. For example, the texture map 416 may be segmented based on the different materials on the surfaces of the object 450, and a different material model may be used in the texture map 416 to define each of the different materials. Optionally, the texture map 416 may include 3D information for the surfaces of the object 450, such as a height map, for example. The height map may store surface elevation data to simulate bumps and wrinkles on the surfaces of the object 450. The height map may be used in bump mapping to simulate shadows on the surfaces of the object 450 and/or may be used in displacement mapping to simulate a 3D textured surface.
[0105] The lighting 418 defines the lighting conditions for the 3D model 312. The lighting 418 may provide object-oriented lighting and/or global illumination for the 3D model 412. Possible lighting models that may be used for simulating lighting in a 3D model include the Lambert model, the Phong illumination model, the Blinn-Phong illumination model, radiosity, ray tracing, beam tracing, cone tracing, path tracing, volumetric path tracing, Metropolis light transport, ambient occlusion, photon mapping, signed distance field and image-based lighting, for example. In some implementations, the lighting 418 is used to simulate the illumination of the mesh 414 when rendering the 3D model 412. This may include calculating the properties of light (e.g., the light color and/or intensity) that is incident on each virtual surface of the mesh 414. After determining the illumination on each virtual surface of the mesh 414, shading may be performed to calculate how each virtual surface appears as a result of that illumination. For example, shading may simulate light interactions on the virtual surfaces of the mesh 414. Shading of a virtual surface may be based on, inter alia, the material properties of the virtual surface, which may be defined by the texture map 416. For example, the texture map 416 may include material models that simulate how the different materials of the object 450 appear under different light intensity and/or color. These material models may include equations that define the diffuse, ambient and/or specular light interactions for the materials. Using the simulated illumination on a particular material, a material model for that material may output the appearance of the material. A bump map may further be used to simulate shadows on the virtual surfaces of the mesh 414. In this way, the lighting 418 may be used in conjunction with the mesh 414 and the texture map 416 to simulate the appearance of the object 450 in renders/renderings of the 3D model 412.
2 [0139] In some implementations, step 906 includes normalizing the lighting of the object 450 as depicted in the 3D model 412 to obtain normalized lighting for the 3D model 412. For example, the texture map 416 and/or the lighting 418 may be normalized before applying the lighting template to the 3D model 412. This normalization may be useful if, for example, the 3D model 412 was generated using images of the object 450 and/or its surrounding environment under substandard lighting conditions. The 3D model 412 may be modified to help remove the effects of the substandard lighting conditions and allow the lighting template to be accurately recreated in the modified 3D model. In some implementations, normalizing the lighting 418 may include deleting or removing at least a portion of the lighting 418 from the 3D model 412. This may provide an unlit 3D model of the object 450 that the lighting template may be added to. Alternatively or additionally, normalizing the lighting 418 may include modifying the lighting 418 to provide uniform or standard lighting (e.g., lightbox lighting) for the 3D model 412. Adding the lighting template to this uniform or standard lighting may include removing some of the lighting to create shadows, for example, in the modified 3D model.
3 [0144] In some implementations, step 906 includes applying an environment map to the 3D model 412 to produce the modified 3D model. For example, if at least a portion of the lighting template is in the form of an environment map, then the environment map may be added to the 3D model 412. The environment map may be added to the 3D model 412 at a position and orientation that is based on the position and/or orientation of one or both of the objects 450, 850 within their environments, as depicted in the digital media 700, 800. The environment map may be implemented to perform image-based lighting for the modified 3D model. For example, when rendering the modified 3D model, the lighting captured by the environment map may be projected onto the virtual surfaces of the mesh 414. This may result in the generation of a new light map for the modified 3D model. By way of example, natural light from a window in an environment map may be mapped to a virtual surface of the mesh 414 facing that window to realistically brighten that virtual surface. Any reflective surfaces on the mesh 414, as defined by the texture map 416, may also depict reflections from the environment map.
4 [0146] It should be noted that the order of steps 902, 904, 906 in FIG. 9 is shown by way of example. Other orders of steps 902, 904, 906 are also contemplated. In some implementations, steps 902, 906 are performed in conjunction. The 3D model 412 obtained in step 904 may be modified to include new lighting in step 906, and then the modified 3D model could be compared to the digital media 700 to determine if the new lighting matches the lighting depicted in the digital media 700. For example, a render of the modified 3D model may be generated based on the perspective of the object 450 as depicted in the digital media 700, and the render may be compared to the digital media 700 through image analysis. If the render substantially matches the digital media 700, then the new lighting used in the modified 3D model may be considered to be similar to the lighting depicted in the digital media 700. This may indicate that the modified 3D model includes a lighting template representing the lighting in the digital media 700. In this way, step 902 may include comparing the modified 3D model to the digital media 700 and determining, based on the comparison, that the modified 3D model matches the digital media 700. The match may not be an exact match, but might instead be a match within a defined threshold, for example. Responsive to determining that the modified 3D model matches the digital media, step 902 may then include determining that the modified 3D model includes the lighting template.
[0147] Alternatively, if the render of the modified 3D model significantly differs from the digital media 700, then the 3D model 412 may be modified again using different lighting. The 3D model may be modified multiple times, using different lighting each time, to generate multiple different modified 3D models. Each modified 3D model may be compared to the digital media 700 until one modified 3D model is found that matches the digital media 700. In this way, step 902 may include multiple iterations of generating a respective modified 3D model based on the 3D model 412 and a respective lighting template and comparing the respective modified 3D model to the digital media 700. This may be considered a trial-and-error approach to determining the lighting template in step 902. Optimization and/or regression algorithms may be applied to more quickly arrive at a lighting template that matches the lighting template depicted in the digital media 700.