DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“an object data acquiring section”, “a state information acquiring section”, and “a correcting section” in claim 1 (and in claims 2-9 by virtue of their dependency from claim 1).
“an object data acquiring section”, “a state information acquiring section”, and “a correcting section” in claim 11.
Applicant’s specification filed 7/18/2024 discloses the corresponding structure of the “correcting section” as “control unit 11 operating according to a program stored in the storage unit 12” (paragraph 12 of applicant’s specification filed 7/18/2024), and the corresponding algorithm (disclosed in paragraphs 18-23 of applicant’s specification filed 7/18/2024), amounting to correcting the pixel value in a layer map based on acquired state information using a correction value calculation, or equivalents thereof.
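For clarity of the record, a minimal illustrative sketch of the kind of correction identified above (a pixel value in a layer map corrected using a correction value derived from acquired state information) is provided below. The function names, the linear form of the correction, and the gain parameter are the examiner's illustrative assumptions and are not part of applicant's disclosure:

    # Illustrative sketch only; the linear correction and the names are assumptions,
    # not applicant's disclosure.
    def correct_layer_map(layer_map, state_value, gain=0.1):
        """Correct every pixel of a layer map with a value computed from state information."""
        correction = gain * state_value  # hypothetical correction value calculation
        return [[pixel + correction for pixel in row] for row in layer_map]

    # Example: a 2x2 layer map corrected using acquired state information of 0.5
    corrected = correct_layer_map([[0.2, 0.4], [0.6, 0.8]], state_value=0.5)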
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 11 is/are rejected under 35 U.S.C. 101 because the claim(s) is/are directed to “a program for a computer”, which is software per se, and does not fall within the definitions of any of the statutory categories of invention. That is, software is neither a process nor a product (i.e., a machine, manufacture, or composition of matter) and, therefore, the claims as a whole are non-statutory.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim(s) 1-9 and 11 is/are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. Regarding claims 1-9 and 11, applicant recites the generic terms “an object data acquiring section” and “a state information acquiring section”, which invoke 35 U.S.C. 112(f) and for which the specification fails to provide adequate clarification as to the structures that perform the recited functions corresponding to these generic terms. As such, the claims attempt to cover any and all structures or algorithms that perform the recited functions. Accordingly, the specification does not reasonably convey to one of ordinary skill in the art that applicant had possession of the claimed invention, failing to comply with the written description requirement.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-9 and 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claims 1-9 and 11, the disclosure does not provide adequate structure to perform the claimed functions of “an object data acquiring section” and “a state information acquiring section”. In particular, applicant’s specification merely discloses that the sections include a “control unit 11 operating according to a program stored in the storage unit 12” (paragraph 12 of applicant’s specification filed 7/18/2024), but fails to provide the corresponding algorithm that is performed (see MPEP 2181(II)(B) - for a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b)). Without further guidance and disclosure from the specification, one of ordinary skill in the art would not be reasonably apprised of the scope of the claimed invention, as it is unclear what structure is incorporated by reference under 35 U.S.C. 112(f) for performing the claimed functions. As such, the claims are rendered indefinite.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 2, 5-8, 10 and 11 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Beeler et al. (US 2020/0082572 A1).
Regarding claim 1, Beeler discloses:
An image processing device (Beeler, ¶34: systems, methods, and computer-readable medium for generating a comprehensive model for dynamic skin appearance that couples dynamic reflectance parameters for skin (albedo and specular reflectance) with dynamic geometry; ¶¶110-112 discloses computer architecture for device) comprising:
An object data acquiring section that acquires map data of a surface of a human object placed in a virtual space, the map data being used to determine an appearance of a region corresponding to a skin of the human object; (Beeler, Fig. 1, input image 102 – human object; ¶39: generate a three-dimensional (3D) rendering 104 of the input image 102 using an albedo map 106 and a shading map 108 of the image data; ¶42: Skin Reflectance Model - The disclosed techniques can model skin as a two-layer material composed of a rough dielectric layer, the stratum corneum, which accounts for reflection at the surface of the skin, and a diffuse layer that accounts for body reflection; ¶44: Skin albedo can mainly be the result of underlying concentrations of melanin and hemoglobin in the skin; Fig. 6 and ¶75: FIG. 6 shows exemplary images of a plurality of maps for dynamic appearance of skin, including four parameter maps per frame, namely albedo 602, diffuse ambient occlusion (AO) 604, specular attenuation 608, and high-resolution normals 608, which are maps are time-varying and can be used with existing rendering packages to render a face under different illumination conditions; ¶77: Figs. 8A and 8B discloses dynamic albedo map, where blood flow changes over time based on physiological effects, such as overheating or exercise, e.g. exercise albedo image 804 depicts the albedo map highlighting the color changes on the forehead of a subject; Fig. 14 and ¶87 discloses mapped data used to render realistic faces with dynamic albedo and specular shading modulated by specular intensity)
A state information acquiring section that acquires state information indicating a state in the virtual space; (Beeler, ¶44: albedo changes caused by varying hemoglobin concentration due to blood flow, where the blood concentration in skin can change either due to physiological effects, such as blushing, or physical effects such as muscular activity that actively presses hemoglobin out of one part of the skin and into another; ¶47: for given skin patch, albedo represented by equation (4) calculating albedo ρf at any given time using scalar hf describing blood-flow-induced change in hemoglobin concentration – scalar hf is state information; ¶50: dynamic albedo capture only requires the estimation of a single degree of freedom hf per texel and per frame) and
A correcting section that corrects a value included in the map data, on a basis of the state information, (Beeler, ¶47: for a given skin patch (texel), the subspace models for the disclosed techniques depict the albedo ρf at any point in time (frame) f as a combination of a base albedo ρ0 in Lab space plus a scalar hf describing blood-flow-induced change in hemoglobin concentration – equation (4); ¶96: At 1510, the techniques include generating a plurality of the time-varying parameter maps used for rendering the face; ¶97: One of the time-varying maps can include an albedo map depicting time-varying blood flow of the patch of skin comprising a shading free color of the face, where the albedo variation over time can be modeled as a one-dimensional curve in a color space code, with a one-dimensional curve can be precomputed and leaving a single free parameter of a position along the curve to be estimated; ¶106: the albedo can have both a static component and a dynamic component that appears during facial expressions or changes in body temperature)
wherein a spatial image illustrating an appearance of the virtual space is drawn by using the corrected map data (Beeler, ¶7: The albedo map can depict a time-varying blood flow of the patch of skin including a shading free color of the face, where one of the plurality of time-varying parameter maps can include a specular intensity map, where the specular intensity map can model light reflected off a surface of the patch of skin; ¶75: FIG. 6 shows exemplary images of a plurality of maps for dynamic appearance of skin; the output of the proposed technique can be a set of four parameter maps per frame, namely albedo 602, diffuse ambient occlusion (AO) 604, specular attenuation 608, and high-resolution normals 608, where the maps are time-varying and can be used with existing rendering packages to render a face under different illumination conditions; Fig. 14 and ¶87 discloses mapped data used to render realistic faces with dynamic albedo and specular shading modulated by specular intensity – see rendered column)
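For clarity of the record, the per-texel dynamic albedo model cited above (Beeler, ¶¶47, 50, 56, and equation (4)) may be summarized, in the examiner's paraphrase, as a base albedo plus a per-frame blood-flow term along a pre-acquired hemoglobin direction:

    ρf = ρ0 + hf · v

where ρ0 is the base albedo in Lab space, hf is the per-frame scalar describing the blood-flow-induced change in hemoglobin concentration, and v is the pre-acquired hemoglobin direction (¶56); correcting the map data thus amounts to estimating the single free parameter hf per texel and per frame (¶50).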
Regarding claim 10, the device of claim 1 performs the method of claim 10, and as such claim 10 is rejected based on the same rationale as claim 1 set forth above.
Regarding claim 11, Beeler discloses:
A program for a computer (Beeler, ¶¶110-112 discloses software components stored in computer-readable medium or memory, loaded and executable on a processor for performing the operations)
Further regarding claim 11, the program comprises software that performs the same functions as the device of claim 1, and as such claim 11 is further rejected based on the same rationale as claim 1 set forth above.
Regarding claim 2, Beeler further discloses:
Wherein the object data acquiring section acquires a plurality of types of map data, and the correcting section calculates a correction value different for each of the plurality of types of map data, and uses the calculated correction value to correct a value included in corresponding map data (Beeler, ¶75: FIG. 6 shows exemplary images of a plurality of maps for dynamic appearance of skin; the output of the proposed technique can be a set of four parameter maps per frame, namely albedo 602, diffuse ambient occlusion (AO) 604, specular attenuation 608, and high-resolution normals 608, where the maps are time-varying and can be used with existing rendering packages to render a face under different illumination conditions; Fig. 14 and ¶87 discloses mapped data used to render realistic faces with dynamic albedo and specular shading modulated by specular intensity – see rendered column; also see discussion of albedo map, ¶¶76-78; diffuse ambient occlusion map, ¶79; specular intensity map, ¶¶80-81; dynamic normal map, ¶¶82-85)
Regarding claim 5, Beeler further discloses:
Wherein the state information includes information indicating a state of the human object itself (Beeler, ¶44: albedo changes caused by varying hemoglobin concentration due to blood flow, where the blood concentration in skin can change either due to physiological effects, such as blushing, or physical effects such as muscular activity that actively presses hemoglobin out of one part of the skin and into another; ¶47: for given skin patch, albedo represented by equation (4) calculating albedo ρf at any given time using scalar hf describing blood-flow-induced change in hemoglobin concentration – scalar hf is state information; ¶50: dynamic albedo capture only requires the estimation of a single degree of freedom hf per texel and per frame)
Regarding claim 6, Beeler further discloses:
Wherein the state information includes elapsed time information indicating how long the state lasts (Beeler, ¶47: for a given skin patch (texel), the subspace models for the disclosed techniques depict the albedo ρf at any point in time (frame) f as a combination of a base albedo ρ0 in Lab space plus a scalar hf describing blood-flow-induced change in hemoglobin concentration; ¶78:
FIG. 8B shows that facial expressions can also cause blood flow (shown as forehead crop over time). This blood flow can be apparent for several frames after the expression returns to normal due to hysteresis over time. The disclosed techniques can recover both of these effects in the captured performance. Blood flow is not instantaneous, which causes hysteresis effects over time. This effect can be shown in FIG. 8B, where it takes several frames until blood has fully returned after releasing an expression. By constraining albedo to change along a one-dimensional subspace, which the techniques can precompute per actor as described in FIG. 8B, the proposed method recovers high-quality per-frame albedo maps. FIG. 8B also depicts a first albedo map 806, a second albedo map, and a third albedo map 808 which can show the albedo change due to blood flow from the expression.)
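As an illustrative aid only, the hysteresis behavior cited above (blood flow remaining apparent for several frames after an expression is released, Beeler ¶78) could be modeled along the following lines; the exponential decay form and the half-life value are the examiner's assumptions, not Beeler's disclosed computation:

    # Illustrative sketch; the decay form and the half-life value are assumptions.
    def decay_blood_flow(h_peak, frames_elapsed, half_life_frames=12.0):
        """Gradual return of the blood-flow scalar toward its resting value of 0."""
        return h_peak * 0.5 ** (frames_elapsed / half_life_frames)

    # Example: the scalar remains noticeably elevated 6 frames after release (about 0.71)
    h_after_6 = decay_blood_flow(h_peak=1.0, frames_elapsed=6)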
Regarding claim 7, Beeler further discloses:
Wherein the state information acquiring section acquires information specifying an attribute and/or a characteristic of the human object together with the state information, and the correcting section performs correction of contents that differ depending on the specified attribute and/or characteristic (Beeler, ¶75: FIG. 6 shows exemplary images of a plurality of maps for dynamic appearance of skin; the output of the proposed technique can be a set of four parameter maps per frame, namely albedo 602, diffuse ambient occlusion (AO) 604, specular attenuation 608, and high-resolution normals 608, where the maps are time-varying and can be used with existing rendering packages to render a face under different illumination conditions; Fig. 14 and ¶87 discloses mapped data used to render realistic faces with dynamic albedo and specular shading modulated by specular intensity – see rendered column; also see discussion of albedo map, ¶¶76-78; diffuse ambient occlusion map, ¶79; specular intensity map, ¶¶80-81; dynamic normal map;
Also ¶39:
The system can generate a three-dimensional (3D) rendering 104 of the input image 102 using an albedo map 106 and a shading map 108 of the image data. The techniques allow for modifying the expression of the face for the input image 102. For example, the techniques can produce a second albedo map 110 and a second shading map 112 for a second expression. Further, the technique can generate another relighting rendering 114 that can be created under lighting conditions that differ from the lighting conditions for the input image 102.)
In other words, in addition to the different parameters included as “attributes or characteristics”, expression is itself a different “attribute or characteristic” with which the albedo map is used – see Fig. 14 and ¶87, which disclose mapped data used to render realistic faces with dynamic albedo and specular shading modulated by specular intensity – see rendered column; also see discussion of albedo map)
Regarding claim 8, Beeler further discloses:
Wherein the correcting section calculates a correction value according to the state information by using a calculation model prepared in advance, and corrects the value included in the map data by using the calculated correction value (Beeler, ¶47: for a given skin patch (texel), the subspace models for the disclosed techniques depict the albedo ρf at any point in time (frame) f as a combination of a base albedo ρ0 in Lab space plus a scalar hf describing blood-flow-induced change in hemoglobin concentration – equation (4); ¶96: At 1510, the techniques include generating a plurality of the time-varying parameter maps used for rendering the face; ¶97: One of the time-varying maps can include an albedo map depicting time-varying blood flow of the patch of skin comprising a shading free color of the face, where the albedo variation over time can be modeled as a one-dimensional curve in a color space code, with a one-dimensional curve can be precomputed and leaving a single free parameter of a position along the curve to be estimated; ¶106: the albedo can have both a static component and a dynamic component that appears during facial expressions or changes in body temperature)
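As an illustrative aid only, a “calculation model prepared in advance” of the kind cited (a precomputed one-dimensional color curve evaluated at a single free parameter, cf. Beeler ¶97) could be sketched as follows; the linear interpolation and the sample values are the examiner's assumptions, not a representation of Beeler's actual implementation:

    # Illustrative sketch; the interpolation scheme and sample values are assumptions.
    def evaluate_precomputed_curve(curve_samples, t):
        """Interpolate a precomputed list of (L, a, b) color samples at position t in [0, 1]."""
        n = len(curve_samples) - 1
        x = max(0.0, min(1.0, t)) * n
        i = int(x)
        if i >= n:
            return curve_samples[n]
        frac = x - i
        lo, hi = curve_samples[i], curve_samples[i + 1]
        return tuple(a + frac * (b - a) for a, b in zip(lo, hi))

    # Example: a two-sample curve from a resting to a flushed skin color in Lab space
    curve = [(70.0, 10.0, 15.0), (65.0, 25.0, 18.0)]
    corrected_color = evaluate_precomputed_curve(curve, t=0.3)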
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 3 and 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over:
Beeler et al. (US 2020/0082572 A1) in view of
Frerichs et al. (Applicant IDS reference filed 10/31/2025 - Frerichs et al., “Computer graphics simulation of natural mummification by desiccation,” Computer Animation and Virtual Worlds, June 8, 2020, 31(6):e1927, 21 pages)
Regarding claim 3, the limitations included from claim 1 are rejected based on the same rationale as the rejection of claim 1 provided above. Further regarding claim 3, Beeler teaches that the state information affecting the change of the image map data can be a result of temperature (Beeler, ¶106: the albedo can have both a static component and a dynamic component that appears during changes in body temperature). It would have been obvious to one of ordinary skill in the art to correlate body temperature with environmental temperature, as it is well known that environmental temperature can affect body temperature.
Frerichs, however, discloses:
Wherein the state information includes environmental information indicating an environment of the virtual space (Frerichs, pp. 2-3, Section 3, ¶1: “A well-known example is mummification by desiccation in hot, dry, and arid environments”, where Frerichs’ disclosed technology is directed to natural mummification by desiccation; p. 11, Section 8.3, ¶1: “This work represents the skin as two layers; the epidermis (top) and dermis layer (bottom), similar to the Donner et al. method, and each layer is rendered separately. In order to get the final skin render, Frerichs et al. apply a screen space diffuse approximation approach for subsurface scattering on each layer individually. The epidermis and dermis maps are then convolved into the final skin map”, equation (36), with Ae representing the absorption by melanin; Section 8.3, ¶2: “The melanin distribution map specifies the amount of the chromophore melanin in the body's epidermis layer. This is used to compute the light absorption by melanin when computing the hemoglobin contribution to the skin coloration inspired by Donner et al. Coloration changes within the dermis were simulated using a look up texture based on hemoglobin h(x, y) and oxygen o(x, y) content information for each pixel (x, y)”; Section 8.3, ¶3: “In order to use this method to render skin coloration changes caused by mummification, the dermis color look-up texture, absorption by melanin and convolution weights need to be adjusted in order to account for the affects of moisture loss in the skin”; Furthermore, Section 8.3 discloses altering texture based on hemoglobin degradation, using equation (38), where “The hemoglobin degradation level is a combination of the oxygen o(x, y) and humidity m(x, y) levels and is computed” – Fig. 3 and p. 12 is a resulting blood color look-up texture based on the results of the accounting of hemoglobin degradation using humidity levels; Also see Fig. 9 showing shading based on hydration stages)
Both Beeler and Frerichs are directed to rendering 3D facial images using dynamic skin conditions for rendering an image. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for rendering dynamic changes to a virtual object in a virtual environment using skin condition parameters to render realistic images of a dynamic human model as provided by Beeler, by incorporating the use of the humidity parameter for determining color mapping effects to skin on a virtual model as provided by Frerichs, using known electronic interfacing and programming techniques. The modification results in an improved simulation of human skin in a virtual space by accounting for additional realistic effects and conditions, and rendering more dynamic and interactive effects.
Regarding claim 4, Beeler teaches that the state information affecting the change of the image map data can be a result of temperature (Beeler, ¶106: the albedo can have both a static component and a dynamic component that appears during changes in body temperature). It would have been obvious to one of ordinary skill in the art to correlate body temperature with environmental temperature, as it is well known that environmental temperature can affect body temperature.
Beeler modified by Frerichs, however, further discloses:
Wherein the environmental information includes either a temperature or humidity in the virtual space (Frerichs, pp. 2-3, Section 3, ¶1: “A well-known example is mummification by desiccation in hot, dry, and arid environments”, where Frerichs’ disclosed technology is directed to natural mummification by desiccation; Section 8.3, ¶3: “In order to use this method to render skin coloration changes caused by mummification, the dermis color look-up texture, absorption by melanin and convolution weights need to be adjusted in order to account for the affects of moisture loss in the skin”; Furthermore, Section 8.3 discloses altering texture based on hemoglobin degradation, using equation (38), where “The hemoglobin degradation level is a combination of the oxygen o(x, y) and humidity m(x, y) levels and is computed” – Fig. 3 and p. 12 is a resulting blood color look-up texture based on the results of the accounting of hemoglobin degradation using humidity levels; Also see Fig. 9 showing shading based on hydration stages)
Both Beeler and Frerichs are directed to rendering 3D facial images using dynamic skin conditions for rendering an image. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for rendering dynamic changes to a virtual object in a virtual environment using skin condition parameters to render realistic images of a dynamic human model as provided by Beeler, by incorporating the use of the humidity parameter for determining color mapping effects to skin on a virtual model as provided by Frerichs, using known electronic interfacing and programming techniques. The modification results in an improved simulation of human skin in a virtual space by accounting for additional realistic effects and conditions, and rendering more dynamic and interactive effects.
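As an illustrative aid only, the cited use of environmental humidity in Frerichs (Section 8.3, equation (38)) could be sketched as follows; the particular combination function and the look-up values are the examiner's assumptions and are not Frerichs' exact equations:

    # Illustrative sketch; the combination function and look-up values are assumptions.
    def degradation_level(oxygen, humidity):
        """Hypothetical combination of per-pixel oxygen o(x, y) and humidity m(x, y) in [0, 1]."""
        return max(0.0, min(1.0, 1.0 - oxygen * humidity))

    def lookup_blood_color(lookup_texture, level):
        """Index a one-dimensional blood color look-up texture (list of RGB tuples) by level."""
        index = int(round(level * (len(lookup_texture) - 1)))
        return lookup_texture[index]

    # Example: a dry environment (low humidity) drives the color toward the degraded end
    texture = [(0.55, 0.10, 0.10), (0.45, 0.15, 0.12), (0.35, 0.20, 0.15)]
    color = lookup_blood_color(texture, degradation_level(oxygen=0.8, humidity=0.2))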
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over:
Beeler et al. (US 2020/0082572 A1) in view of
Soares et al. (US 11,830,182 B1).
Regarding claim 9, the limitations included from claim 8 are rejected based on the same rationale as the rejection of claim 8 provided above. Further regarding claim 9, Beeler further discloses:
Wherein the calculation model is generated by (Beeler, ¶56: The techniques can first conduct a calibration process that may be required only once per actor, where given the pre-acquired hemoglobin direction v, the techniques capture the origin of the albedo subspace for every texel, and base ρ0 captures the full skin pigmentation and its spatial detail, achieved by requiring the actor to hold a neutral expression while also slowly rotating their head up-down, left-right, to form a cross pattern; ¶64 discloses camera rig for capturing images of actor skin; ¶72: In a non-limiting embodiment using a digital single-lens reflex (SLR) camera with a mounted ring flash, the technique can photograph a small patch of skin in burst mode, immediately after the actor presses firmly on the skin with their fingers, where the sequence of photos provides a time-varying measure of hemoglobin concentrations, to which the technique can fit a line in Lab space)
Beeler does not explicitly disclose the use of machine learning.
Soares discloses:
Wherein the calculation model is generated by machine learning using map data obtained by capturing an image of a real person as training data (Soares, [1:66-2:14]: training a texture autoencoder based on blood flow image data captured using a photogrammetry system, where many images of a subject or subjects are captured making different expressions such that ground truth data can be obtained between an expression and how blood flow appears in the face, and the texture autoencoder may consider as input a series of expressions which results in a particular 2D blood flow texture map; [4:37-53]: training module 122 may train a model, such as a neural network, based on image data from a single subject or multiple subjects; [4:54-67]: upon collecting training data, the training module extracts facial skin tone and texture of face images in the training data, the training module 122 may remove the albedo map (e.g., through subtraction or division), and the result of the training may be a model that provides the blood flow texture maps)
Both Beeler and Soares are directed to rendering 3D facial images using dynamic blood flow effects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for rendering dynamic changes to a virtual object in a virtual environment using dynamic blood flow parameters to render more realistic images of a dynamic human model as provided by Beeler, by incorporating machine learning for obtaining a dynamic blood flow model for rendering dynamic effects to a human model as provided by Soares, using known electronic interfacing and programming techniques. The modification merely substitutes one known algorithmic technique for obtaining a dynamic blood flow texture model for another, yielding predictable results of using known machine learning to obtain the data rather than a less robust programmed algorithm. Moreover, the modification results in an improved rendering of dynamic realistic effects for a 3D human model by using a more robust system for automating the calculation of the blood flow model using machine learning, providing more realistic results and reducing human intervention for training a system.
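As an illustrative aid only, a texture autoencoder of the general kind cited in Soares could be trained along the following lines; the network sizes, the random placeholder training data, and the use of PyTorch are the examiner's assumptions, not Soares' implementation:

    # Illustrative sketch; sizes, placeholder data, and framework choice are assumptions.
    import torch
    from torch import nn

    texture_size = 16 * 16  # hypothetical flattened blood-flow texture map size
    model = nn.Sequential(
        nn.Linear(texture_size, 32), nn.ReLU(),    # encoder
        nn.Linear(32, texture_size), nn.Sigmoid()  # decoder
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Placeholder stand-in for texture maps captured from images of a real person
    training_maps = torch.rand(64, texture_size)

    for _ in range(100):  # brief illustrative training loop
        optimizer.zero_grad()
        loss = loss_fn(model(training_maps), training_maps)
        loss.backward()
        optimizer.step()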
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Poirier, Guillaume, “Human Skin Modelling and Rendering”, University of Waterloo, URI http://hdl.handle.net/10012/1082, (2004): Generally, the reference discusses approaches for dealing with the complexity of rendering skin and its variations, using a number of parameters, for use in a number of fields requiring realistic skin model rendering, related to the present application. Of particular interest to the present application is the discussion of skin blushing based on temperature in section 4.2.7 on page 93:
Colour changes can occur in skin due to constriction or dilatation of blood vessels. This can be caused by emotions, physical activity, temperature, etc. We provide the artist with tools to modify the face coloration. Pigment segmentation can provide general coloration modification but lacks user control. We experimented with a very simple technique [172] that gives a bit more control over the areas we want the blushing to occur (Section D.2). The user selects central points on the model and a Phong-like shading is used to create a blushing area around the selected points. The parameters to control the blushing effect are a blush colour and attenuation. The blushing technique is illustrated in Plates XXI and XXII. Instead of using a simple Phong-like shading for blushing, we can also use our skin reflectance model along with a texture map representing the volume fraction of blood at each pixel. It is therefore possible to manipulate the volume fraction of blood in the superficial plexus at a particular location (Section 4.1). The initial texture map containing the blood concentrations is computed following Tsumura et al. [214] (Section 4.2.8). Instead of being acquired from a photograph, the texture map could also be painted by an artist. We are then able to modify non-linearly the hemoglobin quantities in a user-specified area to simulate blushing.
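As an illustrative aid only, the Phong-like blushing control quoted above could be sketched as follows; the cosine falloff and the attenuation exponent are the examiner's assumptions, not Poirier's exact formulation:

    # Illustrative sketch; the falloff shape and parameter values are assumptions.
    import math

    def apply_blush(skin_color, blush_color, distance, attenuation=8.0):
        """Blend a blush color into the skin color around a selected point, with falloff."""
        falloff = math.cos(min(max(distance, 0.0), 1.0) * math.pi / 2) ** attenuation
        return tuple(s + falloff * (b - s) for s, b in zip(skin_color, blush_color))

    # Example: a point near the blush center blends strongly; a distant point barely changes
    near = apply_blush((0.80, 0.60, 0.55), (0.90, 0.40, 0.40), distance=0.2)
    far = apply_blush((0.80, 0.60, 0.55), (0.90, 0.40, 0.40), distance=0.9)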
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM A BEUTEL whose telephone number is (571)272-3132. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DANIEL HAJNIK can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM A BEUTEL/Primary Examiner, Art Unit 2616