Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/30/26 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 7, 10, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2011/0248992 A1 (hereinafter Van Os) in view of U.S. Patent Application Publication 2020/0001465 A1 (hereinafter Shin).
Regarding claim 1, the limitations “A face model building method, applicable to a face model building system, the face model building method comprising: providing the face model building system comprising a server installed with a model building platform, a portable electronic device installed with an editing platform … obtaining a plurality of facial feature animation objects and a plurality of object parameters respectively corresponding to the facial feature animation objects … integrating the facial feature animation objects according to the object parameters to generate a three-dimensional face model” are taught by Van Os (Van Os, e.g. abstract, paragraphs 25-51, describes an avatar editing system for avatars comprising a plurality of face elements, e.g. paragraphs 27, 36, which are selected by a user through an interface, where for each avatar element category, the user can select different parameters, i.e. a graphic parameter as in figure 2A, a color parameter as in figure 2B, as well as position and/or size parameters as in figures 3A, 3B, 5A-C. Further, Van Os teaches that the avatar face elements are animated to simulate human facial expressions corresponding to different emotions, e.g. paragraphs 48-51. That is, Van Os’ elements correspond to the claimed facial feature animation objects having a plurality of corresponding object parameters, where said animation objects are integrated according to the object parameters to generate a 3D face model, i.e. when rendering the animations using the edited avatar model. Finally, Van Os, e.g. paragraphs 25, 88, 91-94, teaches that the user may use a mobile computing device which provides the avatar editing environment as a web page provided from a server using a service, where the server(s) providing the service(s) can also share the custom avatar models with other user devices for display in different applications, i.e. as claimed the server is installed with a model building platform, i.e. the service(s) of paragraphs 91-94, and the user device is installed with the editing platform, i.e. the avatar editing environment web page provided by the server/service.)
The limitations “wherein the facial feature animation objects comprise a plurality of two-dimensional facial feature animation objects and at least one three-dimensional facial feature animation object;” are taught by Van Os (Van Os, e.g. paragraphs 36, 50, teaches that some of the elements may be 2D textures rendered onto the 3D model of the avatar head, e.g. as with the nose, mouth, and eyebrow elements in the exemplary avatar of figures 1-5, and some elements are 3D objects, as with the 3D eyes of figure 6A, where all of said elements correspond to the claimed facial feature animation objects, e.g. Van Os, paragraph 48, i.e. as claimed there are a plurality of 2D facial feature animation objects and at least one 3D facial feature animation object.)
The limitations (addressed out of order) “wherein the face model building system is provided with a plurality of pieces of emotion data, and the step of obtaining a plurality of facial feature animation objects and a plurality of object parameters respectively corresponding to the facial feature animation objects comprises: selecting one of the pieces of emotion data according to an emotion instruction; and adjusting the object parameters according to the selected piece of emotion data … wherein the emotion instruction is obtained from the portable electronic device” are taught by Van Os (Van Os, e.g. paragraphs 48-51, teaches that the avatar faces are animated to simulate facial expressions associated with a plurality of different emotions, e.g. happy, sad, anger, surprise, boredom, where the simulation of each facial expression is performed by animating the elements such as eyes, mouth, ears, eyebrows, and may be triggered in response to predetermined trigger events. That is, the display parameters of each element are modified according to the particular expression/emotion, and the particular expression/emotion may be triggered based on a predetermined event, corresponding to the claimed adjusting object parameters of the facial feature animation objects according to a piece of emotion data selected according to an emotion instruction. Finally, Van Os, e.g. paragraph 49, teaches that the user can set the expressions/emotions to trigger in response to various trigger events, i.e. the user provides the emotion instruction which selects the emotion data for adjusting the object parameters, where, as noted above (Van Os, e.g. paragraphs 25, 88, 91-94), the avatar editing environment is a web page operating on the mobile device, i.e. said expression/emotion instruction is obtained from the user through the portable electronic device as claimed.)
The limitations “wherein the selected piece of emotion data corresponding to the emotion instruction includes adjustment amounts of the object parameters; and the selected piece of emotion data corresponding to the emotion instruction also includes script data to present a dynamic change of the three-dimensional face model” are taught by Van Os (One of ordinary skill in the art would recognize that Van Os’ animation of the face elements according to the triggered emotion/facial expression relies on specifying different object parameter values, i.e. the claimed adjustment amounts of the object parameters. For example, in paragraphs 49, 50, an animated parameter, e.g. the chest contraction and expansion, or the direction vector of the eyes, is adjusted to different values/amounts to animate the corresponding element, and analogously animating the facial elements such as the eyes, mouth, ears, and eyebrows to simulate the facial expressions such as happy, sad, angry, surprise, and boredom associated with the triggered emotion would inherently require different element animation parameters, e.g. to indicate surprise, eyebrow position parameter values would be set to a higher vertical value than the eyebrow position parameter values for indicating sadness or boredom. Furthermore, the element animation parameters determined for a user’s customized avatar are determined according to pre-determined animation data, i.e. in the examples of paragraphs 48, 49, pre-determined animation data is used to animate the avatar’s face and body to appear to be waiting, sleeping, impatient, etc., and in the example of paragraph 50 the eyes are animated according to a cursor tracking program. One of ordinary skill in the art would thus have understood that the different object element parameter values are determined using pre-determined animation data which can be used to animate any user’s customized avatar, i.e. the data for the triggered emotion includes the claimed script data used to present a dynamic change of the face model.)
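For illustration only, the following minimal sketch (not taken from Van Os, Shin, or the claims; the names, values, and keyframe format are hypothetical assumptions) shows one way a selected piece of emotion data could carry both adjustment amounts for the object parameters and script data presenting a dynamic change over time, consistent with the claim interpretation applied above:

```python
# Hypothetical sketch only: emotion data carrying (a) adjustment amounts for
# object parameters and (b) script data (timed keyframes) presenting a dynamic
# change. Names and values are illustrative and appear in neither reference.

EMOTION_DATA = {
    "surprise": {
        "adjustments": {"eyebrow.position_y": +0.15, "mouth.scale_y": +0.30},
        "script": [(0.0, 0.0), (0.2, 1.0), (1.0, 0.6)],  # (time in s, weight)
    },
    "sadness": {
        "adjustments": {"eyebrow.position_y": -0.10, "mouth.scale_y": -0.20},
        "script": [(0.0, 0.0), (0.5, 1.0)],
    },
}

def interpolate(keyframes, t):
    """Piecewise-linear interpolation over (time, weight) keyframes."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, w0), (t1, w1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return w0 + (w1 - w0) * (t - t0) / (t1 - t0)
    return keyframes[-1][1]

def apply_emotion(object_parameters, emotion_instruction, t):
    """Select a piece of emotion data per the instruction and adjust the object
    parameters by the adjustment amounts, weighted by the script at time t."""
    data = EMOTION_DATA[emotion_instruction]   # selection step
    weight = interpolate(data["script"], t)    # dynamic change over time
    adjusted = dict(object_parameters)
    for name, amount in data["adjustments"].items():
        adjusted[name] = adjusted.get(name, 0.0) + weight * amount
    return adjusted

# e.g. raised eyebrows at the peak of a "surprise" animation:
print(apply_emotion({"eyebrow.position_y": 0.0, "mouth.scale_y": 1.0}, "surprise", 0.2))
```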
The limitation “wherein an extra animation object is added on the face base according to the emotion instruction” is not explicitly taught by Van Os (While, as noted above, Van Os, e.g. paragraphs 48-51, teaches that the avatar faces are animated to simulate facial expressions associated with a plurality of different emotions, e.g. happy, sad, anger, surprise, Van Os does not explicitly teach that the emotion expression animations comprise adding extra animation objects corresponding to the emotion.) However, this limitation is taught by Shin (Shin, e.g. abstract, paragraphs 37-219, discloses a system comprising an AI controlled robot, wherein the robot presents a digital face on a display element, e.g. paragraphs 102, 134. Shin, e.g. paragraphs 132-219, describes a system allowing a user to customize the face model by changing the parts thereof, e.g. paragraphs 142, 144, 173, 177, 191-218, i.e. analogous to Van Os’ avatar face models, Shin’s face model comprises changeable parts. Further, Shin, e.g. paragraphs 147-151, 181-186, 212-217, teaches that the face model can express indicated emotions by correcting the face parts, i.e. analogous to the animation of Van Os and the claimed adjusting of object parameters, and may include additional elements based on the indicated emotion, e.g., figure 12b adds a wrinkle to indicate surprise, figure 12d adds tears to indicate sadness, figure 17a adds a blushing element to indicate embarrassment.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify/implement Van Os’ avatar editing system to include Shin’s added emotion elements in order to more clearly convey the intended emotion. In the modified system, Van Os’ emotion animations would include adding additional elements to the face for at least some of the emotions, i.e. as in Shin, Figures 12, 17, some emotions may be expressed with simple modification of the existing face elements, and some include additional elements in addition to modifying the existing face elements.
The limitations (addressed out of order) “providing the face model building system comprising a server installed with a model building platform, a portable electronic device installed with an editing platform, and a robot as a display platform … transmitting the three-dimensional face model from the server to the robot” are partially taught by Van Os (As discussed above, Van Os, e.g. paragraphs 25, 88, 91-94, teaches that the user may use a mobile computing device which provides the avatar editing environment as a web page provided from a server using a service, where the server(s) providing the service(s) can also share the custom avatar models with other user devices for display in different applications, i.e. as claimed the server is installed with a model building platform, i.e. the service(s) of paragraphs 91-94, and the user device is installed with the editing platform, i.e. the avatar editing environment web page provided by the server/service. While Van Os, e.g. paragraphs 51, 91-94, teaches that the custom avatar models/animations can be shared by the server(s)/service(s) for use in a variety of applications, Van Os does not explicitly suggest a robot as a display platform, or, by extension, transmitting the custom avatar models/animations to a robot acting as a display platform. Further, while Shin, as discussed below, does teach using a robot as a customizable animated face display platform, the above modification of Van Os’ avatar editing system to include Shin’s added emotion elements does not include or rely on Shin’s robot display platform per se, such that Van Os’ system as modified above does not include the claimed robot display platform.) However, these limitations are taught by Shin (As noted above, Shin, e.g. abstract, paragraphs 37-219, discloses a system comprising an AI-controlled robot, wherein the robot presents a digital face on a display element, e.g. paragraphs 102, 134. Shin, e.g. paragraphs 132-219, describes a system allowing a user to customize the face model by changing the parts thereof, e.g. paragraphs 142, 144, 173, 177, 191-218, i.e. analogous to Van Os’ avatar face models, Shin’s face model comprises changeable parts. That is, Shin’s robot comprises a display platform for displaying a customized animated face, wherein the customized animated face model can express indicated emotions by correcting the face parts, e.g. paragraphs 147-151, 181-186, 212-217. Finally, Shin, e.g. paragraph 219, teaches that the method of generating a face design for the robot may be performed using a user terminal, followed by transmitting the generated face design to the robot, i.e. Shin teaches that the customized animated face displayed on the robot may be received from a remote computer used by the user to perform the customization.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Van Os’ avatar editing system, including Shin’s added emotion elements, to include Shin’s robotic platform displaying a customized animated face as one of the variety of applications supported by Van Os’ server(s)/service(s) in order to support additional applications using the same avatar editing environment system, i.e. Van Os intends for the system to support any compatible application for using a customized avatar model, and one of ordinary skill in the art would recognize Shin’s robotic platform as an analogous application as discussed above. In Van Os’ modified system, Van Os’ server(s)/service(s) supporting the avatar editing environment system would receive custom avatar models/animations from a user using the avatar editing environment web page on their mobile device as in Van Os’ unmodified system, and the server(s)/service(s) would provide the custom avatar models/animations to Shin’s robot for display thereon, analogous to Shin, paragraph 219, i.e. as claimed, the portable electronic device provides the emotion instruction and three-dimensional face model customization, the server installed with the model building platform stores the emotion instruction and three-dimensional face model customization data received from the portable electronic device, and the server transmits the emotion instruction and three-dimensional face model customization data to the robot for display thereon.
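As a purely hypothetical sketch of the data flow described for the modified system (the class and method names are illustrative assumptions, not structures disclosed by Van Os or Shin), the editing platform on the portable device submits the customization and emotion instruction to the server's model building platform, which stores them and transmits the resulting three-dimensional face model data to the robot acting as the display platform:

```python
# Hypothetical data-flow sketch of the claim mapping above: editing platform
# (portable device) -> model building platform (server) -> robot display
# platform. All names are illustrative; nothing here is from the references.

class ModelBuildingPlatform:
    """Runs on the server; stores customization data and builds/transmits models."""

    def __init__(self):
        self.customizations = {}

    def receive_customization(self, user, objects, parameters, emotion_instruction):
        # Data received from the editing platform on the portable device.
        self.customizations[user] = (objects, parameters, emotion_instruction)

    def build_and_transmit(self, user, robot):
        # Integrate the animation objects per their parameters into a face model
        # and transmit it to the robot acting as the display platform.
        objects, parameters, emotion = self.customizations[user]
        face_model = {"objects": objects, "parameters": parameters, "emotion": emotion}
        robot.display(face_model)

class RobotDisplayPlatform:
    def display(self, face_model):
        print("robot rendering face model:", face_model)

server = ModelBuildingPlatform()
server.receive_customization("user1", ["eyes", "mouth"], {"eyes.size": 1.0}, "happy")
server.build_and_transmit("user1", RobotDisplayPlatform())
```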
Regarding claim 2, the limitation “wherein the three-dimensional facial feature animation object is an eye part” is taught by Van Os (As noted in the claim 1 rejection above, Van Os, e.g. paragraphs 36, 50, teaches that the model includes 3D elements, such as the 3D eyes of figure 6A.)
Regarding claim 3, the limitation “wherein the eye part comprises a white part and an eyeball part, wherein a size of the white part is fixed” is taught by Van Os (As noted in the claim 1 rejection above, Van Os, e.g. paragraphs 36, 50, teaches that the model includes 3D elements, such as the 3D eyes of figure 6A. As shown in figure 6A, the size of the 3D eyeballs does not change during animation; the eyeballs are simply rotated in place, such that the size of the white part is fixed. Further, the eyeballs comprise the claimed white part and eyeball part, i.e. the pupil, corresponding to parts b1 and b2 in Applicant’s figures 2A and 2B, paragraph 25.)
Regarding claim 7, the limitation “wherein the object parameters comprise at least one of a position parameter, a size parameter, and a color parameter” is taught by Van Os (Van Os, e.g. paragraphs 41, 43, 46, 47, figures 3A, 3B, 5A-C, teaches that the parameters that can be changed by the user include selecting different colors and adjusting the size and position of the elements on the face, i.e. color, position, and size parameters.)
Regarding claim 10, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above.
Regarding claim 11, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above, i.e. the facial feature animation objects are integrated using initial parameters and the adjusted parameters.
Claims 4-6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2011/0248992 A1 (hereinafter Van Os) in view of U.S. Patent Application Publication 2020/0001465 A1 (hereinafter Shin) as applied to claims 1 and 7 above, and further in view of U.S. Patent Application Publication 2021/0312696 A1 (hereinafter Liu).
Regarding claim 4, the limitation “wherein the two-dimensional facial feature animation objects comprise … a nose part, and two eyebrow parts, … a mouth part” is taught by Van Os (Van Os, e.g. paragraphs 27, 29, 32, 34, 36, teaches that the user can start with a blank face having a modifiable shape and skin color, as well as selecting parameters for a nose, mouth, and eyebrows, where said elements may be 2D textures.)
The limitation “wherein the two-dimensional facial feature animation objects comprise a face base … wherein the face base comprises a mouth part” is not explicitly taught by Van Os (As noted above, Van Os teaches that the mouth may be a 2D texture element applied to the 3D face model, i.e. if the avatar face comprised a 2D base texture analogous to a skin color, then the combined elements of the avatar would include the claimed face base comprising the mouth part. However, while Van Os, e.g. paragraph 27, teaches that the blank face element or predefined face element, i.e. the face base, may have a modifiable shape and skin color, Van Os does not teach that the blank/default face skin is/has a 2D texture, per se.) However, this limitation is taught by Liu (Liu, e.g. abstract, paragraphs 37-144, describes a system for personalizing the face of a 3D character model, which may be animated, e.g. paragraph 38, where a user can select different 2D textures to be applied to the 3D face model, e.g. paragraphs 65-74. Liu, e.g. paragraphs 39-41, 51-60, teaches that the 3D face model vertices include texture coordinates referencing the 2D UV coordinate systems of the UV1, UV2, UV3, and UV4 textures, where the UV1 texture is a face base mesh, e.g. paragraphs 52, 58, figure 2, and UV2-UV4 are eye, mouth, and eyebrow 2D textures, respectively. That is, UV1 corresponds to the claimed face base, and when mapped onto the avatar along with the UV3 and UV4 textures, analogous to Van Os’ blank avatar face, mouth, and eyebrow elements, the face base mesh would comprise the 2D mouth and eyebrow elements, corresponding to the claimed face base element comprising a mouth part element. It is further noted that Liu, e.g. paragraph 58, teaches that the UV1/basic face texture may include part of the modifiable texture component, i.e. as shown in figure 2, the UV1 mesh includes part of the texture for the mouth which would be combined with part of the UV3 texture to render the personalized representation, i.e. Liu teaches that the face base may directly comprise part of the 2D texture content of the mouth part, prior to performing the texture mapping to render the character.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify/implement Van Os’ avatar editing system, including Shin’s added emotion elements, including Shin’s robotic platform displaying a customized animated face as a supported application, to include Liu’s basic face texture element as one of the elements that can be selected by the user for an avatar. As noted above, Van Os teaches that the blank face element or predefined face element, i.e. the face base, may have a modifiable shape and skin color, but does not teach that the blank/default face skin is/has a 2D texture, per se. Liu teaches that the same 3D model can have different basic face textures applied, e.g. as in figure 2, a single 3D model can have different UV1 textures used, corresponding to different predefined characters as in Van Os, paragraph 27, where the skin color of the basic face texture can be changed, e.g. Liu, paragraphs 111-115, and where both Van Os, e.g. paragraph 36, and Liu, e.g. paragraphs 51-60, teach the use of additional 2D texture face elements applied to the 3D model/basic face texture. In Van Os’ modified system, the blank/predefined face model would include a base 2D texture representing the skin and other components, analogous to Liu, figure 2, UV1, which would have the other 2D texture elements rendered over the base 2D texture, as taught by Van Os and Liu, where the base 2D texture element corresponds to the claimed face base comprising a mouth part.
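To make the layered-texture reasoning concrete, the following sketch (an assumed, generic alpha-compositing model; the layer roles only loosely echo Liu's UV1/UV3 textures, and the array sizes and offsets are arbitrary) composites a 2D mouth texture over a base 2D face texture so that the combined face base comprises the mouth part before it is mapped onto the 3D head:

```python
import numpy as np

# Hypothetical sketch: alpha-composite 2D feature textures (e.g. a mouth) over
# a base 2D face texture prior to mapping onto the 3D head. The layer roles
# loosely echo Liu's UV1 (face base) and UV3 (mouth) textures; everything else
# (sizes, offsets, RGBA handling) is an illustrative assumption.

def composite(base, layers):
    """Composite RGBA feature textures over the RGBA base at (u, v) pixel offsets."""
    out = base.copy()
    for tex, (u, v) in layers:
        h, w = tex.shape[:2]
        region = out[v:v + h, u:u + w]          # view into the base texture
        alpha = tex[..., 3:4] / 255.0
        region[..., :3] = alpha * tex[..., :3] + (1.0 - alpha) * region[..., :3]
    return out

face_base = np.zeros((256, 256, 4))             # UV1-like base texture (RGBA)
mouth_tex = np.full((32, 64, 4), 255.0)         # UV3-like opaque mouth texture
combined = composite(face_base, [(mouth_tex, (96, 180))])  # face base now includes a mouth
```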
Regarding claim 5, the limitation “wherein a periphery of the face base is fixed” is taught by Van Os in view of Liu (As discussed in the claim 4 rejection above, in Van Os’ modified system, the blank/predefined face model would include a base 2D texture representing the skin and other components, analogous to Liu, figure 2, UV1, which would have the other 2D texture elements rendered over the base 2D texture, as taught by Van Os and Liu, where the base 2D texture element corresponds to the claimed face base comprising a mouth part. As shown in Liu, figure 2, UV1 includes fixed portions of the 3D face model, e.g. the hair and chin components, which would not be animated as part of Van Os’ animations, e.g. paragraph 48 indicates that the face elements which are part of the expression are animated, whereas the periphery of the UV1 texture corresponds to parts of the face which are not changed by a changing expression.)
Regarding claim 6, the limitation “wherein the two-dimensional facial feature animation objects further comprise a tooth part, and the tooth part is adjacent to the mouth part” is taught by Van Os (Van Os, e.g. paragraph 27, figures 1D, 1E, 2A, 3A-3C, 4A, 4B, 5A-5C, teaches that the avatar face elements include teeth, which are shown adjacent to the mouth part.)
Regarding claim 8, the limitation “wherein the position parameter and the size parameter are both two-dimensional parameters” is implicitly taught by Van Os (As discussed in the claim 7 rejection above, Van Os, e.g. paragraphs 43, 46, 47, figures 5A-C, teaches that the parameters that can be changed by the user include adjusting the size and position of the elements on the face, where the elements may be 2D textures, e.g. paragraph 36. Van Os, paragraph 47, indicates that the movement of the 2D texture elements may be limited to a 2D region such as a rectangle, i.e. the position parameter is a 2D parameter. Further, Van Os, paragraph 46, indicates that the user can both resize and stretch the elements, i.e. while resizing may be a one-dimensional scaling operation, stretching corresponds to a two-dimensional scaling operation which has a larger amount of scaling in a first dimension, e.g. vertically, in comparison to a second dimension, e.g. horizontally. That is, one of ordinary skill in the art would have understood Van Os’ 2D texture elements being movable, resizable, and stretchable to implicitly teach that the position and size parameters are two-dimensional parameters. However, in the interest of compact prosecution, Liu is cited for explicitly teaching that character facial feature 2D texture elements are mapped onto a 3D character face using 2D UV coordinate mapping, i.e. comprising 2D position and 2D size parameters.) However, this limitation is taught by Liu (Liu, e.g. abstract, paragraphs 37-144, describes a system for personalizing the face of a 3D character model, which may be animated, e.g. paragraph 38, where a user can select different 2D textures to be applied to the 3D face model, e.g. paragraphs 65-74. Liu, e.g. paragraphs 39-41, 51-60, teaches that the 3D face model vertices include texture coordinates referencing the 2D UV coordinate systems of the UV1, UV2, UV3, and UV4 textures, where the UV1 texture is a face base mesh, e.g. paragraphs 52, 58, figure 2, and UV2-UV4 are eye, mouth, and eyebrow 2D textures, respectively. Liu, e.g. paragraph 112, Table 3, further teaches that a 2D texture image is generically mapped to the UV1 coordinate space using 2D coordinates relative to a 2D position TattooTexUoffset, TattooTexVoffset, with x and y coordinates within the tattoo texture scaled according to a single TattooTexSize. That is, while one of ordinary skill in the art would have found it implicit that Van Os’ 2D texture elements are mapped to the 3D face model using 2D position and size parameters, Liu, e.g. paragraphs 51-58, 112, shows that, as noted above, one of ordinary skill in the art would know that the 2D texture elements would be positioned using a 2D coordinate, analogous to the offset U and V coordinates, and a one-dimensional scaling can be applied using a single size parameter, analogous to the size value, where Van Os’ stretching effect would require at least two size parameters, i.e. a horizontal size/scaling value and vertical size/scaling value applied to the x and y coordinates, respectively.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify/implement Van Os’ avatar editing system, including Shin’s added emotion elements, including Shin’s robotic platform displaying a customized animated face as a supported application, to use Liu’s 2D UV texture mapping technique for modifying the position, size, and stretch of Van Os’ 2D face elements, as in Van Os, paragraphs 43, 46, 47. As noted above, Van Os teaches that 2D texture element position is a 2D parameter, and that the user can both resize and stretch the elements, i.e. while resizing may be a one-dimensional scaling operation, stretching corresponds to a two-dimensional scaling operation which has a larger amount of scaling in a first dimension, e.g. vertically, in comparison to a second dimension, e.g. horizontally, which is shown by Liu’s discussion of 2D UV texture mapping, i.e. showing that mapping a 2D tattoo texture to the basic face texture UV coordinate system relies on a 2D UV position coordinate, and sizing/scaling the texture coordinates according to the size of the 2D tattoo texture being applied, which would require at least two size parameters to differently scale the x and y coordinates to implement Van Os’ stretching operation.
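The position/size discussion can be summarized with a short sketch (generic UV arithmetic assumed for illustration; the offset and scale names merely echo, and do not reproduce, Liu's Table 3 parameters), showing that placing a 2D element requires a two-dimensional position and that a Van Os-style stretch requires separate horizontal and vertical scale values:

```python
# Hypothetical sketch of mapping a 2D element's local (u, v) coordinates into
# the face texture's UV space. The offset/scale names echo, but do not
# reproduce, Liu's TattooTexUoffset / TattooTexVoffset / TattooTexSize.

def map_element(u, v, offset_u, offset_v, scale_u, scale_v):
    """2D position parameter: (offset_u, offset_v).
    2D size parameter: (scale_u, scale_v); scale_u != scale_v stretches."""
    return offset_u + scale_u * u, offset_v + scale_v * v

# Uniform resize: a single scale value applied to both axes.
print(map_element(0.5, 0.5, 0.25, 0.5, 0.5, 0.5))   # -> (0.5, 0.75)
# Stretch: more vertical than horizontal scaling, as with Van Os' stretching.
print(map_element(0.5, 0.5, 0.25, 0.5, 0.5, 1.0))   # -> (0.5, 1.0)
```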
Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2011/0248992 A1 (hereinafter Van Os) in view of U.S. Patent Application Publication 2020/0001465 A1 (hereinafter Shin) as applied to claims 1 and 10 above, and further in view of U.S. Patent Application Publication 2011/0296324 A1 (hereinafter Goossens).
Regarding claim 12, the limitations “wherein the editing platform comprises a human-machine interface, and the human-machine interface includes a preview window, an adjustment window, an accessory window” are taught by Van Os (Van Os, e.g. paragraphs 28-35, 38-45, figures 1-4, teaches displaying the avatar with the current customizations as they are selected, corresponding to the claimed preview window, receiving manual adjustments to size, position, and rotation of the avatar elements, and picker windows for selecting different colors, corresponding to the claimed adjustment window(s), as well as category picker windows for accessories such as hats as in figure 2A, corresponding to the claimed accessory window(s).)
The limitation “and an emotion setting window, and wherein the emotion setting window is for receiving the emotion instruction” is implicitly taught by Van Os (As discussed in the claim 1 rejection above, Van Os, e.g. paragraph 49, teaches that the user can set the expressions/emotions to trigger in response to various trigger events, i.e. the user provides the emotion instruction which selects the emotion data for adjusting the object parameters. Van Os does not explicitly teach the use of an emotion setting window, per se, i.e. Van Os only provides the example of selection from a menu, although one of ordinary skill in the art would have found it implicit that Van Os’ system could provide a window based interface for setting the expressions/emotions and trigger events, i.e. Van Os’ interface as shown in the figures is window based, which implicitly suggests using a window based interface for the setting interface described in paragraph 49. In the interest of compact prosecution, Goossens, disclosing a related avatar editing system to Van Os, is cited for teaching details of an emotion setting window.) However, this limitation is taught by Goossens (Goossens, e.g. abstract, paragraphs 41-130, describes a system for customizing avatar expressions/animations, including customizing/modifying the expressions/animations, per se, e.g. paragraphs 50-57, 65-83, customizing/modifying the triggers, e.g. paragraphs 86-102, and setting the expression/animation/emotion to be expressed in response to a particular trigger, e.g. paragraphs 103-126, wherein the user interface for performing these interactions comprises a window display, e.g. figures 6A, 6B, i.e. Goossens’ interface corresponds to the claimed emotion setting window for receiving the emotion instruction. Further, Goossens, e.g. paragraph 47, cites Van Os as an exemplary avatar editing environment that Goossens’ interface is compatible with, i.e. Van Os’ parent provisional application 61/321,840 is incorporated by reference in Goossens’ disclosure in its entirety, such that one of ordinary skill in the art would understand that Goossens’ interface could be used to perform the functionality described by Van Os in paragraph 49.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify/implement Van Os’ avatar editing system, including Shin’s added emotion elements, including Shin’s robotic platform displaying a customized animated face as a supported application, to include Goossens’ avatar expression/animation/trigger customization/modification interface to provide the avatar expression/animation/trigger customization/modification functions described by Van Os, paragraph 49, because Goossens’ interface is both compatible with and intended for use with Van Os’ avatar editing system, e.g. Goossens, paragraph 47, and because Van Os does not describe the interface of paragraph 49 in significant detail, motivating one of ordinary skill in the art to consider other analogous references, such as Goossens, for details of an interface for performing the functionality described in Van Os, paragraph 49.
Regarding claim 13, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 12 above.
Response to Arguments
Applicant's arguments filed 1/30/26 have been fully considered but they are not persuasive.
Applicant argues that the rejection maps the claimed emotion instructions to the predetermined trigger events of Van Os, paragraphs 49-50, and that Van Os does not teach script data for presenting a dynamic change of the 3D face model. Applicant's emphasis on the cited passage of the 11/21/25 Office Action misconstrues the claim interpretation. That is, while the predetermined trigger events are indeed mapped to the emotion instruction, as shown by Applicant's emphasis, the adjusted object parameters, the adjustment amounts of the object parameters, and the now-claimed script data to present a dynamic change of the three-dimensional face model are part of the “emotion data,” not the “emotion instruction.” Rather, the emotion instruction is used to select the emotion data. Therefore, Applicant's assertion that Van Os fails to teach the amended claim limitation because the predetermined trigger event does not correspond to the claimed script data is unpersuasive, because the claim does not require that the emotion instruction include the claimed script data.
It is further noted that the 11/21/25 Office Action rejection, following the end of Applicant’s citation, further explains that Van Os’ system modifies the display parameters of the avatar face elements according to the particular expression/emotion triggered by the predetermined event, corresponding to the claimed adjusting object parameters according to a piece of emotion data selected according to an emotion instruction. That is, each of Van Os’ emotions, e.g. happy, sad, angry, surprise, boredom, corresponds to a distinct dataset of animation data to be applied to the avatar face elements, i.e. the claimed emotion data. As further discussed in the 11/21/25 Office Action and in the above rejection, said expression/emotion data is used to determine the different avatar element animation parameters corresponding to the claimed adjustment amounts of the object parameters. Finally, as discussed in the above rejection, the same animations corresponding to different emotions/expressions can be applied to any user’s avatar, corresponding to the currently claimed script data to present a dynamic change of the three-dimensional face model, i.e. the data used to define an animation to be applied to an avatar’s object parameters is the claimed script data used to present a dynamic change. Therefore, this argument cannot be considered persuasive because Van Os’ different emotions/expressions include the claimed script data used to present a dynamic change of the three-dimensional face model.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT BADER whose telephone number is (571)270-3335. The examiner can normally be reached Monday through Friday, 11:00 a.m. to 7:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT BADER/Primary Examiner, Art Unit 2611