Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office Action is responsive to Applicant's Amendment filed on September 8, 2025, in which claims 1-20 are pending.
Response to Amendment
Applicant has amended claim 1. Claims 1-20 remain pending.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Amended claim 1, line 14, recites "…and/or …". The Examiner is unclear as to the metes and bounds of this limitation. Is it Applicant's intent that the animation or simulation show the user's lip-sync, the user's body movement, or both?
The dependent claims included in the statement of rejection but not specifically addressed in the body of the rejection inherit the deficiencies of their parent claim and have not resolved those deficiencies. Therefore, they are rejected on the same rationale as applied to their parent claim above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Siddique et al. (US 2013/0215116 A1), hereinafter "Siddique," in view of Riahi et al. (US 2014/0314225 A1), hereinafter "Riahi."
As to claim 1, Siddique discloses a method for chatting with user profile, which is available to chat while a user is not available online or not willing to chat, on a social media network, the social media network represents a network of various user profiles owned by their users wherein the user profiles are connected to each other with various level of relationship or non-connected, and the user profile comprising an image having face of the user (Siddique, Abstract [104-105, 112], discloses online apparel modeling and collaboration system comprising generating, viewing and editing three-dimensional models of users, wherein the user models may then be used to model items of apparel, sharing experiences and products manifestations on social networks), the method comprising:
-receiving a user request to chat with one of a user profile on the social media network, wherein a handler of the user profile is not available to chat or not connected to a person making the user request in the social media network, and wherein the user request is an audio, visual, text, or combination thereof (Siddique [112-114, 115-116, 118-119], discloses reconstruction engine to generate a three-dimensional model of the user and apply it to the virtual model, wherein the model can be dressed up in apparel, make-up and hairstyles as desired by the user and involved in interaction with other user, and the user is able to request and receive information such as discussions/chats/real-time interaction),
-processing the user request using Artificial Intelligence based Learning engine using a user profile initial information, and optionally a user profile activity information and generating a display information (Siddique [102, 115-118, 130], discloses machine learning, pattern analysis, and machine intelligence for generating three-dimensional models with learned speech/dialogues and interactions between the user models),
wherein the displaying information comprises a video or animation or simulation of the user, showing the user’s face performing lip-sync and/or one or more body movement, whereas, the handler of the user profile is capable of chatting with the person making the user request (Siddique [108, 112-114], discloses user profile created and associated with the three dimensional model, growing/shrinking regions based on extracted features of the face, photorealistic modeling of apparel permitting life-like simulation, and [114, 118-120] user profile/model is capable of interacting/chatting with the requesting user by inviting users to participate in synchronized sessions for sharing videos, and other multimedia),
wherein the user profile initial information is an information provided while creating the user profile on the social media network or updated in the user profile or information as per data form (Siddique [104-105, 118, 131-133], discloses creating/generating a user profile associated with social network),
wherein the user profile activity information is an information derived from various activities carried out by the user through its user profile on the social media network, wherein the user profile activity information comprises at least one of relationship information between the user profiles, contents posted using the user profile, sharing of contents posted by other user profiles, annotating of contents posted by user profile, or combination thereof (Siddique [104-108, 115-118], discloses sharing user activities like game-play, singing, and dancing, and additionally discloses that the system's friendship manager is used to manage a user's relationship with other users).
Siddique does not explicitly disclose wherein the user profile activity information comprises at least one of relationship information between the user profiles.
However, Riahi [144-147, 151-153] discloses that the live intelligent agent pool administration module maintains contact information of members of an online social community and their activities through an online social community module; further discloses that the profiles of the virtual agents are maintained in the same live agent database as the regular live agents and that the virtual agents could be available for shifts as a temporary assignment; and further discloses that the automated agent may create a social community for customers by taking advantage of customers' natural tendencies to socialize and to share information with those in similar situations, such that the automated agent may organize customers experiencing similar challenges or concerns into a social community.
Siddique and Riahi are analogous art because they are from the same field of endeavor, namely, systems and methods of collaborative experience over the Internet. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Siddique and Riahi before him or her, to modify the online collaborative model of Siddique to include the system of Riahi, with a reasonable expectation that this would result in a system that utilizes user activity information, including relationship information. This method of improving the system of Siddique was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Riahi. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Siddique with Riahi to obtain the invention as specified in claim 1.
As to claim 2, Siddique-Riahi discloses the method according to the claim 1, wherein the user profile initial information comprises various piece of information, and at least one piece of information is provided by the user in audio format (Riahi [66-67, 134-135], discloses wherein intelligent automated agent may refer to any computer-implemented entity that carries out the role of the agent in certain capacities, including voice/audio, such as chat or instant messaging). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 1 above.
As to claim 3, Siddique-Riahi discloses the method according to the claim 1, wherein the user profile initial information comprises various piece of information, and at least one piece of information is mapped with a particular facial expression and/or body part/s movement (Riahi [100-103, 105-107], discloses wherein sampling from persons of the same culture as the customer, as expressions and body language may vary by culture, their gestures or other visual cues, voice patterns characteristic). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 1 above.
As to claim 4, Siddique-Riahi discloses the method according to the claim 1, wherein the user profile initial information comprises various piece of information, and each piece of information is mapped to a privacy level selected from a set of privacy level (Riahi [135, 150-153], discloses an intelligent automated agent according to one exemplary embodiment, wherein the intelligent automated agent may be instantiated at the enterprise level, such as enterprise automated agents 720, 730, and 740, and each enterprise automated agent, using enterprise automated agent 720 as an example, may be responsible for the automated agent functions of an entire enterprise). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 1 above.
As to claim 5, Siddique-Riahi discloses the method according to the claim 1, wherein the image comprises at least one more body part except face of the user, and the user profile initial information comprises various piece of information, and at least one piece of information is linked to a body movement, wherein the body movement is movement of at least one of the body part other than face as provided in the image of the user (Riahi [100-104], discloses wherein customer emotion and mood detection module 230 of FIG. 5 may include a voice analysis module 1710 for analyzing customer voice samples, a speech analysis module 1720 for analyzing customer speech, such as words and phrases spoken by the customer, and a visual cue analysis module 1730 for analyzing customer gestures and other visual cues for detecting possible customer emotions, moods, sentiments, etc., as expressed to the contact center, to the automated agent's avatar during an interaction). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 1 above.
As to claim 6, Siddique-Riahi discloses the method according to the claim 1, wherein the user request is a chat request made by user of one user profile to at least one of another user profiles, the method comprises receiving conversation input comprising at least text or audio, or combination thereof, from either of the user profile, and processing the conversation input and the image of the user profile to provide the display output showing the user with at least voice, lipsing, facial expression, or body movement, or combination thereof (Riahi [100-104, 107-108], discloses wherein customer emotion and mood detection module 230 of FIG. 5 may include a voice analysis module 1710 for analyzing customer voice samples, a speech analysis module 1720 for analyzing customer speech, such as words and phrases spoken by the customer, and a visual cue analysis module 1730 for analyzing customer gestures and other visual cues for detecting possible customer emotions, moods, sentiments, etc., as expressed to the contact center, to the automated agent's avatar during an interaction). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 1 above.
As to claim 7, Siddique-Riahi discloses the method according to the claim 6 comprising: processing image of each of the user profile in conversation based on the chat request and generating an environment image showing face of each of the user profile, processing the conversation input and the environment image, and generating the display output showing the users in conversation with at least one of the user with at least voice, lipsing, facial expression, or body movement, or combination thereof (Riahi [100-104, 107-108], discloses wherein customer emotion and mood detection module 230 of FIG. 5 may include a voice analysis module 1710 for analyzing customer voice samples, a speech analysis module 1720 for analyzing customer speech, such as words and phrases spoken by the customer, and a visual cue analysis module 1730 for analyzing customer gestures and other visual cues for detecting possible customer emotions, moods, sentiments, etc., as expressed to the contact center, to the automated agent's avatar during an interaction). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 6 above.
As to claim 8, Siddique-Riahi discloses the method according to the claim 1, wherein the displaying information is a video or animation showing the user in two dimension or three dimension (Riahi [65, 95-97], discloses audio and video avatars interacting with the customer). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 1 above.
As to claim 9, Siddique-Riahi discloses the method according to the claim 1, comprising: extracting at least one of facial features and body features from the image of the user profile; processing the extracted features to enact the display information (Riahi [100-104, 107-108], discloses wherein customer emotion and mood detection module 230 of FIG. 5 may include a voice analysis module 1710 for analyzing customer voice samples, a speech analysis module 1720 for analyzing customer speech, such as words and phrases spoken by the customer, and a visual cue analysis module 1730 for analyzing customer gestures and other visual cues for detecting possible customer emotions, moods, sentiments, etc., as expressed to the contact center, to the automated agent's avatar during an interaction). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 1 above.
As to claim 10, Siddique-Riahi discloses the method according to claim 1, comprising: receiving a wearing input related to a body part of the user in the image of the user profile onto which a fashion accessory is to be worn; processing the wearing input and identifying body part/s of the user onto which the fashion accessory is to be worn; receiving an image/video of the accessory according to the wearing input; processing the identified body part/s of the user and the image/video of the accessory and generating a view showing the user wearing the fashion accessory (Riahi [95-97, 107-108], discloses an intelligent automated agent/avatar for voice and/or video communication, wherein the customer emotion and mood detection module 230 of FIG. 5 may include a voice analysis module 1710 for analyzing customer voice samples, a speech analysis module 1720 for analyzing customer speech, such as words and phrases spoken by the customer, and a visual cue analysis module 1730 for analyzing customer gestures and other visual cues for detecting possible customer emotions, moods, sentiments, etc., as expressed to the contact center, to the automated agent's avatar during an interaction). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 1 above.
As to claim 18, Siddique-Riahi discloses the method according to the claim 1, comprising: receiving a target image showing a face of another person or animal, processing the user image and the target image to generate a morphed image showing the face from the target image on the user's body from the image of the user (Riahi [97-101], discloses wherein a video generation module 1640 for generating the visual appearance of the avatar). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 1 above.
As to claim 19, Siddique-Riahi discloses the method according to the claim 1, comprising: receiving a message from at least one of the users of the social media network, wherein the message comprises at least one of a text, a voice and a smiley, or combination thereof; processing the message to extract or receive an audio data related to voice of the user, and a facial movement data related to expression to be carried on face of the user; processing the image of the user, the audio data, and the facial movement data; and generating an animation of the user enacting the message (Riahi [100-104, 107-108], discloses wherein customer emotion and mood detection module 230 of FIG. 5 may include a voice analysis module 1710 for analyzing customer voice samples, a speech analysis module 1720 for analyzing customer speech, such as words and phrases spoken by the customer, and a visual cue analysis module 1730 for analyzing customer gestures and other visual cues for detecting possible customer emotions, moods, sentiments, etc., as expressed to the contact center, to the automated agent's avatar during an interaction). The Examiner supplies the same rationale for the combination of references Siddique and Riahi as in claim 1 above.
Allowable Subject Matter
Claims 11-17 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant's arguments with respect to claims 1-20 have been considered. However, upon further consideration, a new ground(s) of rejection is made in view of Siddique et al. (US 2013/0215116 A1) and in further view of Riahi et al. (US 2014/0314225 A1).
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Form 892.
Conclusion
The rejections are based upon the broadest reasonable interpretation of the claims. Applicant is advised that the specified citations of the relied-upon prior art in the above rejections are only representative of the teachings of the prior art, and that any other supportive sections within the entirety of each reference (including any figures, incorporations by reference, claims, and/or priority documents) are implied as being applied to teach the scope of the claims.
Applicant may not introduce any new matter to the claims or to the specification. For any subsequent response that contains new or amended claims, Applicant is required to cite the corresponding support in the specification. (See MPEP § 2163.03(I), § 2163.04(I), and § 2163.06.)
Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Razu A Miah whose telephone number is (571)270-5433. The examiner can normally be reached M-F, 9-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wing Chan, can be reached at 27493. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RAZU A MIAH/Primary Examiner, Art Unit 2441