DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 6-18 are objected to under 37 CFR 1.75(c) as being in improper form because a multiple dependent claim should refer to other claims in the alternative only, and cannot depend from any other multiple dependent claim. See MPEP § 608.01(n). Accordingly, claims 6-18 have not been further treated on the merits.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 5/3, 5/4, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ezrielev et al. (US Patent 11,893,152 B2).
As to claim 1, Ezrielev teaches a system for encoding an emotional reaction of a user to an input virtual environment sensorially experienced by the user (i.e., in the embodiments of Figures 2 and 10, Ezrielev shows a virtual environment image being sensed by the user and sentiment/engagement detection of the user by the analytics engine 250 based on user sensor data) (see Figs. 2 and 10, Col. 10, Lines 1-26) comprising:
an emotional data determining unit configured to determine emotional response data relating to an emotional response of the user to the input virtual environment (i.e., the analytics engine 250 is configured to determine emotional response data of the user from the user data) (see Figs. 2 and 10, Col. 9, Line 20-Col. 10, Line 12);
an anchor determining unit configured to determine at least one anchor within the input virtual environment to which the emotional data is attributable (i.e., facial data with anchor points aids the video sentiment/engagement analysis, which is based on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database) (see Fig. 10, Col. 15, Line 58-Col. 16, Line 35);
an emotion encoding unit configured to generate an emotionally encoded representation of the input virtual environment (i.e., in the system of Figure 10, the video sentiment/engagement analyzer 1040 produces the emotion-encoded representation of the response to the input virtual environment) (see Fig. 10, Col. 15, Line 58-Col. 16, Line 35), whereby the emotionally encoded representation of the virtual environment is encoded with data relating to the emotional response of the user to the input virtual environment, based on the input virtual environment, the emotional response data and the at least one anchor (i.e., as seen in Figures 2 and 10, the virtual environment system of Ezrielev tracks the emotional response/engagement level of the user based on the video sensors and the user feedback) (see Fig. 10, Col. 15, Line 58-Col. 16, Line 35).
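Purely for illustration of the claimed three-unit arrangement mapped above, the following minimal Python sketch shows one way such units could fit together. All names, data shapes, and logic are assumptions made for exposition only; they are not taken from Ezrielev's disclosure or the claims as filed.

from dataclasses import dataclass

# Illustrative sketch only; names and data shapes are hypothetical.
@dataclass
class EmotionalResponse:
    valence: float      # e.g., -1.0 (negative) to +1.0 (positive)
    engagement: float   # e.g., 0.0 (disengaged) to 1.0 (engaged)

def determine_emotional_response(sensor_data: dict) -> EmotionalResponse:
    # "Emotional data determining unit": derive response data from user sensor data.
    return EmotionalResponse(valence=sensor_data.get("valence", 0.0),
                             engagement=sensor_data.get("engagement", 0.0))

def determine_anchor(environment: dict, gaze_target: str) -> str:
    # "Anchor determining unit": select the in-environment object to which
    # the emotional data is attributable (falling back to the whole scene).
    return gaze_target if gaze_target in environment.get("objects", []) else "scene"

def encode_environment(environment: dict, response: EmotionalResponse, anchor: str) -> dict:
    # "Emotion encoding unit": attach the response data to the anchor,
    # yielding the emotionally encoded representation of the environment.
    encoded = dict(environment)
    annotations = dict(encoded.get("emotion_annotations", {}))
    annotations[anchor] = {"valence": response.valence, "engagement": response.engagement}
    encoded["emotion_annotations"] = annotations
    return encoded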
As to claim 19, Ezrielev discloses a method of encoding an emotional reaction of a user to an input virtual environment sensorially experienced by the user (i.e., in the embodiments of Figures 2 and 10, Ezrielev shows a virtual environment image being sensed by the user and sentiment/engagement detection of the user by the analytics engine 250 based on user sensor data) (see Figs. 2 and 10, Col. 10, Lines 1-26), comprising:
determining emotional response data relating to an emotional response of the user to the input virtual environment (i.e., the analytics engine 250 is configured to determine emotional response data of the user from the user data) (see Figs. 2 and 10, Col. 9, Line 20-Col. 10, Line 12);
determining at least one anchor within the input virtual environment to which the emotional data is attributable (i.e., facial data with anchor points aids the video sentiment/engagement analysis, which is based on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database) (see Fig. 10, Col. 15, Line 58-Col. 16, Line 35);
generating an emotionally encoded representation of the input virtual environment (i.e., in the system of Figure 10, the video sentiment/engagement analyzer 1040 produces the emotion-encoded representation of the response to the input virtual environment) (see Fig. 10, Col. 15, Line 58-Col. 16, Line 35), whereby the emotionally encoded representation of the virtual environment is encoded with data relating to the emotional response of the user to the input virtual environment, based on the input virtual environment, the emotional response data and the at least one anchor (i.e., as seen in Figures 2 and 10, the virtual environment system of Ezrielev tracks the emotional response/engagement level of the user based on the video sensors and the user feedback) (see Fig. 10, Col. 15, Line 58-Col. 16, Line 35).
As to claim 2, Ezrielev teaches the system of claim 1, wherein the emotionally encoded representation of the input virtual environment is visually encoded with data relating to the emotional response of the user to the input virtual environment (i.e., the video-based processing system of Figure 10 visually encodes the input virtual environment with the user engagement reaction) (see Fig. 10, Col. 15, Line 58-Col. 16, Line 35).
As to claim 3, Ezrielev teaches the system of claim 2, wherein the encoded data is configured to be visually decodable by a second user (i.e., as seen in Figure 13 and the Abstract, the system of Ezrielev supports multiple users via a historical conversation database 1310 that is able to track separate users) (see Abstract, Col. 21, Line 47-Col. 22, Line 38).
As to claim 4, Ezrielev teaches the system of claim 3, wherein the encoded data represents the emotional response of the user to the input virtual environment using variation in colour (i.e., as seen in the Figure 13 embodiment, the color of a virtual object is modified upon detection of sentiments and/or engagement) (see Fig. 13, Col. 19, Lines 1-6).
As to claim 5/3, Ezrielev teaches the system of claim 3, wherein the encoded data represents the emotional response of the user to the input virtual environment using a heat map (i.e., as seen in Figure 3, the session leader may see a heat map indicating the focus of the session participants, which is a visualization of the emotional response of the user to the input virtual environment) (see Fig. 3, Col. 15, Lines 50-58).
As to claim 5/4, Ezrielev teaches the system of claim 4, wherein the encoded data represents the emotional response of the user to the input virtual environment using a heat map (i.e., as seen in Figure 3, the session leader may see a heat map indicating the focus of the session participants, which is a visualization of the emotional response of the user to the input virtual environment) (see Fig. 3, Col. 15, Lines 50-58).
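As a purely illustrative aside on the colour and heat-map encodings mapped above, the following Python sketch shows one conventional way a normalized emotional intensity could be rendered as a heat-map colour. The mapping is an assumption for exposition only and is not taken from Ezrielev.

def heatmap_color(intensity: float) -> tuple:
    # Map a normalized emotional intensity in [0, 1] to an (R, G, B) colour,
    # blending blue (low) -> green (mid) -> red (high) as in a conventional
    # heat map. Hypothetical mapping, not Ezrielev's.
    t = max(0.0, min(1.0, intensity))
    if t < 0.5:
        return (0, int(255 * t * 2), int(255 * (1 - t * 2)))  # blue -> green
    return (int(255 * (t - 0.5) * 2), int(255 * (1 - (t - 0.5) * 2)), 0)  # green -> red

# Example: heatmap_color(0.0) == (0, 0, 255); heatmap_color(1.0) == (255, 0, 0).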
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The prior art reference Rabinovich et al. (US Pub. 2020/0111262 A1) is cited as teaching another type of emotion detection system for a virtual-reality-based system, as seen in the embodiments of Figures 1-7.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CALVIN C. MA whose telephone number is (571) 270-1713. The examiner can normally be reached from 8:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin C. Lee can be reached on 571-272-2963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CALVIN C MA/Primary Examiner, Art Unit 2693 November 1, 2025