Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to the application filed December 10, 2024. Claims 1-20 are pending and examined.
Specification
Applicant is required to update the status (pending, allowed, etc.) of all parent priority applications in the first line of the specification. The status of all citations of US filed applications in the specification should also be updated where appropriate.
Claim Objections
Claims 5-8, 14-17, and 20 are objected to as being dependent upon a rejected base claim. Appropriate correction is required.
Information Disclosure Statement
An initialed and dated copy of Applicant’s IDS form 1449 filed March 27, 2025, is attached to the instant Office action.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-17 of U.S. Patent No. 11,605,464 (the grandparent of this application). Although the conflicting claims are not identical, they are not patentably distinct from each other. Patent claim 1 corresponds to application claim 1 in combination with its dependent claim 5. Patent claims 2-4 are identical to application claims 2-4. Patent claims 5-8 are identical to application claims 6-9. Patent claim 9 corresponds to application claim 10 in combination with its dependent claim 14. Patent claims 10-12 are identical to application claims 11-13. Patent claims 13-16 are identical to application claims 15-18. Patent claim 17 corresponds to application claim 19 in combination with its dependent claim 20.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 9-13, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over He et al. (U.S. Patent 10,628,528 B2) in view of Shi et al. (User Emotion Recognition based on Multi-Class Sensors of Smartphone).
As per claim 1, He teaches:
A computing system comprising:
one or more processors; (See at least He column 6 lines 53-61 The CPU) and
memory storing instructions (See at least He column 6 lines 53-61 the stored program instructions that are stored in the memory.) that, when executed by the one or more processors, cause the computing system to perform:
obtaining electronic data of a user; (See at least He column 7 lines 4-14 Obtains data for analysis)
determining input data for at least one machine learning model based on the electronic data of the user; (See at least He column 7 lines 21-30 Determines what data to consider.)
predicting, based on the input data and the at least one machine learning model, a mental state of the user, the mental state comprising a set of mood values, a set of uncertainty values, and a set of magnitude values, each mood value of the set of mood values being associated with a corresponding uncertainty value of the set of uncertainty values and a corresponding magnitude value of the set of magnitude values, the corresponding magnitude value indicating a relative strength or weakness of the mood value associated with the corresponding uncertainty value and the corresponding magnitude value; (See at least He column 11 lines 20-32: the “score on a continuum rating” is an uncertainty/magnitude value; column 7 line 31 - column 9 line 37: conducts machine learning determining how positive or negative statements are so they can be scored.)
selecting and arranging, by the computing system based on the predicted mental state of the user, a subset of graphical elements from a set of graphical elements, each graphical element of the set of graphical elements being associated with a corresponding mood value of the set of mood values, and each graphical element of the subset of graphical elements being associated with the predicted mental state of the user; (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report)
facilitating presentation, via a graphical user interface (GUI), of the subset of graphical elements according to the selection and arrangement of the subset of graphical elements; (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report)
receiving, in response to the user interacting with the GUI presenting the subset of graphical elements according to the selection and arrangement of the subset of graphical elements, a user selection of a particular graphical element of the subset of graphical elements; (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report) and
facilitating presentation, via the GUI in response to the user selection, of the user selected graphical element of the subset of graphical elements. (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report)
While He does teach machine learning and providing certainty measures of predictions, He is focused on establishing the mood of reviews rather than of users more generally. Shi, however, teaches applying the same kind of analysis to establish the mood of users based on the data that can be provided by their phones. (See at least Shi Abstract; page 480 Fig. 1 and the lines immediately below it, which list the emotions being classified: “happiness, sadness, fear, anger and neutral”; page 481 B. Data Collection: “we determined to collect the following types of data: sensors data of motion, including accelerometers, gyroscopes, magnetometers and movement (distinguish user in steady, slow speed or fast speed state); sensors data of environment, including light sensors and GPS information; mobile phone usage data, including social records (calls, text messages, WeChat, QQ, Bluetooth record), phone usage records (WiFi, Application usage, unlock and lock the screen, camera), and records of mobile phone state (Phone mode, Whether or not connected to the Internet, Whether charges).”) Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply the same kind of analysis to finding a user's mood, since it had been done by Shi.
As per claims 2 and 11, Shi teaches finding the emotional state tied to the time of the data. (See at least Shi page 481 C. Emotion Record.) Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, since it was solving a known problem in a known way with an expectation of success.
As per claims 3 and 12, Shi teaches the set of mood values. (See at least Shi page 480 Fig. 1 and the lines immediately below it, which list the emotions being classified: “happiness, sadness, fear, anger and neutral”.) Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, since it was solving a known problem in a known way with an expectation of success.
As per claims 4 and 13, Shi teaches the claimed data. (See at least Shi page 481 B. Data Collection: “we determined to collect the following types of data: sensors data of motion, including accelerometers, gyroscopes, magnetometers and movement (distinguish user in steady, slow speed or fast speed state); sensors data of environment, including light sensors and GPS information; mobile phone usage data, including social records (calls, text messages, WeChat, QQ, Bluetooth record), phone usage records (WiFi, Application usage, unlock and lock the screen, camera), and records of mobile phone state (Phone mode, Whether or not connected to the Internet, Whether charges).”) This is an inclusive list of the data available from a mobile phone; therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, since it was solving a known problem in a known way with an expectation of success.
As per claims 9 and 18, while He and Shi are not explicit about using emojis, Appleton taught using them to convey emotions in a graphical user interface. (See at least Appleton paragraphs 64-65.) Therefore it would have been obvious to a person of ordinary skill in the art of graphical user interfaces to use emojis to convey emotions, since it was solving a known problem in a known way with an expectation of success.
As per claim 10, He teaches:
A method being implemented by a computing system including one or more physical processors (See at least He column 6 lines 53-61 The CPU) and storage media storing machine-readable instructions, (See at least He column 6 lines 53-61 the stored program instructions that are stored in the memory.) the method comprising:
obtaining electronic data of a user; (See at least He column 7 lines 4-14 Obtains data for analysis)
determining input data for at least one machine learning model based on the electronic data of the user; (See at least He column 7 lines 21-30 Determines what data to consider.)
predicting, based on the input data and the at least one machine learning model, a mental state of the user, the mental state comprising a set of mood values, a set of uncertainty values, and a set of magnitude values, each mood value of the set of mood values being associated with a corresponding uncertainty value of the set of uncertainty values and a corresponding magnitude value of the set of magnitude values, the corresponding magnitude value indicating a relative strength or weakness of the mood value associated with the corresponding uncertainty value and the corresponding magnitude value; (See at least He column 11 lines 20-32: the “score on a continuum rating” is an uncertainty/magnitude value; column 7 line 31 - column 9 line 37: conducts machine learning determining how positive or negative statements are so they can be scored.)
selecting and arranging, by the computing system based on the predicted mental state of the user, a subset of graphical elements from a set of graphical elements, each graphical element of the set of graphical elements being associated with a corresponding mood value of the set of mood values, and each graphical element of the subset of graphical elements being associated with the predicted mental state of the user; (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report)
facilitating presentation, via a graphical user interface (GUI), of the subset of graphical elements according to the selection and arrangement of the subset of graphical elements; (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report)
receiving, in response to the user interacting with the GUI presenting the subset of graphical elements according to the selection and arrangement of the subset of graphical elements, a user selection of a particular graphical element of the subset of graphical elements; (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report) and
facilitating presentation, via the GUI in response to the user selection, of the user selected graphical element of the subset of graphical elements. (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report)
While He does teach machine learning and providing certainty measures of predictions, He is focused on establishing the mood of reviews rather than of users more generally. Shi, however, teaches applying the same kind of analysis to establish the mood of users based on the data that can be provided by their phones. (See at least Shi Abstract; page 480 Fig. 1 and the lines immediately below it, which list the emotions being classified: “happiness, sadness, fear, anger and neutral”; page 481 B. Data Collection: “we determined to collect the following types of data: sensors data of motion, including accelerometers, gyroscopes, magnetometers and movement (distinguish user in steady, slow speed or fast speed state); sensors data of environment, including light sensors and GPS information; mobile phone usage data, including social records (calls, text messages, WeChat, QQ, Bluetooth record), phone usage records (WiFi, Application usage, unlock and lock the screen, camera), and records of mobile phone state (Phone mode, Whether or not connected to the Internet, Whether charges).”) Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply the same kind of analysis to finding a user's mood, since it had been done by Shi.
As per claim 19, He teaches:
A non-transitory computer readable medium comprising instructions (See at least He column 6 lines 53-61 the stored program instructions that are stored in the memory.) that, when executed, cause one or more processors (See at least He column 6 lines 53-61 The CPU) to perform:
obtaining electronic data of a user; (See at least He column 7 lines 4-14 Obtains data for analysis)
determining input data for at least one machine learning model based on the electronic data of the user; (See at least He column 7 lines 21-30 Determines what data to consider.)
predicting, based on the input data and the at least one machine learning model, a mental state of the user, the mental state comprising a set of mood values, a set of uncertainty values, and a set of magnitude values, each mood value of the set of mood values being associated with a corresponding uncertainty value of the set of uncertainty values and a corresponding magnitude value of the set of magnitude values, the corresponding magnitude value indicating a relative strength or weakness of the mood value associated with the corresponding uncertainty value and the corresponding magnitude value; (See at least He column 11 lines 20-32: the “score on a continuum rating” is an uncertainty/magnitude value; column 7 line 31 - column 9 line 37: conducts machine learning determining how positive or negative statements are so they can be scored.)
selecting and arranging, by the computing system based on the predicted mental state of the user, a subset of graphical elements from a set of graphical elements, each graphical element of the set of graphical elements being associated with a corresponding mood value of the set of mood values, and each graphical element of the subset of graphical elements being associated with the predicted mental state of the user; (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report)
facilitating presentation, via a graphical user interface (GUI), of the subset of graphical elements according to the selection and arrangement of the subset of graphical elements; (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report)
receiving, in response to the user interacting with the GUI presenting the subset of graphical elements according to the selection and arrangement of the subset of graphical elements, a user selection of a particular graphical element of the subset of graphical elements; (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report) and
facilitating presentation, via the GUI in response to the user selection, of the user selected graphical element of the subset of graphical elements. (See at least He column 12 lines 31-38 Using a Graphical User Interface to display the report)
While He does teach machine learning and providing certainty measures of predictions, He is focused on establishing the mood of reviews rather than of users more generally. Shi, however, teaches applying the same kind of analysis to establish the mood of users based on the data that can be provided by their phones. (See at least Shi Abstract; page 480 Fig. 1 and the lines immediately below it, which list the emotions being classified: “happiness, sadness, fear, anger and neutral”; page 481 B. Data Collection: “we determined to collect the following types of data: sensors data of motion, including accelerometers, gyroscopes, magnetometers and movement (distinguish user in steady, slow speed or fast speed state); sensors data of environment, including light sensors and GPS information; mobile phone usage data, including social records (calls, text messages, WeChat, QQ, Bluetooth record), phone usage records (WiFi, Application usage, unlock and lock the screen, camera), and records of mobile phone state (Phone mode, Whether or not connected to the Internet, Whether charges).”) Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply the same kind of analysis to finding a user's mood, since it had been done by Shi.
Conclusion
Any inquiry concerning this communication from the examiner should be directed to Scott S. Trotter, whose telephone number is 571-272-7366. The examiner can normally be reached Monday through Friday, 8:30 AM – 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Gart, can be reached at 571-272-3955.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
The fax phone numbers for the organization where this application or proceeding is assigned are as follows:
(571) 273-8300 (Official Communications; including After Final Communications labeled “BOX AF”)
(571) 273-7366 (Draft Communications)
/SCOTT S TROTTER/Primary Examiner, Art Unit 3696