DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
In view of the Pre-Appeal Brief filed on 5/12/2025, PROSECUTION IS HEREBY REOPENED. New grounds of rejection are set forth below.
According to the paper filed March 4, 2025, claims 1-23 are pending for examination, with a July 10, 2019 priority date under 35 U.S.C. §119(e).
By way of the present Amendment, claims 11 and 19 are amended. Claims 1-10, 12, 14-15, and 18 were previously canceled. No claims are added.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. §112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. §112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 11, 13, and 16-17 are rejected under 35 U.S.C. §112(a) or 35 U.S.C. §112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. §112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
In claim 11, the newly amended feature of “using the artificial intelligence to learn to read the user more effectively” is not described in the Specification of the present invention publication US2021/0011614. What is the claimed “artificial intelligence”? Paragraph [0143] of said publication spells out “[t]he system may use artificial intelligence (AI) to help the system learn to read the user more effectively”, which is almost identical to the claim recitation but provides no further description of the “artificial intelligence”.
The following is a quotation of 35 U.S.C. §112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. §112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 11, 13, and 16-17 are rejected under 35 U.S.C. §112(b) or 35 U.S.C. §112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. §112, the applicant), regards as the invention.
The term “to read the user more effectively” in claim 11 is a relative term which renders the claim indefinite. The term “more effectively” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Because no description is given for the recited “artificial intelligence”, said feature is construed in the present Office action, until further clarification is provided, as a “machine learning” algorithm utilized in the present invention to help read and analyze user inputs and determine the user’s mood.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §102 and §103 (or as subject to pre-AIA 35 U.S.C. §102 and §103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. §103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. §102(b)(2)(C) for any potential 35 U.S.C. §102(a)(2) prior art against the later invention.
Claims 11, 13, and 16 are rejected under 35 U.S.C. §103 as being unpatentable over Shevchenko et al. (US 10,594,757), hereinafter Shevchenko, and further in view of French et al. (US 2012/0089705), hereinafter French, Woiceshyn et al. (US 2020/0302310), hereinafter Woiceshyn, and Sivadas (US 2012/0011477), hereinafter Sivadas.
Claim 11
“identifying a first mood of a plurality of moods of a user of the computing experience on a user computing device based on image and biometric data of the user using a camera of the user computing device to capture an image of the user” Shevchenko col.57 lines 38-43 teaches “[t]he AIA may detect a user’s emotional or physiological state to correlate with their current writing style, such as through biometric data from the user (e.g., from a wearable device), visual indicators (e.g., from facial indicators analyzed from a user-facing camera, such as mounted on a user’s laptop or smartphone)”, and “identifying psycho-emotional state (e.g., emotions, mood, stress level, and the like)” is spelled out in Shevchenko col.74 lines 18-19;
“monitoring user metrics including typing speed and response time of the user, and using artificial intelligence to compare the captured image and user metrics of the user to a database of previously captured images and user metrics” French [0046] teaches application performance metrics that include a combination of the application response time, which inherently teaches the typing speed; and Shevchenko col.57 lines 38-43 also teaches biometric data and voice indicators whose measurements and changes may be compared to an emotional profile, such as one stored in the user’s profile, to determine whether they are adversely affecting the user’s communications;
“using the artificial intelligence to learn to read the user more effectively” Woiceshyn [0050] teaches that the action generation module analyzes audio characteristics using machine-learning algorithms to determine the user’s mood state;
“using metrics generated by the user when confirming or correcting an identified first mood and previously captured images, the user metrics providing calibration information to the system, and thereby automatically identifying the first mood of the user” Woiceshyn [0017][0027] teaches collecting environment data that provides metrics about a surrounding environment, and an action generation module that can detect a change in user mood, such as by comparing the current user state with the user history data and determining that a user mood indicated by the current user state has changed from a prior user mood indicated by the user history data;
“automatically identifying modification to the first computing experience of the user to change the first computing experience to the second computing experience; the modification comprising one or more modifications in the color, audio, pace, and performance of the first computing experience on the user computer device to achieve the second computing experience” Sivadas [0004][0005]-[0007] teaches determining an emotional or physical condition of a user of a device, and changing either a setting of a user interface of the device or information presented through the user interface depending on the detected emotional or physical condition;
“the user computing device automatically modifying the color, audio, pace, and performance of the first computing experience on the user computing device of the user based in a series of steps of the identified modifications to achieve the second computing experience and thereby change the user to a second mood that is selected by the user” Woiceshyn [0032][0068] teaches modifying the audio level based on a change in user mood; the server computing device can communicate the action suggestion from the network system to the computing device, where the action suggestion can be displayed for the user in a device user interface; when the network system detects a change in user mood, it generates action suggestions and communicates them to the user of the computing device; the color change is inherently taught in the user interface change.
Shevchenko, French, Woiceshyn, and Sivadas disclose analogous art. However, Shevchenko does not spell out the “user metrics including typing speed”, “artificial intelligence”, and “color changing based on user’s mood” features as recited above. These features are taught in French, Woiceshyn, and Sivadas, respectively. Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate said features of French, Woiceshyn, and Sivadas into Shevchenko to enhance its user-mood modification functions.
Claim 13
“wherein the first mood of the user is identified by having the user select a setting on an on-screen display”
Shevchenko col.71 line 66 to col.72 line 4 teaches “[t]he AIA can then present the transformed version to the reader based on the preferences. … user preferences may enable the selection of other functions …, be able to highlight fragments and react with an emoji or provide a comment”.
Claim 16
“wherein the computing experience comprises a search” Shevchenko col.60 line 14 teaches “web search”.
Claim 17 is rejected under 35 U.S.C. §103 as being unpatentable over Shevchenko et al. (US 10,594,757), hereinafter Shevchenko, in view of French et al. (US 2012/0089705), hereinafter French, Woiceshyn et al. (US 2020/0302310), hereinafter Woiceshyn, and Sivadas (US 2012/0011477), hereinafter Sivadas, and further in view of Cramer (US 2015/0081691), hereinafter Cramer.
Claim 17
“wherein the modification to the computing experience comprises modifying search results of the search” Cramer claim 20 recites “for dynamically modifying search results… compiling information to infer user inter based on analyzing the content of at least one object the user selects or skips; before display of objects beyond said first portion of the search result objects, re-ranking a second portion of the set of objects based on said user inferred intent”.
Shevchenko, French, Woiceshyn, Sivadas, and Cramer disclose analogous art. However, Shevchenko does not spell out the “modifying search results based on user’s mood” feature as recited above. Said feature is taught in Cramer. Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate said feature of Cramer into Shevchenko to enhance its function of modifying the search results based on the user’s mood.
Claims 19-21 and 23 are rejected under 35 U.S.C. §103 as being unpatentable over Shevchenko et al. (US 10,594,757), hereinafter Shevchenko, in view of French et al. (US 2012/0089705), hereinafter French, Woiceshyn et al. (US 2020/0302310), hereinafter Woiceshyn, and Sivadas (US 2012/0011477), hereinafter Sivadas, and further in view of Bill (US 2006/0143647), hereinafter Bill.
Claim 19
“informing the user through a confirmation interface of the identified first mood and requesting the user to confirm that the first mood is accurate” Bill [0011] teaches prompting users to confirm that a mood determined by evaluating an image relates to an actual mood;
“providing a manual mood selection interface to allow the user to select the second mood, which is different from the first mood” Woiceshyn [0050] teaches receiving the input and implementing an action generation module to determine a current user state;
“providing the confirmation interface to the user to confirm that the user is in the second mood” Bill [0011] teaches prompting users to confirm that a mood determined by evaluating an image relates to an actual mood. Claim 19 is also rejected for the rationale given for claim 11.
Shevchenko, French, Woiceshyn, Sivadas, and Bill disclose analogous art. However, Shevchenko does not spell out the “confirmation on user’s mood” feature as recited above. Said feature is taught in Bill. Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate said feature of Bill into Shevchenko to enhance its mood-based information-presentation functions.
Claim 20
“identifying a series of steps to change the first color to the second color, the first audio to the second audio and the first pace to the second pace” Woiceshyn [0068] teaches modifying the audio level based on a change in user mood, French [0049] teaches an audio emotional-state detection system, and the pace change is also taught in French [0046].
Claim 21
“wherein the user can indicate their mood through the confirmation interface without the step of automatically identifying the first mood” Bill [0011] teaches prompting users to confirm that a mood determined by evaluating an image relates to an actual mood.
Claim 23
Claim 23 is rejected for a rationale similar to that given for claim 11.
Claim 22 is rejected under 35 U.S.C. §103 as being unpatentable over Shevchenko et al. (US 10,594,757), hereinafter Shevchenko, in view of French et al. (US 2012/0089705), hereinafter French, Woiceshyn et al. (US 2020/0302310), hereinafter Woiceshyn, and Sivadas (US 2012/0011477), hereinafter Sivadas, and further in view of Bill (US 2006/0143647), hereinafter Bill, and Cramer (US 2015/0081691), hereinafter Cramer.
Claim 22
“the user conducting a search and modifying search results based on the second mood of the user, wherein the search results are different from a search during the first mood” Cramer claim 20 recites “for dynamically modifying search results… compiling information to infer user inter based on analyzing the content of at least one object the user selects or skips; before display of objects beyond said first portion of the search result objects, re-ranking a second portion of the set of objects based on said user inferred intent”.
Shevchenko, French, Woiceshyn, Sivadas, Bill, and Cramer disclose analogous art. However, Shevchenko does not spell out the “modifying search results based on user’s mood” feature as recited above. Said feature is taught in Cramer. Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate said feature of Cramer into Shevchenko to enhance its function of modifying the search results based on the user’s mood.
Response to Arguments
Applicant's arguments filed March 4, 2025 have been fully considered but they are not persuasive.
Applicant argues the “artificial intelligence to learn to read the user more effectively” feature; said feature is construed and cited as a “machine learning” algorithm in the present Office action, in light of the rejections under 35 U.S.C. §112(a).
Artificial intelligence represents a broad spectrum of research fields. The claimed “AI” is indefinite because no description is given of, for example, machine learning models or deep learning algorithms. Neither a description in the Specification of the present invention nor an explanation by the applicant in responses to the Office has been provided. Accordingly, applicant’s arguments regarding the 35 U.S.C. §112(a) and 35 U.S.C. §112(b) rejections are not persuasive.
Further, applicant argues that “the Examiner has not cited any prior art reference for the element of color in this claim element. Therefore, the combination does not render the claim unpatentable.” Said argument is not persuasive because various interface displays are disclosed in the cited references, and features such as changing font and color are inherently disclosed; user interface changes also inherently indicate font and color changes. Nevertheless, a newly cited reference, Sivadas, is applied in the present Office action, wherein changing the user interface setting based on the user’s mood is spelled out.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUAY HO, whose telephone number is (571) 272-6088; the RightFax number is (571) 273-6088. The examiner can normally be reached Monday to Friday, 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes, can be reached at 571-270-6001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Ruay Ho/Primary Patent Examiner, Art Unit 2175