Prosecution Insights
Last updated: April 19, 2026
Application No. 18/986,539

System and Method for Analyzing and Predicting Emotion Reaction

Final Rejection: §101, §103, Double Patenting

Filed: Dec 18, 2024
Examiner: ANSARI, AZAM A
Art Unit: 3621
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Emm Innovations LLC
OA Round: 2 (Final)

Grant Probability: 48% (Moderate)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 98%

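The headline figures above can be cross-checked against one another. A minimal sketch, assuming (this is an assumption, not something the dashboard documents) that "interview lift" is simply the with-interview grant probability minus the baseline, in percentage points:

```python
# Cross-check of the dashboard's headline figures. All numbers are the
# displayed values; the lift definition (with-interview minus baseline
# grant probability, in percentage points) is an assumption.
def implied_interview_lift(baseline_pct: float, with_interview_pct: float) -> float:
    """Interview lift in percentage points under the difference assumption."""
    return with_interview_pct - baseline_pct

baseline = 48.0        # "Grant Probability"
with_interview = 98.0  # "With Interview"
lift = implied_interview_lift(baseline, with_interview)
print(lift)  # 50.0
```

Under that reading, the 50-point gap here and the "+49.7%" lift reported in the Examiner Intelligence section differ only by rounding in the displayed figures.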
Examiner Intelligence

Grants 48% of resolved cases.

Career Allow Rate: 48% (162 granted / 338 resolved; -4.1% vs TC avg)
Interview Lift: +49.7% (strong; resolved cases with vs. without an interview)
Avg Prosecution: 3y 8m (typical timeline; 38 currently pending)
Total Applications: 376 (career history, across all art units)
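The career figures can be reproduced from the raw counts. A small sketch, assuming "Career Allow Rate" is simply granted divided by resolved, and that the delta convention is examiner rate minus Tech Center average in percentage points (both assumptions, not documented by the dashboard):

```python
# Reproducing the career statistics from the raw counts shown above.
# The delta convention (examiner rate minus TC average, in percentage
# points) is an assumption about how "-4.1% vs TC avg" is computed.
def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate_pct(162, 338)   # about 47.9%, displayed as 48%
delta_vs_tc = -4.1                # "-4.1% vs TC avg"
tc_average = rate - delta_vs_tc   # implies a TC average of roughly 52%
```

162/338 rounds to the displayed 48%, and backing out the delta puts the implied Tech Center average near 52% for this overall metric.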

Statute-Specific Performance

§101: 34.2% (-5.8% vs TC avg)
§103: 38.9% (-1.1% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 338 resolved cases
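The four per-statute deltas can be inverted to recover the Tech Center average that the chart's black line represents. A small sketch, assuming each "vs TC avg" figure is the examiner's rate minus the TC average in percentage points (an assumption about the dashboard's arithmetic); notably, all four statutes back out to the same 40.0% estimate:

```python
# Per-statute rates and deltas as displayed on the dashboard.
# The back-out formula (tc_avg = rate - delta) is an assumption
# about how the "vs TC avg" figures are computed.
stats = {
    "101": (34.2, -5.8),
    "103": (38.9, -1.1),
    "102": (8.1, -31.9),
    "112": (9.2, -30.8),
}

def implied_tc_average(rate: float, delta: float) -> float:
    """Tech Center average implied by a rate and its delta, to 0.1%."""
    return round(rate - delta, 1)

tc_estimates = {k: implied_tc_average(r, d) for k, (r, d) in stats.items()}
print(tc_estimates)  # every statute backs out to 40.0
```

That consistency suggests the black line is a single 40.0% Tech Center estimate applied across all four statutes rather than a per-statute benchmark.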

Office Action

§101 §103 §DP
DETAILED ACTION

Response to Amendment

This action is in response to the amendment filed on 02/11/2026. Claim 1 has been amended. Claims 1-8 are pending and currently under consideration for patentability.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Inventorship

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-8 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-17 of U.S. Patent No. 11,068,926. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims in U.S. Patent No. 11,068,926 recite the entirety of the limitations of claims 1-8 of the instant application.
For example, in the instant application, independent claims 1 and 8 are anticipated by claim 1 of U.S. Patent No. 11,068,926 because claim 1 of U.S. Patent No. 11,068,926 recites additional features such as “clustering content item by identifying correlations of content item characteristics in relation to user emotion reaction according to the said emotion statistics analysis; performing NLP or sentiment analysis of user emotion reaction; and estimating optimal content items publication exposure time period which determines how much time to keep each content item exposed to the users, based on emotion responses analysis in relation to content item characteristics, wherein the content items management is based on said estimation, wherein the exposure time period and publication time determine automatic scheduling of publishing of each content item including start and end time of publishing each content item which is optimized based on the content type and sentiment analysis of content and emotion responses of the users,” whereas claims 1 and 8 of the instant application do not recite these features and are essentially broader than claim 1 of U.S. Patent No. 11,068,926. Therefore, claim 1 of U.S. Patent No. 11,068,926 is in essence a “species” of the generic invention of instant application claims 1 and 8. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Claims 2-7 (dependent on claim 1) do not cure the deficiencies of the independent claims. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without significantly more.

Step 1: In the test for patent subject matter eligibility, claims 1-8 satisfy Step 1 (see the 2019 Revised Patent Subject Matter Eligibility Guidance), as they are directed to a process, machine, manufacture, or composition of matter. Claims 1-8 recite a method. When assessed under Step 2A, Prong I, they are found to be directed towards an abstract idea. The rationale for this finding is explained below.

With respect to Claims 1-7:

Step 2A, Prong I: Under Step 2A, Prong I, independent claims 1 and 8 are directed to an abstract idea without significantly more, as they each recite a judicial exception. Claim 1 recites limitations directed to the abstract idea, including: displaying a plurality of selectable emotion icons, each of the plurality of emotion icons corresponding to an emotion reaction associated with a given first content item displayed simultaneously with the plurality of emotion icons; receiving indications of a plurality of user selections from a plurality of users, wherein each user is associated with a user profile, and wherein each user selection comprises a selection of at least one of the emotion icons corresponding to an emotion reaction associated with the displayed first content item; processing the plurality of user selections of the emotion icons, wherein processing includes identifying time, user identifier, and user geographic location for each of the plurality of user selections of emotion icons; determining an amount of use of the plurality of emotion icons that tracks a frequency with which the plurality of emotion icons have been selected over a period of time; and creating a visual presentation of emotion reactions using at least the determined amount of use of each of the plurality of emotion icons, the visual presentation comprising at least a visual representation of statistical data or patterns regarding the amount of use of the plurality of emotion icons. These further limitations are not seen as any more than the judicial exception.

Claim 1 recites additional limitations including: on a graphical user interface (GUI) of a computer system; via the GUI; stored in memory; using a processor operating software; and “stores the determined amount of use in memory”. The claims are considered to be an abstract idea under certain methods of organizing human activity because the claims are directed to commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations) and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), such as creating a visual presentation of emotion reactions based on received user selections of emotion icons corresponding to emotion reactions; in other words, receiving data, analyzing the data, and presenting results according to that data. The claims are also considered to be an abstract idea under mental processes because the claims are directed to concepts performed in the human mind (including an observation, evaluation, judgment, opinion), such as receiving data (i.e., user selections of emotion icons corresponding to emotion reactions), processing the data (i.e., identifying time/location and an amount of use for each emotion icon), and presenting results according to the data (i.e., creating a visual presentation of emotion reactions). Therefore, under Step 2A, Prong I, claim 1 is directed towards an abstract idea.

Step 2A, Prong II: Step 2A, Prong II determines whether any claim recites additional elements that integrate the judicial exception (abstract idea) into a practical application.
Claim 1 recites additional limitations including: on a graphical user interface (GUI) of a computer system; via the GUI; stored in memory; using a processor operating software; and “stores the determined amount of use in memory”. The limitations reciting “on a graphical user interface (GUI) of a computer system; via the GUI; stored in memory; and using a processor operating software” are seen as adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f). Accordingly, alone and in combination, these additional elements are seen as using a computer as a tool to perform an abstract idea, adding insignificant extra-solution activity to the judicial exception. They do no more than link the judicial exception to a particular technological environment or field of use (i.e., GUI/memory/processors) and therefore do not integrate the abstract idea into a practical application. Although such additional elements limit the use of the abstract idea, the courts have explained that this type of limitation merely confines the use of the abstract idea to a particular technological environment and fails to add an inventive concept to the claims (see Affinity Labs of Texas, LLC v. DIRECTV, LLC). Under Step 2A, Prong II, this claim remains directed towards an abstract idea.

Step 2B: Claim 1 recites additional limitations including: on a graphical user interface (GUI) of a computer system; via the GUI; stored in memory; using a processor operating software; and “stores the determined amount of use in memory”.
The limitations reciting “on a graphical user interface (GUI) of a computer system; via the GUI; stored in memory; and using a processor operating software” do not integrate the judicial exception (abstract idea) into a practical application for the reasons given in the Step 2A, Prong II analysis. Claim 1 also recites the additional limitation “stores the determined amount of use in memory”. Merely storing data (i.e., the determined amount of use) in memory is seen as adding insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g). The courts have recognized that storing and retrieving information in memory is a well-understood, routine, and conventional computer function (see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). Claim 1 does not include additional elements, or a combination of elements, that amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements listed amount to no more than mere instructions to apply an exception using a generic computer component. In addition, the applicant’s specification describes a “general purpose computer” (¶ [0075]) for implementing the GUI/memory/processor, which does not amount to significantly more than the abstract idea itself and is not enough to transform an abstract idea into eligible subject matter. Furthermore, there is no improvement in the functioning of the computer or a technological field, and there is no transformation of subject matter into a different state. Under Step 2B of the test for patent subject matter eligibility, this claim is not patent eligible.

Dependent claims 2-7 further recite the method of claim 1. Dependent claims 2-7, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. Under Step 2A, Prong I, these claims only further narrow the abstract idea set forth in claim 1. For example, claims 2-7 describe limitations for creating a visual presentation of emotion reactions based on received user selections of emotion icons corresponding to emotion reactions, which only further narrows the scope of the abstract idea recited in the independent claims. Under Step 2A, Prong II, dependent claims 2-7 introduce no additional elements; thus, they do not present integration into a practical application, nor do they amount to significantly more. Under Step 2B, the dependent claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. Additionally, there is no improvement in the functioning of the computer or a technological field, and there is no transformation of subject matter into a different state. As discussed above with respect to integration of the abstract idea into a practical application, the dependent claims do not provide any additional elements that would amount to significantly more than the judicial exception. Under Step 2B, these claims are not patent eligible.

With respect to Claim 8:

Step 2A, Prong I: Under Step 2A, Prong I, independent claims 1 and 8 are directed to an abstract idea without significantly more, as they each recite a judicial exception.
Claim 8 recites limitations directed to the abstract idea, including: receiving a plurality of emotion reactions originated by different users relating to a content item presented to the users through time, wherein the receiving of the user emotion reactions is achieved by monitoring user behavior or by receiving feedback from the user, who can select one or more emotion icons, wherein each user is associated with a personal profile, the personal profile comprising information identifying the user; analyzing statistics of reactions of users in relation to characteristics of the content item; updating a personal profile associated with each user based on the received emotion reactions with respect to the content item; identifying content item characteristics according to the statistics of reactions; and managing content item publication, including by at least one of determining a publication schedule for the content item throughout a periodic time based on the emotion responses of a plurality of users or the identified characteristics, and determining exposure time of the content item on the communication network based on the emotion responses of the plurality of users or the identified characteristics. These further limitations are not seen as any more than the judicial exception. Claim 8 recites additional limitations including: through a graphical user interface; within the communication network; on the communication network.

The claims are considered to be an abstract idea under certain methods of organizing human activity because the claims are directed to commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations) and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), such as managing content item publication by determining a publication schedule for the content item throughout a periodic time based on the emotion responses of a plurality of users or the identified characteristics, and determining exposure time of the content item based on the emotion responses of the plurality of users or the identified characteristics. The claims are also considered to be an abstract idea under mental processes because the claims are directed to concepts performed in the human mind (including an observation, evaluation, judgment, opinion), such as receiving data (i.e., user selections of emotion icons corresponding to emotion reactions), analyzing data (i.e., statistics of reactions of users with respect to their selection of emotion icons and identifying content item characteristics), and managing data (i.e., content item publication with respect to schedule time and exposure time). Therefore, under Step 2A, Prong I, claim 8 is directed towards an abstract idea.

Step 2A, Prong II: Step 2A, Prong II determines whether any claim recites additional elements that integrate the judicial exception (abstract idea) into a practical application. Claim 8 recites additional limitations including: through a graphical user interface; within the communication network; on the communication network. The limitations reciting “through a graphical user interface; within the communication network; on the communication network” are seen as adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f). Accordingly, alone and in combination, these additional elements are seen as using a computer as a tool to perform an abstract idea, adding insignificant extra-solution activity to the judicial exception. They do no more than link the judicial exception to a particular technological environment or field of use (i.e., a GUI and a communication network) and therefore do not integrate the abstract idea into a practical application. Although such additional elements limit the use of the abstract idea, the courts have explained that this type of limitation merely confines the use of the abstract idea to a particular technological environment and fails to add an inventive concept to the claims (see Affinity Labs of Texas, LLC v. DIRECTV, LLC). Under Step 2A, Prong II, this claim remains directed towards an abstract idea.

Step 2B: Claim 8 recites additional limitations including: through a graphical user interface; within the communication network; on the communication network. The limitations reciting “through a graphical user interface; within the communication network; on the communication network” do not integrate the judicial exception (abstract idea) into a practical application for the reasons given in the Step 2A, Prong II analysis. Claim 8 does not include additional elements, or a combination of elements, that amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements listed amount to no more than mere instructions to apply an exception using a generic computer component. In addition, the applicant’s specification describes a “general purpose computer” (¶ [0075]) for implementing the GUI and memory, which does not amount to significantly more than the abstract idea itself and is not enough to transform an abstract idea into eligible subject matter. Furthermore, there is no improvement in the functioning of the computer or a technological field, and there is no transformation of subject matter into a different state. Under Step 2B of the test for patent subject matter eligibility, this claim is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication 2014/0108309 to Frank in view of U.S. Publication 2014/0195610 to Ken.

With respect to Claim 1:

Frank teaches: A method comprising the steps of: displaying a plurality of selectable emotion icons on a graphical user interface (GUI) of a computer system, each of the plurality of emotion icons corresponding to an emotion reaction associated with a given first content item displayed simultaneously with the plurality of emotion icons (i.e.
the user is provided with a voting mechanism comprising emotion icons via a GUI, wherein the emotion icons or votes/likes correspond to an emotion reaction associated with content during the user’s interaction with the content) (Frank: ¶ [0035] “There are various types of voting mechanisms a user 114 may utilize when casting a vote regarding a segment of content consumed by the user 114. In one example, the user 114 may express that he/she likes or dislikes content, e.g., that allows the user to push "like" or "dislike" buttons, up-vote or down-vote buttons, and/or a "+1" button. In another example, the user 114 may cast a vote by providing an explicit rating (e.g., entering a star rating such as on a scale from 1 to 5 stars). In yet another example, the user may cast a vote via a voting mechanism that allows the user to rank or order content items (e.g., arrange the items from most to least liked).” Furthermore, as cited in ¶ [0039] “In one embodiment, a label, such as the label 610, 610a, and/or 610b, may be indicative of an emotional response, likely felt by the user 114, which is related to a vote. For example, if the vote is positive such as a "like", an upvote, a "+1", or high rating (e.g., 5/5 stars), the label may represent a positive emotional response such as content, excitement, and/or happiness. If the vote is negative such as a "dislike", down-vote, low rating, and/or low ranking, the label may represent a negative emotional response, such as discontent, boredom, and/or uneasiness.”);

receiving, via the GUI, indications of a plurality of user selections from a plurality of users, wherein each user is associated with a user profile stored in memory (i.e. receive votes/likes corresponding to emotion reactions, wherein likes/votes are associated with a user’s social network profile stored in memory) (Frank: ¶ [0035] “There are various types of voting mechanisms a user 114 may utilize when casting a vote regarding a segment of content consumed by the user 114. In one example, the user 114 may express that he/she likes or dislikes content, e.g., that allows the user to push "like" or "dislike" buttons, up-vote or down-vote buttons, and/or a "+1" button. In another example, the user 114 may cast a vote by providing an explicit rating (e.g., entering a star rating such as on a scale from 1 to 5 stars). In yet another example, the user may cast a vote via a voting mechanism that allows the user to rank or order content items (e.g., arrange the items from most to least liked).” Furthermore, as cited in ¶ [0267] “The social network 602 may involve various environments. In some embodiments, the social network 602 is an environment, such as a website, application, and/or virtual world, which users may access in order to consume content. Optionally, users accessing the social network 602 may be considered members of the social network and/or assigned usernames, profiles, and/or avatars that represent their identity in the social network 602.” Furthermore, as cited in ¶ [0296] “In one embodiment, the sensor 456 is coupled to, and/or has access to, memory storage capable of buffering measurements for a duration before they are transmitted. For example, the memory may be sufficient to buffer measurements for a duration of 100 milliseconds, 1 second, 1 minute, 1 hour, and/or 1 day. Thus, upon receiving a request to transmit measurements taken during a period that has already passed, at least in part, the sensor may transmit measurements stored in the memory.”),

and wherein each user selection comprises a selection of at least one of the emotion icons corresponding to an emotion reaction associated with the displayed first content item (i.e.
each user selection/vote corresponds to an emotion reaction with respect to the displayed first content) (Frank: ¶ [0273] “In one embodiment, the voting mechanism 604 may include one or more of the following: a like voting mechanism, in which the user indicates a positive attitude towards a content item (e.g., by pushing an appropriate button); a dislike voting mechanism, in which the user indicates a negative attitude towards a content item (e.g., by pushing an appropriate button); a star rating mechanism, in which the user indicates how much he likes an item by giving indicating the number of start the user would like to assign to the item; a numerical rating mechanism, in which the user assigns an item with a score (typically the higher the score, the stronger the indication that the user liked the item); an up voting mechanism, in which the user indicates that he/she likes an item (e.g., by pressing an upward pointed arrow next to the item); a down voting mechanism, in which the user indicates that he/she dislikes an item (e.g., by pressing a downward pointed arrow next to the item); and a ranking mechanism, in which the user may change the order and/or rank of one or more items to reflect the order of preference the user has for the items being ranked.”);

processing the plurality of user selections of the emotion icons, wherein processing includes identifying time, user identifier, and user geographic location for each of the plurality of user selections of emotion icons (i.e. processing the user’s feedback to the content, or user selections of emotion icons, includes identifying the user’s duration of interaction with content or time, the user’s location during the interaction or user geographic location, and the user’s identity via a social network profile) (Frank: ¶ [0387] “In one embodiment, a gaze-based attention level of the user 114 to a segment is computed by providing one or more of the data described in the aforementioned examples (e.g., values related to direction and/or duration of gaze, pupil size), are provided to a function that computes a value representing the gaze-based attention level. For example, the function may be part of a machine learning predictor (e.g., neural net, decision tree, regression model). Optionally, computing the gaze-based attention level may rely on additional data extracted from sources other than eye tracking. In one example, values representing the environment are used to predict the value, such as the location (at home vs. in the street), the number of people in the room with the user (if alone it is easier to pay attention than when with company), and/or the physiological condition of the user (if the user is tired or drunk it is more difficult to pay attention).” Furthermore, as cited in ¶ [0288] “For example, a vote in which a user up-votes a segment of content expresses that the user likes the content. However, if other users who also indicated that they liked the same, and/or similar content, had increased heart-rate and increased skin conductivity that may be associated with being frightened, it may indicate that a more accurate label for the emotional response may be "frightened".” Furthermore, as cited in ¶¶ [0267]-[0268] “Optionally, users accessing the social network 602 may be considered members of the social network and/or assigned usernames, profiles, and/or avatars that represent their identity in the social network 602.
Additionally or alternatively, users of the social network 602 may communicate with each other in the social network 602…In some embodiments, the user 114 may post content on the social network 602. The posted content may be viewed by the user 114, a subset of users on the social network 602, such as acquaintances of the user on the social network (e.g., Facebook™ friends), members of the social network 602, and/or users that may not be members of the social network 602.” Furthermore, ¶ [0179] further indicates that the gaze analyzer is utilized for user’s voting selection.); determining an amount of use of the plurality of emotion icons using a processor operating software that tracks a frequency with which the plurality of emotion icons have been selected over a period of time and stores the determined amount of use in memory (i.e. determining a pattern of gaze which tracks the frequency in which the user has interacted with the content in a positive or negative manner over time and storing the measurements in memory) (Frank: ¶ [0385] “In yet another example, a gaze-based attention level of the user 114 to a segment of content may be computed, at least in part, based on a pattern of gaze direction of the user 114 during a certain duration. For example, if the user gazes away from the content many times, during the duration, that may indicate that there were distractions that made it difficult for the user 114 to pay attention to the segment. Thus, the gaze-based attention level of the user 114 to the segment may be inversely proportional to the number of times the user 114 changed the direction at which the user 114 gazed, e.g., looking and looking away from the content), and/or the frequency at which the user looked away from the content.” Furthermore, as cited in ¶ [0318] “Thus, in some cases, the window may end slightly after the vote is cast, however the majority of the window may correspond to a period before the vote is cast. 
Since a large part, if not all, of the window falls before the vote is cast, the measurements of affective response of the user 114 need to be stored in a memory ( e.g., the memory 508).”); and creating a visual presentation of emotion reactions using at least the determined amount of use of each of the plurality of emotion icons, […] (i.e. presenting statistics of measurement data via fixed or sliding windows) (Frank: ¶ [0363] “In yet another example, statistics are extracted from the measurement values, such as statistics of the minimum, maximum, and/or various moments of the distribution, such as the mean, variance, or skewness. Optionally, the statistics are computed for measurement data that includes time-series data, utilizing fixed or sliding windows.”). Frank does not explicitly disclose the visual presentation comprising at least a visual representation of statistical data or patterns regarding the amount of use of the plurality of emotion icons. However, Ken further discloses the visual presentation comprising at least a visual representation of statistical data or patterns regarding the amount of use of the plurality of emotion icons (i.e. present report of statistical data including feedback statistics of comments) (Ken: Fig. 28 and ¶ [0086] “FIG. 26 is an example of react comments statistics according to some embodiments of the invention. For each comment the user can view the reactions statistics according to it's classification.” Furthermore, as cited in ¶ [0088] “FIG. 28 is an example of statistical reports according to some embodiments of the invention. These statistical reports are intended for the system moderator.”). 
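The determining step mapped above for claim 1 (tracking the frequency with which each emotion icon is selected over a period of time, storing the counts, and summarizing them for a visual presentation) can be illustrated with a short sketch. This is only an editorial illustration of the claimed step under assumed names and data layout; it is not an implementation disclosed by Frank, Ken, or the application:

```python
from collections import Counter
from datetime import datetime, timedelta

def tally_icon_use(selections, window):
    """Count selections of each emotion icon within a trailing time window.

    `selections` is an iterable of (timestamp, user_id, icon) tuples and
    `window` is a timedelta; both are hypothetical, illustrative names.
    """
    latest = max(ts for ts, _, _ in selections)
    cutoff = latest - window
    # Only selections inside the window contribute to the tally.
    return Counter(icon for ts, _, icon in selections if ts >= cutoff)
```

A presentation layer could then render the resulting counts (for example, one bar per icon), which loosely parallels the fixed- or sliding-window statistics described in Frank ¶ [0363].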
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Ken’s visual presentation comprising at least a visual representation of statistical data or patterns regarding the amount of use of the plurality of emotion icons to Frank’s creating a visual presentation of emotion reactions using at least the determined amount of use of each of the plurality of emotion icons. One of ordinary skill in the art would have been motivated to do so in order to “provide a system for selecting and organizing multimedia content within network page, related to reactions of at least one user.” (Ken: ¶ [0011]). With respect to Claim 2: Frank teaches: The method of claim 1, wherein creating a visual presentation comprises displaying a plurality of visual elements associated with the plurality of selected emotion icons (i.e. displaying star visual elements associated with emotion icons) (Frank: ¶ [0035] “There are various types of voting mechanisms a user 114 may utilize when casting a vote regarding a segment of content consumed by the user 114. In one example, the user 114 may express that he/she likes or dislikes content, e.g., that allows the user to push "like" or "dislike" buttons, up-vote or down-vote buttons, and/or a"+ 1" button. In another example, the user 114 may cast a vote by providing an explicit rating (e.g., entering a star rating such as on a scale from 1 to 5 stars). In yet another example, the user may cast a vote via a voting mechanism that allows the user to rank or order content items ( e.g., arrange the items from most to least liked).” Furthermore, as cited in ¶ [0039] “In one embodiment, a label, such as the label 610, 610a, and/or 610b, may be indicative of an emotional response, likely felt by the user 114, which is related to a vote. 
For example, if the vote is positive such as a "like", an upvote, a "+1", or high rating (e.g., 5/5 stars), the label may represent a positive emotional response such as content, excitement, and/or happiness. If the vote is negative such as a "dislike", down-vote, low rating, and/or low ranking, the label may represent a negative emotional response, such as discontent, boredom, and/or uneasiness.”). With respect to Claim 3: Frank teaches: The method of claim 2, wherein creating a visual presentation comprises displaying a numerical value adjacent to the plurality of visual elements associated with the plurality of selected emotion icons (i.e. displaying a numerical value corresponding to emotion icon) (Frank: ¶ [0035] “There are various types of voting mechanisms a user 114 may utilize when casting a vote regarding a segment of content consumed by the user 114. In one example, the user 114 may express that he/she likes or dislikes content, e.g., that allows the user to push "like" or "dislike" buttons, up-vote or down-vote buttons, and/or a"+ 1" button. In another example, the user 114 may cast a vote by providing an explicit rating (e.g., entering a star rating such as on a scale from 1 to 5 stars). In yet another example, the user may cast a vote via a voting mechanism that allows the user to rank or order content items ( e.g., arrange the items from most to least liked).” Furthermore, as cited in ¶ [0039] “In one embodiment, a label, such as the label 610, 610a, and/or 610b, may be indicative of an emotional response, likely felt by the user 114, which is related to a vote. For example, if the vote is positive such as a "like", an upvote, a "+1", or high rating (e.g., 5/5 stars), the label may represent a positive emotional response such as content, excitement, and/or happiness.
If the vote is negative such as a "dislike", down-vote, low rating, and/or low ranking, the label may represent a negative emotional response, such as discontent, boredom, and/or uneasiness.”). With respect to Claim 4: Frank does not explicitly disclose the method of claim 2, wherein creating a visual presentation comprises displaying a first visual element associated with a first emotion icon having a greater determined amount of use in a left position, and displaying a second visual element associated with a second emotion icon having a lesser determined amount of use than the first emotion icon in a position to the right of the first visual element. However, Ken further discloses displaying a first visual element associated with a first emotion icon having a greater determined amount of use in a left position, and displaying a second visual element associated with a second emotion icon having a lesser determined amount of use than the first emotion icon in a position to the right of the first visual element (i.e. displaying a comments/ratings associated with emoticon, wherein the more popular comment/rating is displayed in a more prominent position, such as top or left position) (Ken: ¶ [0058] “Based on this analysis is preformed customization of content objects and comments which include (216a) (i) selection an filtering related content including content objects ( such as article, image, video), comments, advertising (218A), the content may be internal of the relevant web page or external from other websites, the selection and filtering is correlated user emotional attitude as reflected by his definitions and reaction (ii) determining design layout by defining organization and relative location of each visual comments presentation(220A). 
For example a smiley icon or emoticon may be next to the user text or the size of the callout may change according viewing statistic, or color according to most user emotional reaction, the layout may reflect popularity of a comment or geographic association between the respective users.” Furthermore, as cited in ¶ [0099] “According to the some embodiments of the present invention each user can accumulate points for each of his comments or actions when using the commenting system. Points can be accumulated thanks to actions that are being done by the user itself (like: writing comments, responding to other users comments, giving points or voting in the system mechanism (Funny, maddening etc) to other users comments) or by other users that writes comment on his comments or recommend or act on each of his comments. Points in this service can differentiate between regular services and premium services, while premium services may have new design and look, better representations locations (placing the comment up front) or the like. Higher points may benefit the user with higher permissions for the user.”). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Ken’s displaying a first visual element associated with a first emotion icon having a greater determined amount of use in a left position, and displaying a second visual element associated with a second emotion icon having a lesser determined amount of use than the first emotion icon in a position to the right of the first visual element to Frank’s creating a visual presentation of emotion reactions using at least the determined amount of use of each of the plurality of emotion icons. One of ordinary skill in the art would have been motivated to do so in order to “provide a system for selecting and organizing multimedia content within network page, related to reactions of at least one user.” (Ken: ¶ [0011]). 
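Claim 4's positional limitation (the visual element for a more-used icon placed to the left of one for a less-used icon) reduces to a descending sort on the determined amounts of use. The following is a minimal editorial sketch under hypothetical names, not an implementation from Ken:

```python
def order_icons_for_display(use_counts):
    """Return icon names ordered most-selected first, so index 0 maps to
    the leftmost display position (illustrative assumption only)."""
    return sorted(use_counts, key=use_counts.get, reverse=True)
```

Rendering the sorted list left to right would place the icon with the greater determined amount of use in the left position, as the claim recites.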
With respect to Claim 5: Frank teaches: The method of claim 1 further comprising storing an indication of a user’s selection of a given emotion icon in a profile associated with the user (i.e. storing user’s likes/ratings via social media profile) (Frank: ¶ [0035] “There are various types of voting mechanisms a user 114 may utilize when casting a vote regarding a segment of content consumed by the user 114. In one example, the user 114 may express that he/she likes or dislikes content, e.g., that allows the user to push "like" or "dislike" buttons, up-vote or down-vote buttons, and/or a"+ 1" button. In another example, the user 114 may cast a vote by providing an explicit rating (e.g., entering a star rating such as on a scale from 1 to 5 stars). In yet another example, the user may cast a vote via a voting mechanism that allows the user to rank or order content items ( e.g., arrange the items from most to least liked).” Furthermore, as cited in ¶ [0267] “The social network 602 may involve various environments. In some embodiments, the social network 602 is an environment, such as a website, application, and/or virtual world, which users may access in order to consume content. Optionally, users accessing the social network 602 may be considered members of the social network and/or assigned usernames, profiles, and/or avatars that represent their identity in the social network 602.” Furthermore, as cited in ¶ [0296] “In one embodiment, the sensor 456 is coupled to, and/or has access to, memory storage capable of buffering measurements for a duration before they are transmitted. For example, the memory may be sufficient to buffer measurements for a duration of 100 milliseconds, 1 second, 1 minute, 1 hour, and/or 1 day. Thus, upon receiving a request to transmit measurements taken during a period that has already passed, at least in part, the sensor may transmit measurements stored in the memory.”). 
With respect to Claim 6: Frank teaches: The method of claim 5 further comprising selecting a second content item for display to the user based at least in part on the user’s selection of the emotion icon correlated with a positive emotion reaction for the first content item (i.e. selecting content based on user’s like or selection of emotion icon) (Frank: ¶ [0425] “The social network may also utilize the feedback provided by detected emotional response in order to train a more accurate predictor of emotional response to content (a content ERP) than could be trained based on the limited feedback explicitly expressed by the user ( e.g., via a "like" button). The more thoroughly trained content ERP may be utilized to select and/or adapt content for the user both from the social network, and from other sources such as selecting content from news sites, or adapting the behavior of characters in a virtual world visited by the user.”). With respect to Claim 7: Frank teaches: The method of claim 1 further comprising increasing at least one of a frequency with which the first content item is displayed, or a duration of time the first content item is displayed based on the plurality of user selections of the emotion icons (i.e. increasing window of voting for first content or duration of content is based on emotion icons/user interaction) (Frank: ¶ [0313] “This evaluation is performed to determine whether the duration is short enough so that the user likely felt a single dominant emotional response to the segment. Optionally, the vote analyzer is configured to receive a characteristic of the segment, and to estimate whether the user consumed the segment in a duration that is short enough based on the characteristic of the segment. For example, the characteristic may be indicative of statistics such as the length of the segment, volume of sound, and/or size of images. In another example, the characteristic may be indicative of the type and/or genre of the content. 
In yet another example, the characteristic may include a portion of the segment of content.” Furthermore, as cited in ¶ [0208] “In one embodiment, characteristics of the content that are indicative of a length and/or intensity of an emotional response of the user to the content may influence the selection of the length of a window. For example, the user may be consuming personal content to which the user has a strong emotional connection ( e.g., viewing old pictures of family on a tablet). Given that the user may be nostalgic, it is likely that the user may feel an intense emotional response, and possibly for a longer duration, compared to when the user views ordinary images; such as images of cats on a website-unless the user has very strong feeling towards cats, in that case this example may be reversed. Thus, in the case that the user has strong nostalgic feelings, the window selector may tend to select longer windows, such as when the user views ten year old family pictures, compared to the length of windows selected when the user views images of internet memes involving cats ( or vice versa).”). With respect to Claim 8: Frank teaches: A method for managing content item publication within a communication network, said method implemented by one or more processing devices operatively coupled to a non-transitory storage device, on which are stored modules of instruction code that when executed cause the one or more processing devices to perform the following steps (Frank: ¶¶ [0430]-[0432]): receiving a plurality of emotion reactions originated by different users relating to a content item through time presented to the users through a graphical user interface (i.e. user provides rating/emotion reaction to content item via interface) (Frank: ¶ [0035] “There are various types of voting mechanisms a user 114 may utilize when casting a vote regarding a segment of content consumed by the user 114. 
In one example, the user 114 may express that he/she likes or dislikes content, e.g., that allows the user to push "like" or "dislike" buttons, up-vote or down-vote buttons, and/or a"+ 1" button. In another example, the user 114 may cast a vote by providing an explicit rating (e.g., entering a star rating such as on a scale from 1 to 5 stars). In yet another example, the user may cast a vote via a voting mechanism that allows the user to rank or order content items ( e.g., arrange the items from most to least liked).” Furthermore, as cited in ¶ [0039] “In one embodiment, a label, such as the label 610, 610a, and/or 610b, may be indicative of an emotional response, likely felt by the user 114, which is related to a vote. For example, if the vote is positive such as a "like", an upvote, a "+1", or high rating (e.g., 5/5 stars), the label may represent a positive emotional response such as content, excitement, and/or happiness. If the vote is negative such as a "dislike", down-vote, low rating, and/or low ranking, the label may represent a negative emotional response, such as discontent, boredom, and/or uneasiness.”), wherein the receiving of the user emotion reactions is achieved by monitoring user behavior or by receiving feedback from the user who can select one or more emotion icons (i.e. emotion reactions are received either explicitly through user’s voting/rating or via sensors that monitor user behavior) (Frank: ¶¶ [0035] [0036] “In another example, the user 114 may cast a vote by providing an explicit rating (e.g., entering a star rating such as on a scale from 1 to 5 stars). In yet another example, the user may cast a vote via a voting mechanism that allows the user to rank or order content items (e.g., arrange the items from most to least liked). Optionally, a voting mechanism 604 comprises on or more of the aforementioned methods for voting. Optionally, the voting mechanism 604 belongs to the social network 602. 
Optionally, the voting mechanism 604 is offered substantially independently of whether the votes are used to trigger the sensor to acquire affective response measurements; for example, by measuring the user with the sensor. Optionally, the acquired measurements may be considered measurements of naturally expressed affective response since the user was not requested to supply them, rather, the affective response in this case is a natural physiological and/or behavioral product…In one embodiment, voting may involve a microphone and/or a camera that may analyze the users reactions to the content to detect explicit cues that may be considered votes. It is to be noted, that the user in this embodiment is aware that he/she is being monitored by a system that may interpret behavior as votes on content. Optionally, the system comprising a microphone and/or a camera are part of the voting mechanism 604.”), wherein for each user is associated a personal profile, the personal profile comprising information identifying the user within the communication network (i.e. each user is associated with a social network profile comprising username identifying the user within the communication network) (Frank: ¶ [0267] “The social network 602 may involve various environments. In some embodiments, the social network 602 is an environment, such as a website, application, and/or virtual world, which users may access in order to consume content. Optionally, users accessing the social network 602 may be considered members of the social network and/or assigned usernames, profiles, and/or avatars that represent their identity in the social network 602.”); analyzing statistics of reactions of users in relation to characteristics of the content item, […] (i.e. analyzing user’s ratings/likes in relation to content item) (Frank: ¶ [0424] “The user's interaction with the social network may be monitored by an interaction analyzer. 
The interaction may receive information that includes a description of aspects of the user's interaction with the social network. The description may enable the interaction analyzer to identify, in certain instances, an action that cause a deviation from a typical progression of presentation of content. In one example, the interaction analyzer may be a program that runs on servers controlled, at least in part, by the social network. In another example, the interaction analyzer may be a program that runs, at least in part, on hardware that belongs and/or is controlled by the user and/or runs on a cloud-based server.” Furthermore, as cited in ¶ [0267] [268] “The social network 602 may involve various environments. In some embodiments, the social network 602 is an environment, such as a website, application, and/or virtual world, which users may access in order to consume content. Optionally, users accessing the social network 602 may be considered members of the social network and/or assigned usernames, profiles, and/or avatars that represent their identity in the social network 602. Additionally or alternatively, users of the social network 602 may communicate with each other in the social network 602…In some embodiments, the user 114 may post content on the social network 602. The posted content may be viewed by the user 114, a subset of users on the social network 602, such as acquaintances of the user on the social network (e.g., Facebook™ friends), members of the social network 602, and/or users that may not be members of the social network 602. Additionally or alternatively, the user 114 may consume content on the social network 602, such as content posted by users of the social network 602, content made available by the operators of the social network 602, and/or content from an external source.”); identifying content item characteristics according to the statistics of reactions (i.e. 
identifying characteristics of content according to statistics of emotional response) (Frank: ¶ [0146] “The vote is provided via a voting mechanism 604 that belongs to the social network 602 ( e.g., the vote is a "like" or star rating for the segment of content). Additionally, the vote analyzer 682 is configured to receive a characteristic of the segment of content. The characteristic may be received from the voting mechanism 604 and/or the social network 602. In one example, the characteristic of the segment of content describes an attribute such as the duration the user consumed the segment of content, the type of content, and/or the expected emotional response of the user or other users to the segment of content.”); and managing content item publication on the communication network, including by at least one determining a publication schedule for the content item throughout a periodic time based on the emotion responses of a plurality of users or the identified characteristics, and determining exposure time of the content item on the communication network based on the emotion responses of the plurality of users or the identified characteristics (i.e. managing content based on user’s like or selection of emotion icon, wherein managing includes determining a duration time for the content and an exposure window to receive interactions for the content based on emotional responses) (Frank: ¶ [0425] “The social network may also utilize the feedback provided by detected emotional response in order to train a more accurate predictor of emotional response to content (a content ERP) than could be trained based on the limited feedback explicitly expressed by the user ( e.g., via a "like" button). 
The more thoroughly trained content ERP may be utilized to select and/or adapt content for the user both from the social network, and from other sources such as selecting content from news sites, or adapting the behavior of characters in a virtual world visited by the user.” Furthermore, as cited in ¶ [0313] “This evaluation is performed to determine whether the duration is short enough so that the user likely felt a single dominant emotional response to the segment. Optionally, the vote analyzer is configured to receive a characteristic of the segment, and to estimate whether the user consumed the segment in a duration that is short enough based on the characteristic of the segment. For example, the characteristic may be indicative of statistics such as the length of the segment, volume of sound, and/or size of images. In another example, the characteristic may be indicative of the type and/or genre of the content. In yet another example, the characteristic may include a portion of the segment of content.” Furthermore, as cited in ¶ [0208] “In one embodiment, characteristics of the content that are indicative of a length and/or intensity of an emotional response of the user to the content may influence the selection of the length of a window. For example, the user may be consuming personal content to which the user has a strong emotional connection ( e.g., viewing old pictures of family on a tablet). Given that the user may be nostalgic, it is likely that the user may feel an intense emotional response, and possibly for a longer duration, compared to when the user views ordinary images; such as images of cats on a website-unless the user has very strong feeling towards cats, in that case this example may be reversed. 
Thus, in the case that the user has strong nostalgic feelings, the window selector may tend to select longer windows, such as when the user views ten year old family pictures, compared to the length of windows selected when the user views images of internet memes involving cats ( or vice versa).”). Frank does not explicitly disclose updating a personal profile associated with each user based on the received emotion reactions with respect to the content item. However, Ken further discloses updating a personal profile associated with each user based on the received emotion reactions with respect to the content item (i.e. updating user’s profile with reaction characteristics/comments) (Ken: ¶ [0057] “All user actions and reactions are aggregated by the visual comments server (207A) for analyzing user feedback comments/messages characteristics based on reactions and/or commenting user definitions (208A). The analyzing include at least one of the following: calculating viewing statics and feedback statistics of comments and reactions (210A), analyzing feedback comments and/or reaction characteristics for updating users profiles (212A), NLP and sentiment analysis of comments text and analyzing user profile, specifically user emotions in relation to articles and comments (214) which may include emotion vector based comments and reaction analysis, gender based, geographic location, age, Etc.”). 
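The profile-updating feature attributed to Ken (aggregating each user's reactions into a personal profile) can be modeled as a small accumulator. The schema below is a hypothetical illustration for editorial purposes, not Ken's disclosed implementation; the reaction names are drawn from Ken's own examples ("Funny, maddening etc"):

```python
from collections import Counter

def update_profile(profile, content_id, reaction):
    """Record a reaction against a content item in the user's profile and
    keep a running per-reaction total (illustrative, assumed schema)."""
    profile.setdefault("reactions", {}).setdefault(content_id, []).append(reaction)
    profile.setdefault("totals", Counter())[reaction] += 1
    return profile
```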
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Ken’s updating a personal profile associated with each user based on the received emotion reactions with respect to the content item to Frank’s managing content item publication on the communication network, including by at least one determining a publication schedule for the content item throughout a periodic time based on the emotion responses of a plurality of users or the identified characteristics, and determining exposure time of the content item on the communication network based on the emotion responses of the plurality of users or the identified characteristics. One of ordinary skill in the art would have been motivated to do so in order to “provide a system for selecting and organizing multimedia content within network page, related to reactions of at least one user.” (Ken: ¶ [0011]). Response to Arguments Applicant’s arguments, see page 11 of the Remarks filed on 12/05/2025, with respect to the nonstatutory double patenting rejection of claims 1-21 have been considered but are not persuasive. The Applicant asserts “Applicant will address this rejection by filing a terminal disclaimer upon indication of otherwise allowable claims. Applicant respectfully requests that this rejection be held in abeyance until the claims are otherwise found allowable.” Therefore, the nonstatutory double patenting rejection of claims 1-8 is maintained. Applicant’s arguments, see pages 5-7 of the Remarks filed on 02/11/2026, with respect to the 35 U.S.C. § 101 rejection of claims 1-8 have been considered but are not persuasive: The Applicant asserts “Step 2A, Prong I: The claims are not directed to an abstract idea.
The amended claim 1 now requires processing that includes identifying all three of a time, user identifier, and user geographic location for each user selection of emotion icons. This is not a mental process that can be performed in the human mind. A human cannot practically identify user identifiers linked to stored user profiles, extract time stamps, and determine geographic locations for each of a plurality of user selections of emotion icons received from a plurality of users through a graphical user interface. The combination of these specific data processing requirements is inherently rooted in computer technology and goes well beyond what could be characterized as organizing human activity or mental processes.” The Examiner respectfully disagrees. A human is able to process user selections by identifying data such as time, user identifier, or user location. Furthermore, Claim 1 recites limitations directed to the abstract idea including displaying a plurality of selectable emotion icons, each of the plurality of emotion icons corresponding to an emotion reaction associated with a given first content item displayed simultaneously with the plurality of emotion icons; receiving indications of a plurality of user selections from a plurality of users, wherein each user is associated with a user profile, and wherein each user selection comprises a selection of at least one of the emotion icons corresponding to an emotion reaction associated with the displayed first content item; processing the plurality of user selections of the emotion icons, wherein processing includes identifying time, user identifier, and user geographic location for each of the plurality of user selections of emotion icons; determining an amount of use of the plurality of emotion icons that tracks a frequency with which the plurality of emotion icons have been selected over a period of time; and creating a visual presentation of emotion reactions using at least the determined amount of use of 
each of the plurality of emotion icons, the visual presentation comprising at least a visual representation of statistical data or patterns regarding the amount of use of the plurality of emotion icons. The Applicant also asserts “Step 2A, Prong II: Even assuming arguendo that the claims recite an abstract idea, the claims recite a specific and practical integration. The claims define a particular technological implementation: displaying emotion icons on a GUI simultaneously with content, receiving user selections through that GUI, processing those selections to extract time, user identifier, and geographic location for each selection, tracking frequency of use over a period of time, and creating a visual presentation of the resulting statistical data. This ordered combination of steps provides a specific improvement to the functioning of content management systems by enabling emotion-based analytics that would not be possible without the claimed computer-implemented process. The claims are analogous to the claims found eligible in DDR Holdings, LLC V. Hotels.com, L.P., 773 F.3d 1245 (Fed. Cir. 2014), where the court found eligibility because the claims addressed a challenge particular to the Internet and provided a solution rooted in computer technology.” The Examiner respectfully disagrees. Claim 1 recites additional limitations including on a graphical user interface (GUI) of a computer system; via the GUI; stored in memory; using a processor operating software; and “stores the determined amount of use in memory”. The limitations reciting – “on a graphical user interface (GUI) of a computer system; via the GUI; stored in memory; and using a processor operating software” are seen as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). 
Accordingly, alone, and in combination, these additional elements are seen as using a computer as a tool to perform an abstract idea, adding insignificant extra-solution activity to the judicial exception. They do no more than link the judicial exception to a particular technological environment or field of use, i.e. GUI/memory/processors, and therefore do not integrate the abstract idea into a practical application. Although the additional elements limit the use of the abstract idea, the courts have explained that this type of limitation merely confines the use of the abstract idea to a particular technological environment, which fails to add an inventive concept to the claims (see Affinity Labs of Texas, LLC v. DirecTV, LLC). The Applicant finally asserts “Step 2B: Furthermore, the claims recite significantly more than any alleged abstract idea. The specific combination of: (1) displaying a plurality of selectable emotion icons simultaneously with content on a GUI; (2) receiving user selections from multiple users, each associated with stored user profiles; (3) processing each selection to identify all three of time, user identifier, and geographic location; (4) tracking frequency of emotion icon selections over time via processor-operated software; and (5) creating a visual presentation of statistical data regarding emotion icon usage patterns - represents a specific, ordered combination that amounts to significantly more than merely collecting and analyzing data. The specification describes a particular technological architecture for emotion-based content analytics that goes beyond generic computer implementation.” The Examiner respectfully disagrees.
Claim 1 recites limitations directed to the abstract idea including displaying a plurality of selectable emotion icons, each of the plurality of emotion icons corresponding to an emotion reaction associated with a given first content item displayed simultaneously with the plurality of emotion icons; receiving indications of a plurality of user selections from a plurality of users, wherein each user is associated with a user profile, and wherein each user selection comprises a selection of at least one of the emotion icons corresponding to an emotion reaction associated with the displayed first content item; processing the plurality of user selections of the emotion icons, wherein processing includes identifying time, user identifier, and user geographic location for each of the plurality of user selections of emotion icons; determining an amount of use of the plurality of emotion icons that tracks a frequency with which the plurality of emotion icons have been selected over a period of time; and creating a visual presentation of emotion reactions using at least the determined amount of use of each of the plurality of emotion icons, the visual presentation comprising at least a visual representation of statistical data or patterns regarding the amount of use of the plurality of emotion icons. These further limitations are seen as no more than the judicial exception itself.
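For orientation, the ordered combination of steps recited in claim 1 (display emotion icons, receive selections, identify time/user identifier/geographic location for each selection, determine the amount of use, and create a visual presentation of the statistics) can be sketched in Python. This is purely an illustrative sketch of the recited data flow: every name and data value below (`EmotionSelection`, `determine_amount_of_use`, the sample selections) is invented here and appears in neither the application nor the cited art.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record of one user selection of an emotion icon; the three
# metadata fields mirror the claim's "time, user identifier, and user
# geographic location for each ... user selection".
@dataclass
class EmotionSelection:
    icon: str       # which emotion icon was selected
    time: float     # time of the selection
    user_id: str    # identifier linked to a stored user profile
    location: str   # user geographic location at selection time

def process_selections(selections):
    # Identify time, user identifier, and geographic location per selection.
    return [(s.time, s.user_id, s.location) for s in selections]

def determine_amount_of_use(selections):
    # Track the frequency with which each emotion icon has been selected.
    return Counter(s.icon for s in selections)

def visual_presentation(usage):
    # Minimal stand-in for a "visual representation of statistical data":
    # each icon's share of all selections, as a percentage string.
    total = sum(usage.values())
    return {icon: f"{100 * n // total}%" for icon, n in usage.items()}

selections = [
    EmotionSelection("joy", 1.0, "u1", "US"),
    EmotionSelection("joy", 2.0, "u2", "DE"),
    EmotionSelection("anger", 3.0, "u1", "US"),
    EmotionSelection("joy", 4.0, "u3", "FR"),
]
usage = determine_amount_of_use(selections)   # joy: 3, anger: 1
```

Nothing here speaks to eligibility either way; it only makes concrete why the Examiner characterizes the combination as receiving data, analyzing it, and presenting results.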
The claims are considered to be an abstract idea under certain methods of organizing human activity because the claims are directed to commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations) and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), such as creating a visual presentation of emotion reactions based on received users’ selections of emotion icons corresponding to emotion reactions or, in other words, receiving data, analyzing the data, and presenting results according to that data. The claims are also considered to be an abstract idea under mental processes because the claims are directed to concepts performed in the human mind (including an observation, evaluation, judgment, opinion), such as receiving data (i.e. user selections of emotion icons corresponding to emotion reactions), processing the data (i.e. identifying time/location and an amount of use for each emotion icon), and presenting results according to the data (i.e. creating a visual presentation of emotion reactions). Furthermore, Claim 1 recites additional limitations including on a graphical user interface (GUI) of a computer system; via the GUI; stored in memory; using a processor operating software; and “stores the determined amount of use in memory”. The limitations reciting “on a graphical user interface (GUI) of a computer system; via the GUI; stored in memory; and using a processor operating software” do not integrate the judicial exception (abstract idea) into a practical application for the reasons provided in the Step 2A, Prong II analysis above. Claim 1 also recites the additional limitation “stores the determined amount of use in memory”. Merely storing data (i.e.
determined amount of use) in memory is seen as adding insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g). The courts have recognized that “storing and retrieving information in memory” is a well-understood, routine, and conventional computer function (see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). Claim 1 does not include additional elements, or a combination of elements, that result in the claims amounting to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements listed amount to no more than mere instructions to apply an exception using a generic computer component. In addition, the Applicant’s specification describes a “general purpose computer,” ¶ [0075], for implementing the GUI/memory/processor, which does not amount to significantly more than the abstract idea itself and is not enough to transform an abstract idea into eligible subject matter. There is no improvement in the functioning of the computer or the technological field, and there is no transformation of subject matter into a different state. Therefore, the rejection(s) of claim(s) 1-8 under 35 U.S.C. § 101 is maintained above. Applicant’s arguments (see pages 7-10 of the Remarks filed on 02/11/2026) with respect to the 35 U.S.C. § 103 rejection(s) of claim(s) 1-8 over Frank in view of Ken have been considered but are not persuasive: The Applicant asserts “Frank does not teach this feature. Frank discloses generic voting mechanisms including like/dislike buttons, up-vote/down-vote buttons, star ratings on a scale of 1 to 5, and ranking mechanisms for ordering content items (¶ [0035]).
These are binary approval indicators or single-axis intensity scales - not a plurality of distinct emotion icons each corresponding to a qualitatively different emotion reaction. A like/dislike button allows a user to express positive or negative sentiment as a binary choice; a star rating allows the user to express an intensity of a single sentiment dimension. Neither provides a selection among multiple, qualitatively different emotion reactions. The Examiner relied on Frank's disclosure at ¶ [0039] where labels "may be indicative of an emotional response." However, these labels are system-generated outputs derived from the system's analysis of user voting behavior and sensor data - they are not selectable emotion icons displayed on a GUI for user input. The direction of information flow is fundamentally reversed: in the present claims, the user actively selects from among displayed emotion icons; in Frank, the system infers emotional labels from user behavior. This is a critical distinction that the Office Action does not address.” The Examiner respectfully disagrees. The Examiner notes that the Applicant agrees that the Frank reference teaches “like/dislike buttons, up-vote/down-vote buttons, star ratings on a scale of 1 to 5” voting mechanisms (see ¶ [0035]). The Examiner notes that a “like/dislike” or “up-vote/down-vote” or even ratings from 1 to 5 reads on “a plurality of distinct emotion icons each corresponding to a qualitatively different emotion reaction” because more than one option reads on a plurality of distinct options, and like/dislike or ratings from 1 to 5 read on a correspondence/indication of an emotional reaction. The Applicant also asserts “The Examiner mapped this limitation to Frank's gaze-based attention tracking and sensor measurements (¶ [0387]). However, Frank's processing relates to computing gaze-based attention levels from eye-tracking data - specifically, measuring gaze direction, duration, and pupil size to determine an attention level.
This is fundamentally different from identifying time, user identifier, and user geographic location for each discrete user selection of an emotion icon. Specifically, Frank fails to disclose the conjunctive requirement of identifying all three of: (1) a time for each user's selection of a specific emotion icon; (2) a user identifier for each such selection linked to a stored user profile; and (3) a user geographic location for each such selection. Frank's gaze tracking processes physiological sensor data to determine attention patterns - it does not identify and record the time, identity, and location metadata for each discrete selection of an emotion icon from a GUI. These are entirely different data types (physiological measurements vs. selection metadata) processed for entirely different purposes (attention level computation vs. emotion selection analytics).” The Examiner respectfully disagrees. Frank discloses identifying a user’s duration of interaction with content, which reads on time; the user’s location during the interaction, which reads on user geographic location; and the user’s identity via a social network profile, which reads on a user identifier linked to a stored user profile. Furthermore, ¶ [0179] further indicates how the gaze tracking process is utilized for a user’s voting selection of icons. The Applicant finally asserts “First, Frank and Ken operate on fundamentally different paradigms. Frank's core innovation is measuring implicit affective responses through physiological sensors (heart rate, skin conductivity, gaze tracking). Ken's system relates to explicit user commenting and reaction systems on content. Combining selected features from these disparate systems requires the use of the present claims as a blueprint, which constitutes impermissible hindsight reconstruction. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). Second, Ken's statistical reports (Fig.
28, ¶ [0088]) are moderator-facing administrative tools "intended for the system moderator." They are backend reports for analyzing comments and reactions, not user-facing visual presentations of aggregated emotion icon usage data as recited in the claims. The claimed visual presentation is for presenting emotion reaction patterns to users, which is qualitatively different from Ken's administrative reports. Third, the motivation to combine stated by the Examiner - to "provide a system for selecting and organizing multimedia content within network page, related to reactions of at least one user" (Ken: ¶ [0011]) - is impermissibly generic. This broad statement does not provide a specific, articulated reason why a person of ordinary skill would combine Ken's moderator-facing comment statistics with Frank's sensor-based emotional response system to arrive at the claimed invention. The motivation must be specific and not merely a broad platitude. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 418 (2007) (requiring an articulated reasoning with rational underpinning).” The Examiner respectfully disagrees. Frank also discloses an explicit user commenting and reaction system on content combined with measuring implicit affective responses through physiological sensors; Frank and Ken are therefore analogous art. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Ken’s visual presentation comprising at least a visual representation of statistical data or patterns regarding the amount of use of the plurality of emotion icons to Frank’s creating a visual presentation of emotion reactions using at least the determined amount of use of each of the plurality of emotion icons. One of ordinary skill in the art would have been motivated to do so in order to “provide a system for selecting and organizing multimedia content within network page, related to reactions of at least one user.” (Ken: ¶ [0011]).
Furthermore, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). The Examiner would also like to note that the claims make no distinction between presenting emotion reaction patterns to users vs. administrative reports, nor do they require that the visual presentations be user-facing. Therefore, the rejection(s) of claim(s) 1-8 under 35 U.S.C. § 103 is maintained above. Conclusion The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure. The following reference is cited to further show the state of the art: U.S. Publication 2014/0323817 to Kaliouby, which discloses the following: The mental state of an individual is obtained in order to generate an emotional profile for the individual. The individual's mental state is derived from an analysis of the individual's facial and physiological information. The emotional profiles of other individuals are correlated to the first individual for comparison. Various categories of emotional profiles are defined based upon the correlation. The emotional profile of the individual or group of individuals is rendered for display, used to provide feedback and to recommend activities for the individual, or to provide information about the individual. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Azam Ansari, whose telephone number is (571) 272-7047. The examiner can normally be reached from Monday to Friday between 8 AM and 4:30 PM. If any attempt to reach the examiner by telephone is unsuccessful, the examiner's supervisor, Waseem Ashraf, can be reached at (571) 270-3948. Another resource available to applicants is the Patent Application Information Retrieval (PAIR) system. Information regarding the status of an application can be obtained from the PAIR system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pairdirect.uspto.gov. Should you have questions on access to the Private PAIR system, please feel free to contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Applicants are invited to contact the Office to schedule either an in-person or a telephonic interview to discuss and resolve the issues set forth in this Office Action. Although an interview is not required, the Office believes that an interview can be of use to resolve any issues related to a patent application in an efficient and prompt manner. /AZAM A ANSARI/ Primary Examiner, Art Unit 3621 February 20, 2026

Prosecution Timeline

Dec 18, 2024
Application Filed
Aug 07, 2025
Non-Final Rejection — §101, §103, §DP
Feb 11, 2026
Response Filed
Feb 20, 2026
Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591892
SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR EARLY DETECTION OF A MERCHANT DATA BREACH THROUGH MACHINE-LEARNING ANALYSIS
2y 5m to grant Granted Mar 31, 2026
Patent 12499471
AUTOMATICALLY GENERATING A RETAILER-SPECIFIC BRAND PAGE BASED ON A MACHINE LEARNING PREDICTION OF ITEM AVAILABILITY
2y 5m to grant Granted Dec 16, 2025
Patent 12469042
SYSTEM FOR GENERATING A NON-FUNGIBLE TOKEN INCLUDING MUTABLE AND IMMUTABLE ATTRIBUTES AND RELATED METHODS
2y 5m to grant Granted Nov 11, 2025
Patent 12423918
AUGMENTED REALITY IN-APPLICATION ADVERTISEMENTS
2y 5m to grant Granted Sep 23, 2025
Patent 12417468
USER ENGAGEMENT MODELING FOR ENGAGEMENT OPTIMIZATION
2y 5m to grant Granted Sep 16, 2025
Based on the 5 most recent grants by this examiner.


Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
98%
With Interview (+49.7%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 338 resolved cases by this examiner. Grant probability derived from career allow rate.
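The headline figures above are mutually consistent if the interview lift is read as additive percentage points on top of the career allow rate; that additivity is an assumption about the tool's methodology, not something the page states. A quick check:

```python
# Figures taken from this page; the additive-lift model is an assumption.
granted, resolved = 162, 338              # examiner's career record
career_allow_rate = granted / resolved    # ~0.479, displayed as 48%
interview_lift = 0.497                    # reported +49.7-point lift
with_interview = career_allow_rate + interview_lift  # ~0.976, displayed as 98%
print(round(career_allow_rate * 100), round(with_interview * 100))  # prints: 48 98
```

So 162/338 rounds to the displayed 48% grant probability, and adding the 49.7-point lift rounds to the displayed 98% "With Interview" figure.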
