Prosecution Insights
Last updated: April 19, 2026
Application No. 18/226,259

SYSTEMS AND METHODS OF GENERATING CONSCIOUSNESS AFFECTS USING ONE OR MORE NON-BIOLOGICAL INPUTS

Final Rejection §103
Filed: Jul 26, 2023
Examiner: PAN, YONGJIA
Art Unit: 2118
Tech Center: 2100 — Computer Architecture & Software
Assignee: Twiin Inc.
OA Round: 4 (Final)
Grant Probability: 64% (Moderate)
Projected OA Rounds: 5-6
Time to Grant: 3y 7m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 367 granted / 571 resolved; +9.3% vs TC avg)
Interview Lift: strong, +32.0% (grant rate among resolved cases with vs. without interview)
Typical Timeline: 3y 7m avg prosecution; 28 currently pending
Career History: 599 total applications across all art units
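As a sanity check, the headline examiner statistic above can be reproduced from the raw counts. The sketch below derives only the career allow rate (the with/without-interview case counts behind the +32% lift are not broken out here); the function and variable names are illustrative, not from any real analytics API:

```python
# Derive the career allow rate from the raw counts shown above.
# (Illustrative sketch; names are not from any analytics product's API.)

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate(367, 571)
print(f"Career allow rate: {rate:.1f}%")  # ~64.3%, shown rounded to 64% above
```

The "+9.3% vs TC avg" figure would then be this rate minus the Tech Center average computed the same way over the TC's resolved cases.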

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 60.4% (+20.4% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)

Tech Center average values are estimates. Based on career data from 571 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to amendments filed on November 14, 2025. Claims 51, 63, 64, and 65 have been amended. Claims 51-65 are pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 51, 53, 60, and 63 are rejected under 35 U.S.C. 103 as being unpatentable over Guzak et al. (US Publication 20110040155A1) in view of Gansca et al. (US Publication 20140215351A1), Johnson et al. (US Publication 20150150023A1), Albouyeh et al. (US Publication 20170344225A1), and Singh (US Publication 20170339162A1).

Regarding claim 51, Guzak teaches a method of generating a visual consciousness affect representation, said method comprising: receiving, from memory of a client device and/or a server, one or more shares originating from one or more users … a client device application presented on one or more client devices, each of said shares contains one or more submissions (a method and computer program product for incorporating human emotions in a computing environment.
In this aspect, sensory inputs of a user can be received by a computing device … the input 122 can include voice input, which is processed in accordance with a set of one or more voice recognition and/or voice analysis programs)([0005], [0042]; shared input (e.g. voice submission) is received from client devices); receiving, from said memory of said client device and/or said server, a non-biological input not originating from one or more of said users and said non-biological input originating from a device or a module (input manually entered by a user via a peripheral (keyboard, mouse, microphone, etc.), and other environmental inputs (e.g., video, audio, etc.) gathered by capture devices)([0010]); extracting, from each submission and said non-biological input, one or more categories of each of one or more consciousness input types to generate a list identifying one or more extracted categories (aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions (e.g., happy, excited, calm, etc.) or negative emotions (e.g., sad, bored, frantic, etc.))([0022]; a list of extracted categories is generated from inputs); calculating, using a client module on said client device and/or a server module and using one or more of said shares and said non-biological input … a dominant category of one or more of said shares and said non-biological input … (Results from each sensory channel can be aggregated to determine a current emotional state of a user … The sensory aggregator 132 can use the input 124 to generate standardized emotion data 126, which an emotion data consumer 134 utilizes to produce emotion adjusted output 128 presentable upon output device 135 … positive emotions (e.g., happy, excited, calm, etc.) or negative emotions (e.g., sad, bored, frantic, etc.)
… standardized emotion datum 126 result from combining positive and negative scores … assessing whether the resultant score exceeds a previously established certainty threshold … The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include an input category … and the like)([0010], [0021], [0022], and [0047]; a dominant category is calculated by combining scores of extracted categories); determining, using said client module on said client device and/or said server module on said server and based on one or more of said shares and said non-biological input, an intensity of said dominant category of one or more of said shares and said non-biological input (The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include input category … a value, strength … and the like)([0047]); storing, in memory of said server and/or said client device, said dominant category of one or more of said shares and said non-biological input and said intensity of said dominant category (For each user, historical data 124, 126 can be maintained)([0058]); conveying, using said client module and/or said server module, said dominant category of one or more of said shares … said dominant category from said client device and/or said server … (The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include an input category, name/identifier … and the like)([0047]; attributes including a category and reference value (i.e.
name/identifier) are conveyed from client devices); and visually presenting, on said display interface of said plurality of client devices, one or more of said shares and said visual consciousness affect representation corresponding to one or more of said shares … wherein said consciousness affect representation is based on said dominant category of one or more of said shares posted on … said client device application and said non-biological input, wherein said visual consciousness affect is chosen from a group comprising color, weather pattern, image, and animation … (The emotion dimension values from each of the sensory channels can be aggregated to generate at least one emotion datum value, which is a standards-defined value for an emotional characteristic of the user ... The output handler 254 can alter application output based upon datum 126. For example, handler 254 can generate text, images, sounds, and the like that correspond to a given emotion datum 126)([0005] and [0061]).

Guzak differs from the claim in that Guzak fails to teach the share submissions are posted on a website, conveying a category of said shares and said intensity to said website and/or said client device application presented on a plurality of said client devices, and visually presenting a visual consciousness affect representation corresponding to said shares, wherein said visual consciousness affect representation appears adjacent to said shares and is based on said category of said shares posted on said website and/or said client device application. However, share submissions originating from users and posted on a website or application (i.e.
social web page or networking application), conveying a category of said shares and said intensity to said website and/or said client device application presented on a plurality of said client devices, and visually presenting a visual consciousness affect representation corresponding to said shares, wherein said visual consciousness affect representation appears adjacent to said shares and is based on said category of said shares posted on said website and/or said client device application is taught by Gansca (Referring now to FIG. 2(b), a user may scroll down in the list of topics until a user reaches a topic of interest 224, for example, “Working on xmas.” ... In FIG. 2(c), a sentiment thermometer 272 is provided to a user. The meaning (e.g., sentiment associated) with each color in the sentiment thermometer 272 is identical to the meaning (e.g., sentiment associated) with each color in the graphical object 122 ... The user may scroll upwards (FIG. 2(d)) or downwards (FIG. 2(e)) from the view shown in FIG. 2(c) to choose a desired color to be associated with the selected topic 224 ... A different sentiment is associated with each color to facilitate the expression of varying degrees of positive and negative (or neutral) sentiment ... As shown in FIG. 2(e), a user may select, for example, the color dark blue (237) ... the user is taken to a screen shot 239 shown in FIG. 2(f), where the user may leave an optional narrative, for example, regarding the selected topic 224, in field 242 ... Clicking on the entry 236 takes a user to a screen where the entry 236 is displayed, along with any comments 252 and/or likes 256 the entry 236, as shown in FIG. 2(i) ...
In some embodiments, the users may express their sentiment via any platform, including but not limited to, social networking applications, web page, smartphone application, text message, and any other platform where a user has the ability to make a color selection)([0078], [0081], [0084], [0090]; Figures 2a-2i – an exemplary embodiment of a user sharing submissions and intensity to a social network to be conveyed to other users is shown; the affect representation (i.e. sentiment associated with dark blue) is presented adjacent to the “So not fun!” share).

The examiner notes Guzak and Gansca teach a method for generating output based on a user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak to include the sharing, conveying, and presenting of Gansca such that the method shares submissions to a social network to visually convey a user’s consciousness affect. One would be motivated to make such a combination to provide the advantage of allowing a user to uniquely express their opinion in a social network.

The combination of Guzak-Gansca fails to teach the dominant category corresponds to a category having the highest contribution value. However, calculating a dominant category which corresponds to a category having the highest contribution value is taught by Johnson (Referring now to FIG. 9, a flowchart of a process 900 for identifying an emotion, a cognitive state, a sentiment, or other attribute associated with a document ... e.g., a webpage ... If the number of positive words or phrases exceeds the number of negative words or phrases ... setting the primary document emotion variable to the emotion associated with the highest positive frame count (e.g., Crave>Happiness>Gratitude) … If the number of positive words or phrases is less than the number of negative words or phrases ...
process 900 may include setting the primary document emotion variable to the emotion associated with the highest negative frame count)([0178], [0184], and [0185]; a dominant (i.e., primary) category is based on which category’s count (e.g., Crave, Happiness, etc.) is the highest).

The examiner notes Guzak, Gansca, and Johnson teach a method for generating output based on a user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak-Gansca to include the calculating of Johnson such that the method calculates a dominant category by determining which category has the highest contribution value. One would be motivated to make such a combination to provide the advantage of facilitating analysis of big data.

The combination of Guzak-Gansca-Johnson fails to teach said visual consciousness affect representation is a predetermined size depending upon said calculated value obtained from intensity of said category. However, a visual consciousness affect representation being a predetermined size based upon a calculated value obtained from intensity of a category is taught by Albouyeh (the visual characteristics of the visual indicators may be altered to convey the sentiment ... Visual characteristics of the visual indicators may include icon size, icon shape, icon color, icon labels, icon patterns, icon borders, and so forth)([0037]; sentiment is a calculated value of intensity of a category).

The examiner notes Guzak, Gansca, Johnson, and Albouyeh teach a method for generating output based on a user’s consciousness affect.
As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak-Gansca-Johnson to include the predetermined-size representation of Albouyeh such that the method varies a size of a visual consciousness affect based on calculated intensity of a category. One would be motivated to make such a combination to provide the advantage of providing additional graphical representations to convey consciousness.

The combination of Guzak-Gansca-Johnson-Albouyeh fails to teach calculating an age for shares and non-biological input, wherein said age is a difference in time between origin and reception of the shares and the non-biological input. However, calculating an age for shares and non-biological input, wherein said age is a difference in time between origin and reception of the shares and the non-biological input, is taught by Singh (A user can control his location privacy information by delaying sharing any information via social media … Examples of present social media updates include: photographs, videos, text status updates, and the social media updates … timeline module 102 receives a social media update to be uploaded ... The timeline module 102 responds with a suggested timeline to the user interface that shows a distribution over time … the user is presented with a user interface and a suggested timeline based at least in part on object metadata associated with the social media update ... metadata can include information such as location metadata)([0005], [0024], [0027], and [0062]; an age (i.e. delay) is calculated for shares and non-biological input (e.g., location metadata); the delay is the time difference between capturing a social media update and sharing the update).
The examiner notes Guzak, Gansca, Johnson, Albouyeh, and Singh teach a method for generating output based on a user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak-Gansca-Johnson-Albouyeh to include the calculating of Singh such that the method calculates an age for shares and non-biological input, wherein said age is a difference in time between origin and reception of the shares and the non-biological input. One would be motivated to make such a combination to provide the advantage of protecting a user’s privacy.

Regarding claim 53, Guzak-Gansca-Johnson-Albouyeh-Singh teach the method of generating a visual consciousness affect representation of claim 51, wherein said consciousness input includes at least one input chosen from a group comprising emotional state input, reasoned input, location information input, physical awareness input and spiritual insight input, and said non-biological input is at least one input chosen from a group comprising emotional state input, reasoned input, location information input, physical awareness input and synchronicity input (Guzak - aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions ... happy ... negative emotions ... sad … the environmental input 185 can include images/video of a face of a user 105, which is processed to discern a facial expression of the user 105. Other body language interpretation analytics can also be performed to determine sensory input (i.e., body language can be analyzed to determine if user 105 is nervous, calm, indecisive, etc.))([0022] and [0037]).
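The dominant-category selection the examiner maps to Johnson above (pick the category whose count, or frame count, is highest) can be sketched in a few lines. This is a minimal illustration, not code from any cited reference; the function and variable names are assumptions:

```python
from collections import Counter

def dominant_category(extracted_categories):
    """Return the category with the highest contribution (count),
    mirroring the 'highest frame count' selection described above.
    (Illustrative name; not from Johnson or any cited publication.)"""
    counts = Counter(extracted_categories)
    # most_common(1) yields the (category, count) pair with the top count;
    # ties resolve in first-seen order in Python 3.7+.
    return counts.most_common(1)[0][0]

print(dominant_category(["happy", "sad", "happy", "calm"]))  # happy
```

A weighted variant would sum per-category contribution values instead of raw counts before taking the maximum, which matches the claim's "highest contribution value" phrasing.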
Regarding claim 60, Guzak-Gansca-Johnson-Albouyeh-Singh teach the method of generating a visual consciousness affect representation of claim 51, wherein a first intensity information accompanies each of one or more of said submissions, said non-biological input contains a second intensity information (Guzak - In channel processor 130 can include ... a data store 220, and/or other such components. The data store 220 can include device specific data 222, user specific data 224, configurable channel specific rules 226, and mappings to a specific standard 228 to which the generated processed input 124 conforms)([0048]; each input is associated with corresponding intensity (i.e., sense of force) information such as rules, mappings, etc.) and said extracting comprises: identifying, in each of said submissions and said non-biological input, information relating to one or more consciousness input types (Guzak - The input/output 122-128 can include numerous attributes defining a data instance)([0047]); and extracting, from said information relating to one or more of said consciousness input types, information relating to one or more said categories of each of said consciousness input types (“categories”) to generate said list identifying one or more extracted categories from each of said submissions and said non-biological input, and wherein each of said extracted categories is assigned a predetermined value that is at least in part based on said first intensity information (Guzak - The dimensional emotion evaluation component 216 converts a score/value computed by the processing component 212 into a standardized value/form. Component 216 can use to-standard mapping data 228)([0054]; corresponding information such as mappings information is used to assign a value (i.e., positive or negative score) to extracted categories).
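The Guzak-style aggregation cited throughout these mappings (combining positive and negative channel scores into one standardized value, then checking it against a previously established certainty threshold) can be illustrated with a minimal sketch. The function names and the threshold value are assumptions for illustration, not details from the cited publication:

```python
def standardized_emotion_datum(positive_scores, negative_scores, threshold=0.5):
    """Combine per-channel positive and negative scores into a single value
    and report whether its magnitude clears a certainty threshold.
    (Illustrative sketch; names and threshold are not from Guzak.)"""
    combined = sum(positive_scores) - sum(negative_scores)
    is_certain = abs(combined) >= threshold
    return combined, is_certain

# Two positive channels (e.g., voice tone, typing pattern) vs. one negative.
datum, certain = standardized_emotion_datum([0.6, 0.3], [0.2])
print(datum, certain)
```

In this sketch a positive result reads as a net-positive emotional state; only results that clear the threshold would be consumed downstream (e.g., to adjust output), which parallels the "certainty threshold" step quoted from paragraph [0021].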
Regarding system claim 63, the claim generally corresponds to method claim 51 and recites similar features in system form; therefore, the claim is rejected under a similar rationale.

Claim 52 is rejected under 35 U.S.C. 103 as being unpatentable over Guzak, Gansca, Johnson, Albouyeh, Singh, and further in view of Sadanandan et al. (US Publication 20130282808A1).

Regarding claim 52, Guzak-Gansca-Johnson-Albouyeh-Singh teach the method as applied above. Guzak-Gansca-Johnson-Albouyeh-Singh differs from the claim in that it fails to teach visually presenting an illustration or photo of an object associated with said identified category. However, visually presenting an illustration or photo of an object associated with an identified category is taught by Sadanandan (The script 116 uses the keywords identified from the analysis of the textual content ... to identify the corresponding mood indicators ... The script 116 then identifies the current mood or state of mind of the user using the mood and context indicators ... The updated user-profile image is packaged with the webpage and transmitted to the client-device for rendering)([0031], [0032], and [0033]).

The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, and Sadanandan teach a method for generating output based on a user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the presenting of Guzak-Gansca-Johnson-Albouyeh-Singh to include the presenting of Sadanandan such that the method presents an object indicative of an identified mood category. One would be motivated to make such a combination to provide the advantage of improving the conveyance of a visual consciousness by automatically updating a realistic appearance of a user based on the user's current state of mind or the user's contextual interest.

Claim 54 is rejected under 35 U.S.C.
103 as being unpatentable over Guzak, Gansca, Johnson, Albouyeh, Singh, and further in view of Kaleal (US Publication 20160086500A1).

Regarding claim 54, Guzak-Gansca-Johnson-Albouyeh-Singh teach the method as applied above, wherein said emotional state input represents an emotional state of said user (Guzak - metadata of the user input 181 can be evaluated to determine emotions of user 105 (e.g., typing pattern analysis, hand steadiness when manipulating a joystick/pointer, etc.))([0038]), and said reasoned input represents an expression of said user (Guzak - the processed input 122 can include ... user's facial expressions)([0051]). Guzak-Gansca-Johnson-Albouyeh-Singh differs from the claim in that it fails to teach the input includes location information input representing location of said client devices and physical awareness input including one information, associated with said user and chosen from a group comprising general health information, body type, and biology awareness. However, Kaleal discloses location information input representing location of said client devices (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]) and physical awareness input including one information associated with a user and chosen from a group comprising general health information, body type, and biology awareness (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]).

The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, and Kaleal teach a method for generating output based on a user’s consciousness affect.
As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh to include location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of providing additional tangible types of inputs in determining consciousness affect.

Claims 55-59 are rejected under 35 U.S.C. 103 as being unpatentable over Guzak, Gansca, Johnson, Albouyeh, Singh, and further in view of Kaleal and Gilley et al. (US Publication 20150004578A1).

Regarding claim 55, Guzak-Gansca-Johnson-Albouyeh-Singh teach the method as applied above, wherein said emotional state input includes one category chosen from a group comprising love, no love, joy, sad, concerned, annoyed, trust, defiant, peaceful, aggressive, accept, reject, interested, distracted, optimistic and doubtful, and said emotional state input is not the same as reasoned input (Guzak - aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions ... happy or ... negative emotions ... sad)([0022]; happy and sad are not the same as reasoned input (e.g. visual facial input)). Guzak-Gansca-Johnson-Albouyeh-Singh differs from the claim in that it fails to teach the input is not the same as physical awareness input and location information input.
However, Kaleal discloses a different physical awareness input (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]) and location information input (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]).

The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, and Kaleal teach a method for generating output based on a user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh to include location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of providing additional tangible types of inputs in determining consciousness affect.

The combination of Guzak-Gansca-Johnson-Albouyeh-Singh-Kaleal fails to teach the input is not the same as spiritual insight input. However, a different spiritual insight input is taught by Gilley (The lifestyle companion system also can interview the user about non-health related topics, e.g., spirituality/religion, identity (e.g., sense of belonging) ... career ... goals)([0044]).

The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, Kaleal, and Gilley teach a method for generating output based on a user’s consciousness affect.
As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh-Kaleal to include the spiritual insight input of Gilley such that the method includes spiritual insight input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of factoring in intangible input in determining consciousness affect.

Regarding claim 56, Guzak-Gansca-Johnson-Albouyeh-Singh teach the method as applied above, wherein said reasoned input includes one category chosen from a group comprising understood, solve, recognize, sight, hear, smell, touch, and taste, and said reasoned input is not the same as emotional state input (Guzak - environmental input 185 can include images, video, and audio captured by an audio/video capture device 184 ... the environmental input 185 can include images/video of a face of a user 105, which is processed to discern a facial expression of the user 105 ... Environmental input 185 can include speech analyzed)([0037]; sight (e.g. video) input is not the same as emotional input (e.g. happy)). Guzak-Gansca-Johnson-Albouyeh-Singh differs from the claim in that it fails to teach the input is not the same as physical awareness input and location information input. However, Kaleal discloses a different physical awareness input (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]) and location information input (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]).
The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, and Kaleal teach a method for generating output based on a user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh to include location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of providing additional tangible types of inputs in determining consciousness affect.

The combination of Guzak-Gansca-Johnson-Albouyeh-Singh-Kaleal fails to teach the input is not the same as spiritual insight input. However, a different spiritual insight input is taught by Gilley (The lifestyle companion system also can interview the user about non-health related topics, e.g., spirituality/religion, identity (e.g., sense of belonging) ... career ... goals)([0044]).

The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, Kaleal, and Gilley teach a method for generating output based on a user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh-Kaleal to include the spiritual insight input of Gilley such that the method includes spiritual insight input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of factoring in intangible input in determining consciousness affect.
Regarding claim 57, Guzak-Gansca-Johnson-Albouyeh-Singh teach the method as applied above, wherein generating a visual consciousness affect representation includes different emotional state input (Guzak - aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions ... happy or ... negative emotions ... sad)([0022]; happy and sad are different input) and reasoned input (Guzak - environmental input 185 can include images, video, and audio captured by an audio/video capture device 184 ... the environmental input 185 can include images/video of a face of a user 105, which is processed to discern a facial expression of the user 105 ... Environmental input 185 can include speech analyzed)([0037]; sight (e.g. video) is different input). Guzak-Gansca-Johnson-Albouyeh-Singh differs from the claim in that it fails to teach the input is not the same as physical awareness input including one category chosen from a group comprising fit, not fit, energetic, tired, healthy, sick, hungry and full, and location information input. However, Kaleal discloses a different physical awareness input (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]) including one category chosen from a group comprising fit, not fit, energetic, tired, healthy, sick, hungry and full (how the user is feeling (e.g., sore, sick, energized, sad, tired, etc.))([0245]) and location information input (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]).

The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, and Kaleal teach a method for generating output based on a user’s consciousness affect.
As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh to include location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of providing additional tangible types of inputs in determining consciousness affect.

The combination of Guzak-Gansca-Johnson-Albouyeh-Singh-Kaleal fails to teach the input is not the same as spiritual insight input. However, a different spiritual insight input is taught by Gilley (The lifestyle companion system also can interview the user about non-health related topics, e.g., spirituality/religion, identity (e.g., sense of belonging) ... career ... goals)([0044]).

The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, Kaleal, and Gilley teach a method for generating output based on a user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh-Kaleal to include the spiritual insight input of Gilley such that the method includes spiritual insight input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of factoring in intangible input in determining consciousness affect.

Regarding claim 58, Guzak-Gansca-Johnson-Albouyeh-Singh teach the method as applied above, wherein generating a visual consciousness affect representation includes different emotional state input (Guzak - aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions ... happy or ... negative emotions ...
sad)([0022]; happy and sad are different inputs), reasoned input (Guzak - environmental input 185 can include images, video, and audio captured by an audio/video capture device 184 ... the environmental input 185 can include images/video of a face of a user 105, which is processed to discern a facial expression of the user 105 ... Environmental input 185 can include speech analyzed)([0037]; sight (e.g., video) is a different input), and input that includes one category chosen from a group comprising attraction, repulsion, calm, unrest, anticipate, remember, solitude, and congestion (Guzak - aggregator 132 can initially classify inputs 124 as being ... calm)([0022]). Guzak-Gansca-Johnson-Albouyeh-Singh differs from the claim in that the combination fails to teach the input is not the same as physical awareness input and location information input. However, Kaleal discloses different physical awareness input (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]) and location information input (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, and Kaleal teach a method for generating output based on a user's consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh to include the location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation.
One would be motivated to make such a combination for the advantage of incorporating additional tangible types of inputs in determining consciousness affect. The combination of Guzak-Gansca-Johnson-Albouyeh-Singh-Kaleal fails to teach the input is not the same as spiritual insight input. However, different spiritual insight input is taught by Gilley (The lifestyle companion system also can interview the user about non-health related topics, e.g., spirituality/religion, identity (e.g., sense of belonging) ... career ... goals)([0044]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, Kaleal, and Gilley teach a method for generating output based on a user's consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh-Kaleal to include the spiritual insight input of Gilley such that the method includes spiritual insight input in generating a visual consciousness affect representation. One would be motivated to make such a combination for the advantage of factoring in intangible input in determining consciousness affect. Regarding claim 59, Guzak-Gansca-Johnson-Albouyeh-Singh teach the method as applied above, wherein generating a visual consciousness affect representation includes different emotional state input (Guzak - aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions ... happy or ... negative emotions ... sad)([0022]; happy and sad are different inputs) and reasoned input (Guzak - environmental input 185 can include images, video, and audio captured by an audio/video capture device 184 ... the environmental input 185 can include images/video of a face of a user 105, which is processed to discern a facial expression of the user 105 ... Environmental input 185 can include speech analyzed)([0037]; sight (e.g., video) is a different input).
Guzak-Gansca-Johnson-Albouyeh-Singh differs from the claim in that the combination fails to teach the input is not the same as physical awareness input and location information input. However, Kaleal discloses different physical awareness input (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]) and location information input (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, and Kaleal teach a method for generating output based on a user's consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh to include the location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination for the advantage of incorporating additional tangible types of inputs in determining consciousness affect. The combination of Guzak-Gansca-Johnson-Albouyeh-Singh-Kaleal fails to teach the input is not the same as spiritual insight input including one category chosen from a group comprising hug, missing, energy, shield, flash, deja vu, presence, and universe.
However, different spiritual insight input including one category chosen from a group comprising hug, missing, energy, shield, flash, deja vu, presence, and universe is taught by Gilley (The lifestyle companion system also can interview the user about non-health related topics, e.g., spirituality/religion, identity (e.g., sense of belonging) ... career ... goals)([0044]; spiritual input (e.g., energy, universe, and presence (i.e., a sense of belonging)) is a different input). The examiner notes Guzak, Gansca, Johnson, Albouyeh, Singh, Kaleal, and Gilley teach a method for generating output based on a user's consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Singh-Kaleal to include the spiritual insight input of Gilley such that the method includes spiritual insight input in generating a visual consciousness affect representation. One would be motivated to make such a combination for the advantage of factoring in intangible input in determining consciousness affect. Allowable Subject Matter Claims 61, 62, and 64 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim 65 is allowed. Response to Arguments Applicant's arguments with respect to claims 51-64 have been considered but are moot in view of the new ground(s) of rejection. Regarding claims 51 and 63, the applicant argues Guzak teaches away from the textual calculating taught by Johnson; the examiner respectfully disagrees. The examiner notes that a prior art's mere disclosure of more than one alternative does not constitute a teaching away from any of these alternatives because such disclosure does not criticize, discredit, or otherwise discourage the solution claimed.
In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004). In other words, while Guzak does characterize typing a description of emotions within plain text as a weakness ("One current weakness with interactions between humans and machines relates to emotional expressiveness. A common way for communicating emotions within electronic correspondence is to type a description of emotions within plain text, often preceded with an "emote" tag")([0003]), Guzak does not discourage using text to determine emotional state. Guzak actually encourages using text by translating voice to text for processing emotional state ("when the processed input 122 is voice input, user specific language, dialect, grammar patterns, and speech characteristics can be included in data 224 and used by a speech to text converter and/or a speech pattern analyzer (both being instances of processing component 212)")([0051]). As both Guzak and Johnson use text to determine emotional state, Guzak does not teach away from the textual calculating taught by Johnson. Regarding claims 51 and 63, the applicant argues the combination of Guzak and Johnson renders Guzak unsatisfactory for its intended purpose; the examiner respectfully disagrees. The examiner recognizes that where a proposed modification would render the prior art invention being modified unsatisfactory for its intended purpose, there is no suggestion or motivation to make the proposed modification. In re Gordon, 733 F.2d 900, 221 USPQ 1125 (Fed. Cir. 1984) ("The question is not whether a patentable distinction is created by viewing a prior art apparatus from one direction and a claimed apparatus from another, but, rather, whether it would have been obvious from a fair reading of the prior art reference as a whole to turn the prior art apparatus upside down"). However, unlike In re Gordon, where the combination would be inoperable for its intended purpose (i.e., the gasoline to be filtered being trapped at the top, water and heavier oils flowing out of the outlet instead of the purified gasoline, and clogging of the screen), the combination of Guzak and Johnson would not be inoperable. No feature is destroyed in the combination of Guzak and Johnson; instead, features of Guzak would be improved utilizing the features of Johnson. That is, Guzak discloses determining a dominant category of input by combining positive and negative emotion scores and comparing the resultant score to a threshold (i.e., determining a dominant category based on an overall count of categories): "Results from each sensory channel can be aggregated to determine a current emotional state of a user … The sensory aggregator 132 can use the input 124 to generate standardized emotion data 126, which an emotion data consumer 134 utilizes to produce emotion adjusted output 128 presentable upon output device 135 … positive emotions (e.g., happy, excited, calm, etc.) or negative emotions (e.g., sad, bored, frantic, etc.) … standardized emotion datum 126 result from combining positive and negative scores … assessing whether the resultant score exceeds a previously established certainty threshold … The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include an input category … and the like" ([0010], [0021], [0022], and [0047]). Johnson similarly discloses determining a dominant category of input; in particular, Johnson discloses calculating a dominant category corresponding to the category having the highest contribution value: "Referring now to FIG. 9, a flowchart of a process 900 for identifying an emotion, a cognitive state, a sentiment, or other attribute associated with a document ... e.g., a webpage ... If the number of positive words or phrases exceeds the number of negative words or phrases ...
setting the primary document emotion variable to the emotion associated with the highest positive frame count (e.g., Crave>Happiness>Gratitude) … If the number of positive words or phrases is less than the number of negative words or phrases ... process 900 may include setting the primary document emotion variable to the emotion associated with the highest negative frame count" ([0178], [0184], and [0185]). That is, Johnson improves on determining a dominant category by determining which specific category count (e.g., Crave, Happiness, etc.) is the highest. The examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, the examiner notes Guzak and Johnson teach generating output based on a user's consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak to include the calculating of Johnson such that a dominant category is calculated by determining which category has the highest contribution value. One would be motivated to make such a combination for the advantage of facilitating analysis of big data. Conclusion The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action.
The documents cited therein and enumerated below teach a method and apparatus for calculating an age (e.g., a delay) between the origination and receipt of shares and input: US20130340089A1, US20150237464A1, US20150281159A1, US20160156583A1, US20160212085A1. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yongjia Pan, whose telephone number is (571) 270-1177. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Scott Baderman, can be reached at 571-272-3644. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /YONGJIA PAN/Primary Examiner, Art Unit 2118
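For readers weighing the examiner's Response to Arguments, the two dominant-category approaches contrasted there (Guzak's certainty threshold over a combined positive/negative score, versus Johnson's selection of the specific category with the highest frame count) can be sketched roughly as follows. This is an illustrative paraphrase only: the function names, category sets, and threshold value are hypothetical and are not taken from either reference.

```python
def guzak_style(scores, threshold=0.5):
    """Guzak-style aggregation per the examiner's reading of [0022]/[0047]:
    combine signed emotion scores (positive > 0, negative < 0) and report a
    dominant polarity only when the resultant score clears a certainty
    threshold. The 0.5 threshold is an arbitrary placeholder."""
    combined = sum(scores)
    if abs(combined) < threshold:
        return "uncertain"
    return "positive" if combined > 0 else "negative"


def johnson_style(frame_counts):
    """Johnson-style selection per the examiner's reading of [0178]-[0185]:
    compare total positive vs. negative word/phrase counts, then set the
    primary emotion to the single category with the highest frame count
    on the winning side. Category names here are illustrative."""
    positive = {"crave", "happiness", "gratitude"}  # hypothetical taxonomy
    pos = sum(c for e, c in frame_counts.items() if e in positive)
    neg = sum(c for e, c in frame_counts.items() if e not in positive)
    pool = positive if pos >= neg else set(frame_counts) - positive
    candidates = {e: c for e, c in frame_counts.items() if e in pool}
    return max(candidates, key=candidates.get)
```

The sketch makes the examiner's distinction concrete: Guzak's logic resolves only a polarity (or abstains), while Johnson's resolves a specific named category, which is the refinement the rejection relies on.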

Prosecution Timeline

Jul 26, 2023
Application Filed
Mar 07, 2024
Non-Final Rejection — §103
Jun 12, 2024
Response Filed
Jul 13, 2024
Final Rejection — §103
Dec 18, 2024
Request for Continued Examination
Dec 31, 2024
Response after Non-Final Action
May 08, 2025
Non-Final Rejection — §103
Nov 14, 2025
Response Filed
Feb 21, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589422
MEANDERING AMOUNT DETECTION METHOD AND MEANDERING CONTROL METHOD FOR METAL STRIP
2y 5m to grant Granted Mar 31, 2026
Patent 12588654
METHOD AND APPARATUS FOR OPERATING A ROTARY MILKING PLATFORM TO MAXIMISE THE NUMBER OF ANIMALS MILKED PER UNIT TIME AND A ROTARY MILKING PLATFORM
2y 5m to grant Granted Mar 31, 2026
Patent 12566428
INDUSTRIAL DATA SOURCE WRITEBACK
2y 5m to grant Granted Mar 03, 2026
Patent 12543024
WORKSITE CONNECTIVITY SYSTEM
2y 5m to grant Granted Feb 03, 2026
Patent 12535796
EMBEDDED SENSOR CHIPS IN 3D AND 4D PRINTED STRUCTURES THROUGH SELECTIVE FILAMENT INFUSION
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
64%
Grant Probability
96%
With Interview (+32.0%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 571 resolved cases by this examiner. Grant probability derived from career allow rate.
