DETAILED ACTION
1. This Office Action is in response to the Amendment filed on 01/27/2026.
3. The information disclosure statement (IDS) filed on 02/03/2026 has been considered and entered.
3. Claims 1-20 are pending. All the pending claims are examined herein.
Response to Arguments
4. Applicant’s arguments, see the Arguments section, filed 01/27/2026, with respect to the rejection(s) of the pending claims have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of YOSHIKAWA et al. (US 2020/0184843 A1).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1, 2, and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Kratz et al. (US 2024/0027394 A1) in view of YOSHIKAWA et al. (US 2020/0184843 A1).
Kratz et al. (“Kratz”) is directed to a smart device including olfactory sensing.
YOSHIKAWA et al. (“YOSHIKAWA”) provides an information processing apparatus capable of presenting information in a presentation manner that allows each user to easily accept the information.
As per claim 1, Kratz discloses a method (flowchart of Fig. 4) in an electronic device (smartphone 100), the method comprising:
determining, with a communication device, that the electronic device is in communication, across a network, with a remote electronic device implementing a sensory element user interface presentation mode of operation (As illustrated in Fig. 3, [0032] The sender smartphone 100A sends the cloned image 312 and the olfactory sticker 204 to the receiver smartphone 100B. The receiver smartphone 100B displays the cloned image 312 with the olfactory sticker 204 on the user interface 202B of the receiver smartphone 100B.);
constructing, with one or more processors, a user interface sensory element bundle ([0034] the receiver smartphone 100B has access to an identical, or a different, set of predefined olfactory stickers 204 to easily preview what particular scent has been received without activating the olfactory sticker 204. Examiner’s note: the set of predefined olfactory stickers 204 is considered by the examiner to be the claimed user interface sensory element bundle); and
pushing, with the communication device, transmission of the user interface sensory element bundle to the remote electronic device ([0030] the user interface 202 is configured to send and receive the olfactory sticker 204 encoded with olfactory information corresponding to a particular scent. [0034] In one example, the user of the sender smartphone 100A may select which olfactory sticker 204 to send from a displayed set of predefined olfactory stickers 204 that correlate to predefined scents that are stored in the memory 110 of the sender smartphone 100A).
Kratz, however, falls short of describing what is in the set of predefined olfactory stickers 204 (i.e., the user interface sensory element bundle); that is, Kratz fails to disclose a user interface sensory element bundle comprising at least a first user sensory element configuration catering to a first human sense and a second user sensory element configuration catering to a second human sense, wherein the first human sense and the second human sense are different human senses.
YOSHIKAWA, on the other hand, discloses an information processing apparatus including a processing unit configured to present content to a user on a basis of sensor information acquired by a sensor configured to detect reaction of the user to a stimulus given to the user (Abstract). YOSHIKAWA further discloses, at [0034]: In one example, as illustrated in FIG. 2, it is assumed that there is an object 10 to be impressed by a viewer (hereinafter also referred to as “impression object”) among objects included in the video content. Only the addition of stimulus information is set in the video content in presenting an impression object 10. [0035] Here, it is said that the way of recognizing the information differs for each user depending on the user's dominant sense. In the present disclosure, differences in information recognition depending on the user's dominant sense are represented by classifying them into sense types. Here, three sense types are set, for example, visual sense type in which visual sense works dominantly, auditory sense type in which auditory sense works dominantly, and tactile sense type in which tactual sense works dominantly.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to associate with a first user one or more dominant sensory elements or sense types that differ from a second user's dominant sensory element. Presentation of each information item based on the user model that is set for each user thus makes it possible to appropriately stimulate the different dominant senses of each user, thereby allowing each user to easily accept information.
Therefore, it would have been obvious to combine YOSHIKAWA with Kratz to obtain the invention as specified in claim 1.
As per claim 2, Kratz in view of YOSHIKAWA further discloses the method of claim 1, wherein the user interface sensory element bundle comprises a plurality of user interface sensory element configurations (Kratz, [0031] The olfactory sticker 204 may also include text instructions 208 indicating how to engage and activate the olfactory sticker 204 to release the scent via the olfactory transducer 200. In the example shown in FIG. 2, the text instruction 208 is “Rub Me!” to indicate to the user that the olfactory sticker 204 displayed on user interface 202 must be rubbed to release the scent represented by the olfactory sticker 204. Other text, such as, “Tap Me!” may be used to indicate to the user that the olfactory sticker 204 must be tapped to release the scent represented by the olfactory sticker 204. The configuration of rubbing the olfactory sticker 204 simulates similar, physical scent-based stickers that are activated by rubbing or scratching to provide an intuitive user interface).
As per claim 3, Kratz in view of YOSHIKAWA discloses the method of claim 2, further comprising encoding a dominant sensory profile into metadata of each user interface sensory element configuration contained in the user interface sensory element bundle (YOSHIKAWA, [0007] provided an information processing method including: presenting content together with a stimulus; detecting, by a sensor, reaction of a user to the stimulus given to the user; and estimating a dominant sense of the user on a basis of sense information acquired by the sensor. Also see [0032], [0033], and [0035]).
6. Claims 4, 5, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kratz et al. in view of YOSHIKAWA et al. and Kugimiya et al. (US 7,858,036 B2).
Kugimiya et al. (“Kugimiya”) relates to a taste recognition apparatus and a taste recognition system.
As per claim 4, Kratz in view of YOSHIKAWA discloses that the plurality of user interface sensory element configurations comprises: an eye-minded user interface sensory element configuration (Kratz, see visual elements of Fig. 3; also see [0021]);
a smell-minded user interface sensory element configuration (Kratz, [0009] FIG. 4 is a flowchart of a method of sending and receiving an olfactory sticker; also see detected scents of Fig. 8);
an ear-minded user interface sensory element configuration (Kratz, [0021] For example, the transceivers 165, 170 provide two-way wireless communication of information including digitized audio signals); and
a motor-minded user interface sensory element configuration (Kratz, [0038] At block 406, the user of the receiver smartphone 100B activates the olfactory sticker 204 by interacting with the displayed olfactory sticker 204. For example, the user taps or rubs the displayed olfactory sticker 204. The user can determine the intensity of the released scent by controlling the intensity of the interaction with the displayed olfactory sticker 204).
But Kratz in view of YOSHIKAWA fails to disclose a taste-minded user interface sensory element configuration.
Kugimiya, on the other hand, discloses a taste recognizing device having a sensor body and a touch panel (see at least the Abstract).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the missing taste sensor into the device of Kratz in view of YOSHIKAWA so that, in addition to hearing, sight, smell, and touch, the user of Kratz would be able to employ a taste-based user interface in the system.
Therefore, it would have been obvious to combine Kugimiya with Kratz in view of YOSHIKAWA to obtain the invention as specified in claim 4.
As per claim 5, Kratz in view of YOSHIKAWA and Kugimiya discloses the method of claim 4, wherein each user interface sensory element configuration of the user interface sensory element bundle comprises one or more of: user input controls; navigational elements; and/or containers (Kratz, [0030] FIG. 2 illustrates a user interface 202 forming the user input layer 140 on the display 145 of the smartphone 100 and displaying an olfactory sticker 204. [0032] FIG. 3 illustrates a sender smartphone 100A having a display 145A including user interface 202A and a receiver smartphone 100B having a display 145B including user interface 202B).
As per claim 8, Kratz in view of YOSHIKAWA and Kugimiya discloses the method of claim 4, further comprising determining, by the one or more processors using the communication device, whether the remote electronic device allows remote selection of a user interface sensory element configuration from the user interface sensory element bundle as a function of a dominant sensory profile associated with the user of the remote electronic device (YOSHIKAWA, [0007] In addition, according to the present disclosure, there is provided an information processing method including: presenting content together with a stimulus; detecting, by a sensor, reaction of a user to the stimulus given to the user; and estimating a dominant sense of the user on a basis of sense information acquired by the sensor. [0038] Thus, in the present disclosure, the user's dominant sense is estimated and information is presented to the user in such a manner to stimulate the dominant sense. In one example, in FIG. 2, it is assumed that an elephant included in the video content is impressed to the user as an impression object 10. In this event, when the user's sense type is the visual sense type, the information processing apparatus can scale up or down, move, or blink the elephant that is the impression object 10 in such a manner to stimulate the visual sense that is the dominant sense).
7. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Chan et al. (US 2017/0131689 A1) in view of YOSHIKAWA et al. (US 2020/0184843 A1).
Chan et al. (“Chan”) is directed to communication of physical scents and scent representations.
As per claim 12, Chan discloses an electronic device (Fig. 3), comprising: a communication device; and one or more processors operable with the communication device (Figs. 3 and 6);
Chan further discloses wherein the one or more processors are configured to, in response to the communication device determining that a remote electronic device in communication with the communication device across a network is implementing a sensory element user interface presentation mode of operation, compile a plurality of user interface sensory element configurations contained in a user interface sensory element bundle from which a user interface sensory element configuration can be selected (Chan, [0027] FIG. 3 is a network diagram of an operational environment for an olfactory communication program 101, in accordance with at least one embodiment of the invention. In FIG. 3, the olfactory communication program 101 may be in communication with a user 300 via a user device 301. More specifically, the olfactory communication program 101 may receive an olfactory artefact 102 from the user device 301. The one or more olfactory artefacts 102 may be accessible by a user 300 via a number of environments, including a network 302, such as the Internet, locally stored data on a user device 301, such as a desktop computer or mobile device, from a database 201, or from a remote location, such as a server 303. Additionally, the olfactory communication program 101 may be in communication with a location-based scent indication 106 via a wireless network, RFID, or NFC, or a physical scent 104 via the physical environment 304).
But Chan does not mention that the selection is made as a function of a dominant sensory profile, as recited in the claim.
YOSHIKAWA, on the other hand, discloses a dominant sensory profile ([0032] The information processing apparatus according to the present embodiment presents information using a presentation way that allows each user to easily accept information. This technology estimates a sense that works dominantly among the user's senses (hereinafter also referred to as “dominant sense”) and presents information in such a manner to stimulate the dominant sense, so that the user accepts information without being subjected to load. It is possible to estimate the dominant sense of each user on the basis of the user's biometric information. Also see [0033] and [0035]).
Before the effective filing date of the claimed invention, since Chan and YOSHIKAWA are in the same field of endeavor, it would have been obvious to a person of ordinary skill in the art to incorporate YOSHIKAWA's teaching with Chan to provide a preferred interaction with an impression object using the user's primary or dominant sense. That is, doing so for each user makes it possible to appropriately stimulate the different dominant senses of each user, thereby allowing each user to easily accept information.
Therefore, it would have been obvious to combine YOSHIKAWA with Chan to obtain the invention as specified in claim 12.
8. Claims 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chan et al. (US 2017/0131689 A1) in view of YOSHIKAWA et al. (US 2020/0184843 A1) and KURISU et al. (US 2023/0004279 A1).
As per claim 13, Chan in view of YOSHIKAWA discloses the electronic device of claim 12, including each user interface sensory element configuration of the plurality of user interface sensory element configurations contained in the user interface sensory element bundle (YOSHIKAWA, Fig. 2 and [0032]-[0035]).
But the user interface sensory element bundle of Chan in view of YOSHIKAWA does not show that each configuration comprises text that is different from any other user interface sensory element configuration contained in the plurality of user interface sensory element configurations contained in the user interface sensory element bundle.
Kurisu, on the other hand, discloses text that is different from any other user interface sensory element configuration ([0035] The auxiliary storage device 104 stores an expression database (hereinafter, the term “database” is referred to as a DB) as shown in FIG. 4. In this expression DB, a sample ID and one or more expressions (that is, one or more expressions which expressed the smell of sample 60 by the character strings) about the smell stimulated by sample 60 corresponding to the sample ID, are recorded with their correspondence. That is, the expression is used as a means for communicating the smell of sample 60 to others when the user smells the smell. This expression may be an expression using any part of speech, such as a noun or adjective, and covers from direct to indirect expression of the smell. Here, the direct expression refers to an expression commonly used to recall an idea of a smell, for example, “sweet” or “fruity”, while the indirect expression refers to an expression that is not commonly used to recall an expression of a smell as compared with the direct expression described above, for example, “spring”, “morning” or “walking”. The indirect expression is a secondary expression that is recalled from a direct expression with respect to the direct expression, and may be an expression that abstractly represents a smell compared to the direct expression. Also see Figs. 8-13).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the textual expression of Kurisu with the teaching of Chan in view of YOSHIKAWA.
The suggestion/motivation for doing so would have been to provide a textual expression for the sensed element. For example, in the example of FIG. 8, the user can recognize that the smell of sample 60a is typically a smell expressed as “sweet” or “relaxation”, but also a component of the smell expressed as “flower” or “fruit”, and further an abstract event such as “spring” is associated from the smell (also see Kurisu, [0035] and [0048]).
Therefore, it would have been obvious to combine Chan in view of YOSHIKAWA with KURISU to obtain the invention as specified in claim 13.
As per claim 14, Chan in view of YOSHIKAWA and KURISU discloses the electronic device of claim 12, wherein a first user interface sensory element configuration of the plurality of user interface sensory element configurations contained in the user interface sensory element bundle is enhanced as a function of a first combination of a visual appearance preferred by the authorized user of the electronic device, an olfactory appearance preferred by the authorized user of the electronic device, an aural appearance preferred by the authorized user of the electronic device, a gustatory appearance preferred by the authorized user of the electronic device, and a haptic appearance preferred by the authorized user of the electronic device and diminished as a second combination of the visual appearance preferred by the authorized user of the electronic device, the olfactory appearance preferred by the authorized user of the electronic device, the aural appearance preferred by the authorized user of the electronic device, the gustatory appearance preferred by the authorized user of the electronic device, and the haptic appearance preferred by the authorized user of the electronic device (Kurisu, [0046] The user may select any of the samples 60a-60i to smell the smell and place sample 60 of the smell he or she prefers on first tray 50a on the sensing surface SA. Sample identifying means 11 of information processing device 10 identifies (in step S13) which sample 60 is placed on the sensing surface, based on the reading result (in step S13) of the sample ID by sensor device 40. [0048] For example, if sample 60a is placed on first tray 50a, as shown in FIG. 8, an expression of the smell of sample 60a is displayed within a fan shape of an arbitrary size centered on, for example, the sensing surface, i.e., the position where sample 60a is placed. The appearance of each expression at this time is an appearance corresponding to the relationship between the expression and sample 60g. The user can know how to express the smell of sample 60a while watching these expressions. At the same time, the user can also know the relationship between the smell of sample 60g and each expression with reference to the appearance of each expression. For example, in the example of FIG. 8, the user can recognize that the smell of sample 60a is typically a smell expressed as “sweet” or “relaxation”, but also a component of the smell expressed as “flower”, “flower” or “fruit”, and further an abstract event such as “spring” is associated from the smell).
As per claim 15, Chan in view of YOSHIKAWA and KURISU discloses the electronic device of claim 12, wherein the plurality of user interface sensory element configurations comprises: an eye-minded user interface sensory element configuration (Kurisu, [0007] The display control means may be configured to display, in the first display mode, the expression group relating to one object identified by the identifying means, and to display a relationship image, which is an image indicating a relationship between the selected expression and other object corresponding to the sense of smell or taste associated with the expression, in a case that any one of the displayed expression groups is selected by the user);
a smell-minded user interface sensory element configuration (Kurisu, [0055] as shown in FIG. 10, the expression relating to the smell of sample 60b is displayed inside the fan figure centered on the sensing surface (step S16));
an ear-minded user interface sensory element configuration (Kurisu, [0045] Although the position, shape, and size of the sensing surface by sensor device 40 are arbitrarily determined, the user knows in advance where the sensing surface is located in the user interface device 30, or informs the user of the method by means of display, voice guidance, or the like);
a taste-minded user interface sensory element configuration (Kurisu, [0010] a display control means that displays a group of expressions relating to the sense of smell or taste stimulated by the identified object);
a motor-minded user interface sensory element configuration (Kurisu, [0008] A plurality of objects corresponding to the sense of smell or taste may be placed on the user interface device. The identifying means may be configured to identify an object selected by the user among a plurality of objects that stimulate the user's sense of smell or taste. That is, olfactory stickers are activated (i.e., release their scent) when they are tapped or rubbed by the recipient); and
wherein a corresponding dominant sensory profile is written to metadata of each user interface sensory element configuration (Kurisu, [0051] For example, as illustrated in FIG. 9, if the expression “sweet” is selected by the user, an annular image surrounding the expression “sweet” is displayed, and an annular image surrounding the position of other samples (here, samples 60b, 60d, and 60g) represented by the expression “sweet” is displayed. By displaying such an annular image (related image), the user can know that there are samples 60b, 60d, and 60g in addition to sample 60a as the smell expressed as “sweet”).
As per claim 16, Chan in view of YOSHIKAWA and KURISU discloses the electronic device of claim 12, wherein the one or more processors are further configured to obtain, using the communication device, the dominant sensory profile associated with the authorized user of the electronic device and select a user interface sensory element configuration from the plurality of user interface sensory element configurations contained in the user interface sensory element bundle (Kurisu discloses a dominant sensory profile by highlighting the name or metadata of the object: [0051] For example, as illustrated in FIG. 9, if the expression “sweet” is selected by the user, an annular image surrounding the expression “sweet” is displayed, and an annular image surrounding the position of other samples (here, samples 60b, 60d, and 60g) represented by the expression “sweet” is displayed. By displaying such an annular image (related image), the user can know that there are samples 60b, 60d, and 60g in addition to sample 60a as the smell expressed as “sweet”).
As per claim 17, Chan in view of YOSHIKAWA and KURISU discloses the electronic device of claim 16, wherein the one or more processors are further configured to cause, using the communication device, the remote electronic device to render the user interface sensory element configuration selected from the plurality of user interface sensory element configurations contained in the user interface sensory element bundle (Chan, [0028] In general, embodiments of the invention may export the encoded scent 201 to a number of environments, including a user device 301, such as a desktop computer or mobile device, a database 201, or a remote location, such as a server 303. More specifically, the olfactory communication program 101 may export the encoded scent 201 to a remote location, such as a server 303 that can execute one or more services in connection with displaying the encoded scent 201. For example, the server 303 may execute a social networking service (“SNS”), an email delivery service, or a website production service).
As per claim 18, Chan in view of YOSHIKAWA and KURISU discloses a method (Chan, flowcharts of Figs. 4 and 5) in an electronic device (Chan, e.g., the device of Figs. 3 and 6), the method comprising:
creating, using one or more processors, a plurality of user interface sensory element configurations each enhancing a different sensory appearance when rendered on a remote electronic device (Chan, [0028] In general, embodiments of the invention may export the encoded scent 201 to a number of environments, including a user device 301, such as a desktop computer or mobile device, a database 201, or a remote location, such as a server 303. More specifically, the olfactory communication program 101 may export the encoded scent 201 to a remote location, such as a server 303 that can execute one or more services in connection with displaying the encoded scent 201. For example, the server 303 may execute a social networking service (“SNS”), an email delivery service, or a website production service);
pushing, by a communication device, the user interface sensory element bundle to one or more remote electronic devices across a network (Chan, [0041] At step 503, in the case of the computer system 200 not being configured for olfactory sensing, identifying one or more olfactory artefacts 102 may include identifying a location-based scent indication 106. The location-based scent indication 106 may be in the form of a wireless network signal, an RFID signal, or a NFC signal. In addition to identifying an olfactory artefact 102 by identifying a location-based scent indication 106, the olfactory communication program 101 may identify an olfactory artefact 102 by receiving stored data from a database 202. The database 202 may be accessible by a user 300 via a number of environments, including locally stored data on a user device 301, such as a desktop computer or mobile device, network 302, such as the Internet, or from a remote location, such as a server 303. At step 504, the olfactory communication program 101 may generate an olfactory artefact 102 based on the location-based scent indication 106).
Chan in view of YOSHIKAWA does not seem to disclose writing, by the one or more processors, an enhanced sensory appearance to metadata of each user interface sensory element configuration of the plurality of user interface sensory element configurations.
Kurisu, on the other hand, discloses the above claimed limitation. That is, Kurisu at [0035] discloses that the auxiliary storage device 104 stores an expression database (hereinafter, the term “database” is referred to as a DB) as shown in FIG. 4. In this expression DB, a sample ID and one or more expressions (that is, one or more expressions which expressed the smell of sample 60 by the character strings) about the smell stimulated by sample 60 corresponding to the sample ID, are recorded with their correspondence. That is, the expression is used as a means for communicating the smell of sample 60 to others when the user smells the smell. This expression may be an expression using any part of speech, such as a noun or adjective, and covers from direct to indirect expression of the smell.
Similarly, Kurisu also discloses compiling, by the one or more processors, the plurality of user interface sensory element configurations into a user interface sensory element bundle (see the plurality of user interface sensory element configurations in Figs. 8-13).
Before the effective filing date of the claimed invention, since Chan and KURISU are in the same field of endeavor, incorporating KURISU with Chan in view of YOSHIKAWA provides visual interaction with the displayed sensory profile/elements and enables a user to visually grasp (e.g., through highlighting the target element) an expression (e.g., sweet, fresh, etc.), and provides the smell expression of the highlighted element to the user (also see KURISU, [0043]).
Therefore, it would have been obvious to combine KURISU with Chan in view of YOSHIKAWA to obtain the invention as specified in claim 18.
As per claim 19, Chan in view of YOSHIKAWA and Kurisu discloses the method of claim 18, further comprising writing, by the one or more processors to the metadata, a dominant sensory profile associated with each user interface sensory element configuration of the plurality of user interface sensory element configurations (Kurisu, [0035] The auxiliary storage device 104 stores an expression database (hereinafter, the term “database” is referred to as a DB) as shown in FIG. 4. In this expression DB, a sample ID and one or more expressions (that is, one or more expressions which expressed the smell of sample 60 by the character strings) about the smell stimulated by sample 60 corresponding to the sample ID, are recorded with their correspondence. That is, the expression is used as a means for communicating the smell of sample 60 to others when the user smells the smell. This expression may be an expression using any part of speech, such as a noun or adjective, and covers from direct to indirect expression of the smell. Here, the direct expression refers to an expression commonly used to recall an idea of a smell, for example, “sweet” or “fruity”, while the indirect expression refers to an expression that is not commonly used to recall an expression of a smell as compared with the direct expression described above, for example, “spring”, “morning” or “walking”. The indirect expression is a secondary expression that is recalled from a direct expression with respect to the direct expression, and may be an expression that abstractly represents a smell compared to the direct expression).
As per claim 20, Chan in view of YOSHIKAWA and Kurisu discloses the method of claim 19, further comprising causing, by the one or more processors, the one or more remote electronic devices to render a user interface sensory element configuration selected from the user interface sensory element bundle (Kurisu, [0008] A plurality of objects corresponding to the sense of smell or taste may be placed on the user interface device. The identifying means may be configured to identify an object selected by the user among a plurality of objects that stimulate the user's sense of smell or taste).
9. Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Kratz et al. in view of YOSHIKAWA et al. and Kugimiya et al. (US 7,858,036 B2), and further in view of KURISU et al. (US 2023/0004279 A1).
As per claim 6, Kratz in view of YOSHIKAWA and Kugimiya discloses a plurality of user interface sensory element configurations (Kratz, e.g., various scents and tastes) that each comprise informational components comprising text (such as food smell, clean air smell, coffee smell, smoke-candle smell, and food-bread-baking smell; see Figs. 6 and 8).
But Kratz in view of YOSHIKAWA and Kugimiya is silent in describing that the text of each user interface sensory element configuration has one or both of adjectives and/or adverbs in the text that differs from each other user interface sensory element configuration of the plurality of user interface sensory element configurations, with those one or both of the adjectives and/or the adverbs enhancing a characteristic associated with at least one user interface element and diminishing another characteristic associated with at least one other user interface element.
KURISU, on the other hand, discloses ([0005]) a mechanism capable of grasping the relationship between the smell or taste of an object and the expression of the smell or taste, and of grasping what smell or taste the user himself/herself likes.
KURISU discloses a user interface (e.g., Figs. 8-13) showing text for each user interface sensory element. The relationship between the smell or taste of an object and the expression of the smell or taste can be grasped, and the user can grasp what smell or taste they prefer. In the single-sample display mode, the display control means displays a group of expressions relating to the olfactory sense stimulated by the sample identified by the sample identifying means, and if any expression is selected by the user, displays a relationship image indicating the relationship between the selected expression and other samples corresponding to the olfactory sense associated with the expression. In addition, in the multiple-sample display mode, the display control means displays, for each of plural samples specified by the sample identifying means, a group of expressions relating to the sense of smell stimulated by each sample, and displays the expression common to plural samples among the group of expressions so as to be distinguishable from the expressions not common to plural samples (see the Abstract). Also see [0051]: For example, as illustrated in FIG. 9, if the expression “sweet” is selected by the user, an annular image surrounding the expression “sweet” is displayed, and an annular image surrounding the position of other samples (here, samples 60b, 60d, and 60g) represented by the expression “sweet” is displayed. By displaying such an annular image (related image), the user can know that there are samples 60b, 60d, and 60g in addition to sample 60a as the smell expressed as “sweet”. Also see [0049] and Figs. 8-13.
Before the effective filing date of the claimed invention, since Kratz in view of YOSHIKAWA and Kugimiya discloses the senses of hearing, sight, smell, and touch, incorporating the teaching of KURISU with Kratz in view of YOSHIKAWA and Kugimiya enables a user to visually grasp the relationship between the smell of an object and an expression of the smell, and provides information for grasping what kind of smell the user himself/herself prefers (also see KURISU, [0043]).
Therefore, it would have been obvious to combine KURISU with Kratz in view of YOSHIKAWA and Kugimiya to obtain the invention as specified in claim 6.
As per claim 7, Kratz in view of YOSHIKAWA, Kugimiya, and KURISU further discloses the method of claim 6, wherein the one or both of the adjectives and/or the adverbs are encoded into metadata associated with each sensory element configuration (KURISU, [0035] The auxiliary storage device 104 stores an expression database (hereinafter, the term “database” is referred to as a DB) as shown in FIG. 4. In this expression DB, a sample ID and one or more expressions (that is, one or more expressions which expressed the smell of sample 60 by the character strings) about the smell stimulated by sample 60 corresponding to the sample ID, are recorded with their correspondence. That is, the expression is used as a means for communicating the smell of sample 60 to others when the user smells the smell. This expression may be an expression using any part of speech, such as a noun or adjective, and covers from direct to indirect expression of the smell. Here, the direct expression refers to an expression commonly used to recall an idea of a smell, for example, “sweet” or “fruity”, while the indirect expression refers to an expression that is not commonly used to recall an expression of a smell as compared with the direct expression described above, for example, “spring”, “morning” or “walking”. The indirect expression is a secondary expression that is recalled from a direct expression with respect to the direct expression, and may be an expression that abstractly represents a smell compared to the direct expression).
Allowable Subject Matter
10. Claims 9-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. For example, “retrieving, by the one or more processors, a user sensory preference reaction score from a user profile stored in a memory of the remote electronic device,” as recited in claim 9, appears allowable over the prior art of record if rewritten in independent form.
Conclusion
11. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU whose telephone number is (571)272-4051; and the email address is Tadesse.hailu@USPTO.GOV. The examiner can normally be reached Monday- Friday 9:30-5:30 (Eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William L. Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TADESSE HAILU/Primary Examiner, Art Unit 2174