Detailed Action
Notice of Pre-AIA or AIA status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements filed on October 11, 2024, and January 6, 2025 comply with the provisions of 37 C.F.R. §§ 1.97 and 1.98 and MPEP § 609, and therefore have been placed in the application file. The information referred to therein has been considered as to the merits.
Claim Objections
The Office objects to claims 6 and 12 for having the following informalities. Appropriate correction is required.
Claim 6
In claim 6, the phrase “in response to duration of the press operation being less than a first threshold” is missing an article before the word “duration.” The word “the” must be inserted in front of the word “duration.” Note that the definite article “the” is more appropriate in this context, because all press operations inherently have a duration. See MPEP § 2173.05(e).
It is also unclear why “the second location is any blank location in the first group chat interface” is recited within the “in response to” element, while the description of the first location is not.
Additionally, the repetition of the phrase “displaying the second group chat interface in response to the first operation” raises a question of double inclusion, since claim 6 already incorporates “displaying the second group chat interface in response to the first operation” by reference.
Finally, the phrase “the any second user” should be amended to more clearly correspond to the second user whose identifier is selected at the first location.
In sum, claim 6 should be amended as follows to resolve all informalities:
6. The chat interface creation method according to claim 1, wherein:
the first operation is a press operation for a first location and a second location in the first group chat interface,
the first location is a location that is of an identifier of a[[ny]] second user in the first chat group and that is in the first group chat interface,
the second location is any blank location in the first group chat interface,
the duration of the press operation [[being]] is less than a first threshold, and
the second chat group corresponding to the second group chat interface is a chat group comprising the first user and the [[any]] second user.
Claim 12
The language of claim 12 is difficult to parse, for a few reasons.
(1) The claim language implies that the person termed the “any second user” is simultaneously in the first chat group and not in the first chat group, because the claim language describes him as someone whose identifier is “in the first group chat interface,” yet also describes him as someone “other than the one or more second users in the first chat group.”
(2) It is unclear whether the person termed the “any second user” could include the first user (since the first user is someone other than the one or more second users whose identifier is displayed in the first chat group), and if so, it is further unclear whether the claim language allows the slide operation to pass through and end with the first user, based on the list of people who claim 12 allows to be included in the slide operation.
(3) The language of claim 12 appears to be inconsistent with the language of its parent claim 8 with respect to the sliding track passing through “locations of identifiers,” rather than simply passing through the identifiers themselves. Unlike claim 12, claim 8 does not use the extra “locations of” language.
(4) In general, the number of entities sharing the name “second user” makes the claim language difficult to follow, particularly when the term “second user” is invoked to refer back to earlier recitations thereof. See MPEP § 2173.05(e) (“if two different levers are recited earlier in the claim, the recitation of ‘said lever’ in the same or subsequent claim would be unclear where it is uncertain which of the two levers was intended.”). When read together with its parent claim 8, there are at least three classes of “second users,” and it is unclear if or how the classes of second users overlap (recall that in addition to the two classes of “second users” in claim 12, we also incorporate the “at least one second user” recited in claim 8, by reference).
Claim Rejections – 35 U.S.C. § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
I. Jung discloses claims 1–5, 8, 12–18, and 20.
Claims 1–5, 8, 12–18, and 20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2015/0015514 A1 (“Jung”).
Claim 1
Jung discloses
A chat interface creation method, wherein the chat interface creation method is applied to an electronic device comprising a touchscreen, and the chat interface creation method comprises:
“FIG. 5 illustrates a method of a group communication in a portable terminal.” Jung ¶ 84. Specifically, the portable terminal that performs this method includes a “touch screen 140.” Jung ¶ 22 (referring to FIG. 1).
displaying a first group chat interface on the touchscreen, wherein the first group chat interface is a chat interface of a first chat group,
“Referring to FIG. 5, in step 501, the controller 150 controls the execution of a messenger corresponding to the selection of the user. In step 503, the controller 150 controls the display of group talk screen of the group selected from the user in the executed messenger.” Jung ¶ 85.
and the first chat group comprises a first user who enters the first group chat interface on the electronic device;
“As shown in FIG. 2A, the user performs a selective group communication with a specific some other users among other users for a talk belonging to a current group when the group talk screen 210 is displayed.” Jung ¶ 50. In other words, the group talk screen 210 displays a group communication between the user who selected the group in the executed messenger application, and a plurality of other users.
receiving a first operation in the first group chat interface;
“In step 505, the controller 150 detects whether a touch event for the selective group communication is inputted from the user when the group talk screen is displayed.” Jung ¶ 86.
Please note that while the touch event from step 505 is cited for the claimed “first operation” in this claim’s rejection, the claimed “first operation” is broad enough to further include several other touch events described in Jung’s disclosure, such as the touch events in step 509. See Jung ¶ 88. However, for the sake of simplicity and readability, this rejection will defer discussion of those additional touch events until they are affirmatively recited in the dependent claims. For example, dependent claims 3–5 read on the 513→517→519 branch of Jung’s method, whereas other dependent claims read on the 513→515 branch of Jung’s method.
and displaying a second group chat interface based on the first operation, wherein the second group chat interface is a chat interface of a second chat group, and both the second chat group and the first chat group comprise the first user.
As shown in FIG. 5, the touch event detected in step 505 directs the controller 150 to collect additional information from the user in steps 509–513 and/or 517, ultimately causing the controller 150 to perform either step 515 or step 519. Both steps 515 and 519 perform the same task of executing a specified type of group communication from among a plurality of potential types of group communication, one of which is a “group talk” function; the only difference between these two branches is the timing of when the user specifies the type of group communication desired. See Jung ¶¶ 91–93.
In any case, regardless of whether the portable terminal executes step 515 or 519, when either one of those steps decides to perform the “group talk” operation, “the controller 150 controls a corresponding operation so that the group talk with the other user selected by the user may be performed when the touch event is released. According to an embodiment, the controller 150 controls to execute a selective group talk while maintaining a current group talk screen without changing the screen.” Jung ¶ 42.
Claim 2
Jung discloses the chat interface creation method according to claim 1,
wherein the first operation is a press operation for a first location and a second location in the first group chat interface,
The touch events to trigger a selective group communication include a “touch event to the input message written by the user himself on the group talk screen” (i.e., the claimed second location), which is also “moved toward the other users’ location on the group talk screen 210 and is maintained in order to select the other users” (i.e., the claimed first location). Jung ¶¶ 86–88.
the first location is a location that is of an identifier of any second user in the first chat group and that is in the first group chat interface,
“As shown in FIG. 2B, in order to select a first other user (e.g., user C) for the selective group communication, the user moves (e.g., drag) the touch event (e.g., a long press) input to the first area 220 to a second area 230 in which a message written by the first other user (e.g., user C) is located.” Jung ¶ 52.
and in response to duration of the press operation being greater than a first threshold, the displaying a second group chat interface based on the first operation comprises: displaying a member selection interface in response to the first operation,
“When the touch event of the user for the selective group communication mode is detected, the controller 150 provides a visual effect to display the activation of the selective group communication mode.” Jung ¶ 87. “In the present invention, the touch input for starting (i.e., activating a group communication mode) a group communication includes various input forms that can be set by the user such as a long press input.” Jung ¶ 86.
wherein the member selection interface comprises an identifier of a member user in the first chat group;
As shown in FIGS. 2B and 2C, while in this state, the group talk screen 210 continues to display the other users A–C, while adding emphasis to areas of the screen 210 that represent the currently-selected members for a selective group communication.
receiving, in the member selection interface, a second operation for selecting at least one second user;
“In step 509, the controller 150 identifies the touch event for selecting other user from the user. For instance, when the touch event of the user is moved toward the other users' location on the group talk screen 210 and is maintained in order to select the other users (e.g., user C, user A), the controller 150 detects a corresponding interrupt as an input for the selection of the other user(s) for the group communication. A multi touch method can be used as a user input for selecting a relevant person for group communication in the group communication mode. For instance, when at least one other user (e.g., user A and/or user C) is selected by another input means such as a finger or a stylus when the touch event inputted to the message written by the user is maintained, the controller 150 detects a corresponding interrupt as an input for the selection of another user for the group communication.” Jung ¶ 88.
and displaying the second group chat interface based on the second operation, wherein the second chat group corresponding to the second group chat interface is a chat group comprising the first user and the at least one second user selected through the second operation.
“[T]he controller 150 controls a corresponding operation so that the group talk with the other user selected by the user may be performed when the touch event is released. According to an embodiment, the controller 150 controls to execute a selective group talk while maintaining a current group talk screen without changing the screen.” Jung ¶ 42.
Claim 3
Jung discloses the chat interface creation method according to claim 2,
wherein the member selection interface further comprises a first control,
“In step 517, when the preset group communication method is not the automatic execution method, that is, when the preset group communication method is the manual execution method, the controller 150 displays the group communication menu 250 for selection of group communication on the group talk screen 210.” Jung ¶ 92; see also Jung ¶ 59 (further describing the menu 250, which is illustrated in FIG. 2E).
the first control is used to indicate to create a chat task,
“In the present invention, the group communication menu 250 includes a menu” with “group talk” as one of the menu choices. Jung ¶ 92.
and the chat interface creation method further comprises: receiving a third operation for the first control in the member selection interface;
A “touch event is detected in the group communication menu 250 in step 517.” Jung ¶ 93.
and the displaying the second group chat interface based on the second operation comprises: displaying the second group chat interface in response to the second operation and the third operation.
“In step 519, when the touch event is detected in the group communication menu 250 in step 517, the controller 150 controls the operation corresponding to the menu in which the touch event is detected. For example, the controller 150 controls to perform the group talk with the other users selected by the user when the menu in which the touch event is detected is the group talk.” Jung ¶ 93.
Claim 4
Jung discloses the chat interface creation method according to claim 2,
wherein the member selection interface further comprises a second control,
“In step 517, when the preset group communication method is not the automatic execution method, that is, when the preset group communication method is the manual execution method, the controller 150 displays the group communication menu 250 for selection of group communication on the group talk screen 210.” Jung ¶ 92; see also Jung ¶ 59 (further describing the menu 250, which is illustrated in FIG. 2E).
the second control is used to indicate to create a chat task and a file sending task, a file type of a file comprises any file type in a document file, an image file, or a media file,
“In the present invention, the group communication menu 250 includes a menu such as . . . the group talk [or] the data (e.g., an image or a video) transmission.” Jung ¶ 92.
and the chat interface creation method further comprises: receiving a fourth operation for the second control in the member selection interface;
A “touch event is detected in the group communication menu 250 in step 517.” Jung ¶ 93.
and the displaying the second group chat interface based on the second operation comprises: displaying the second group chat interface in response to the second operation and the fourth operation, wherein the second group chat interface comprises the file.
“In step 519, when the touch event is detected in the group communication menu 250 in step 517, the controller 150 controls the operation corresponding to the menu in which the touch event is detected.” Jung ¶ 93. “If the selected menu is the data transmission, the controller 150 controls to transmit data to the other users selected by the user, . . . displays a gallery screen so as to transmit data, and controls to transmit data selected according to the touch event of the user to the other user selected by the user. The controller 150 performs the selective data transmission for at least one other user selected by the user, not the entire data transmission for all other users who are joining in the group talk.” Jung ¶ 102.
Claim 5
Jung discloses the chat interface creation method according to claim 2,
wherein the member selection interface further comprises a third control,
“In step 517, when the preset group communication method is not the automatic execution method, that is, when the preset group communication method is the manual execution method, the controller 150 displays the group communication menu 250 for selection of group communication on the group talk screen 210.” Jung ¶ 92; see also Jung ¶ 59 (further describing the menu 250, which is illustrated in FIG. 2E).
the third control is used to indicate to create a chat task and a video/audio call task,
“In the present invention, the group communication menu 250 includes a menu such as the group call [or] the group talk” options. Jung ¶ 92.
and the chat interface creation method further comprises: receiving a fifth operation for the third control in the member selection interface;
A “touch event is detected in the group communication menu 250 in step 517.” Jung ¶ 93.
and the displaying the second group chat interface based on the second operation comprises: displaying the second group chat interface in response to the second operation and the fifth operation, wherein the second group chat interface is an interface for initiating a video/audio call to the member user selected through the second operation.
“If the selected menu is the group call, the controller 150 connects the selective group call with the other users selected by the user,” rather than the entire group. Jung ¶ 99. “For example, as shown in FIG. 2G, the controller 150 displays a screen 270 indicating that the group call is being connected between the user (me), the first other user (e.g., user C), and the second other user (e.g., user A)). Accordingly, the user performs the selective group call with other users (e.g., user A and user C) selected by the user. Thus, the present invention can execute the selective group call with at least one other user selected by the user, instead of the entire group call with all other users who are joining in the group talk.” Jung ¶ 62.
Claim 8
Jung discloses the chat interface creation method according to claim 1,
wherein the first operation is a slide operation, a sliding track of the slide operation comprises an identifier of at least one second user,
“As shown in FIG. 2B, in order to select a first other user (e.g., user C) for the selective group communication, the user moves (e.g., drag) the touch event (e.g., a long press) input to the first area 220 to a second area 230 in which a message written by the first other user (e.g., user C) is located.” Jung ¶ 52.
and the displaying a second group chat interface based on the first operation comprises: displaying the second group chat interface in response to the first operation,
“In step 515, when the preset group communication method is the automatic execution method, the controller 150 controls to execute a specific group communication that is previously set by the user.” Jung ¶ 91.
wherein the second chat group corresponding to the second group chat interface is a chat group comprising the first user and the at least one second user.
“For example, as shown in FIG. 3B, the controller 150 displays group talk target information 325 on a talk input window 320 so as to perform the group talk with other users (e.g., A and C) selected by the user. The controller 150 displays the group talk target information 325 such as ‘To: A, C’ on the talk input window 320 so as to feedback to the other users A and C that the message according to the group talk is transmitted. The group talk target information 325 includes an item (e.g., an image, a text, or an emoticon) displayed on the talk input window 320 so as to support and feedback the message transmission for the group talk in the group to some other users selected by the user from among the current talk group (group talk screen 210).” Jung ¶ 67.
Claim 12
Jung discloses the chat interface creation method according to claim 8,
wherein the slide operation passes through at least one of one or more locations of identifiers of one or more second users in the first chat group or an identifier of the first user in the first group chat interface, and ends at a location that is in the first group chat interface and that is of an identifier of any second user other than the one or more second users in the first chat group;
“As shown in FIG. 2B, in order to select a first other user (e.g., user C) for the selective group communication, the user moves (e.g., drag) the touch event (e.g., a long press) input to the first area 220 to a second area 230 in which a message written by the first other user (e.g., user C) is located.” Jung ¶ 52. “As shown in FIG. 2C, the user may additionally select a second other user (e.g., user A) when the first other user (e.g., user C) is selected as the relevant person for the group communication. As described above, the user may continuously select the second other user (e.g., user A), and a third other user based on the method of selecting the first other user (e.g., user C) in the first area 220.” Jung ¶ 54.
and the at least one second user comprises the one or more second users and the any second user.
“In step 515, when the preset group communication method is the automatic execution method, the controller 150 controls to execute a specific group communication that is previously set by the user.” Jung ¶ 91.
Claim 13
Jung discloses the chat interface creation method according to claim 1,
wherein the second group chat interface comprises a chat record that is of a member user in the second chat group and that is in the first chat group.
“For example, as shown in FIG. 3B, the controller 150 displays group talk target information 325 on a talk input window 320 so as to perform the group talk with other users (e.g., A and C) selected by the user. The controller 150 displays the group talk target information 325 such as ‘To: A, C’ on the talk input window 320 so as to feedback to the other users A and C that the message according to the group talk is transmitted. The group talk target information 325 includes an item (e.g., an image, a text, or an emoticon) displayed on the talk input window 320 so as to support and feedback the message transmission for the group talk in the group to some other users selected by the user from among the current talk group (group talk screen 210).” Jung ¶ 67.
Claims 14–18
Claims 14–18 recite an electronic device with general-purpose computer components that performs the same method recited in corresponding claims 1–5. Therefore, claims 14–18 are rejected over the same findings and rationale as provided above for claims 1–5, which include findings that show the prior art implemented its method on the same electronic device.
Claim 20
Claim 20 recites a computer readable storage medium with the same instructions that the memory of claim 14 has stored thereon. Since the memory of claim 14 is a species of the broader genus of computer readable storage media recited in claim 20, claim 20 is rejected for the same reasons.
II. Chang discloses claims 1, 8–10, 13, 14, and 20.
Claims 1, 8–10, 13, 14, and 20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2013/0069969 (“Chang”).
Claim 1
Chang discloses:
A chat interface creation method, wherein the chat interface creation method is applied to an electronic device comprising a touchscreen, and the chat interface creation method comprises:
“FIG. 5 is a flowchart illustrating a method of coupling chat windows in the mobile terminal 100 according to a first exemplary embodiment of the present invention. Further, FIGS. 6 to 10 illustrate a method of coupling chat windows of FIG. 5.” Chang ¶ 128. Among other things, the mobile terminal 100 that implements the method of FIG. 5 includes “a display module 151 which may be a touch screen.” Chang ¶ 56.
displaying a first group chat interface on the touchscreen, wherein the first group chat interface is a chat interface of a first chat group, and the first chat group comprises a first user who enters the first group chat interface on the electronic device;
“Referring to FIG. 5, the controller 180 controls the touch screen 151 to display a chat window for displaying a message transmitted and received as a group between the user of the mobile terminal 100 and a plurality of another parties based on the user's control input (S101).” Chang ¶ 129.
receiving a first operation in the first group chat interface;
Next, “a control input for requesting generation of a private chat area corresponding to one of the plurality of another parties while group chatting is received (S102).” Chang ¶ 130.
and displaying a second group chat interface based on the first operation,
In response to receiving the control input in S102, “the controller 180 controls the touch screen 151 to display a private chat area within a chat window (S103).” Chang ¶ 130.
wherein the second group chat interface is a chat interface of a second chat group,
“Here, the private chat area is an area for displaying a private message transmitted or received one-on-one between the user of the mobile terminal 100 and a specific another party.” Chang ¶ 131.
and both the second chat group and the first chat group comprise the first user.
Much like the private chat area displays private messages transmitted or received “between the user of the mobile terminal 100 and a specific another party,” the main “group chat area” displays “group message[s] transmitted or receiving as a group between the user of the mobile terminal 100 and the plurality of another parties.” Chang ¶ 131.
Claim 8
Chang discloses the chat interface creation method according to claim 1,
wherein the first operation is a slide operation, a sliding track of the slide operation comprises an identifier of at least one second user,
“At step S102, the control input for requesting generation of a private chat area within a group chat window may be received through various methods,” Chang ¶ 132, one of which includes, “when a message received from a specific another party among messages displayed within the group chat window is dragged to a message transmitted by the user of the mobile terminal 100.” Chang ¶ 136.
and the displaying a second group chat interface based on the first operation comprises: displaying the second group chat interface in response to the first operation,
“Thereafter, when a control input for requesting generation of a private chat area corresponding to one of the plurality of another parties while group chatting is received (S102), the controller 180 controls the touch screen 151 to display a private chat area within a chat window (S103).” Chang ¶ 130.
wherein the second chat group corresponding to the second group chat interface is a chat group comprising the first user and the at least one second user.
“Here, the private chat area is an area for displaying a private message transmitted or received one-on-one between the user of the mobile terminal 100 and a specific another party.” Chang ¶ 131. The “specific another party” in this case is the user who sent the message that was dragged to the user’s own message. Chang ¶¶ 136 and 159–160 (referring to FIGS. 9(a)–9(b)).
Claim 9
Chang discloses the chat interface creation method according to claim 8,
wherein the slide operation passes through a location of the identifier of the at least one second user in the first group chat interface, and ends at a location of an identifier of the first user in the first group chat interface.
“[W]hen a message received from a specific another party among messages displayed within the group chat window is dragged to a message transmitted by the user of the mobile terminal 100, the controller 180 may receive the control input for requesting generation of a private chat area for the specific another party within the group chat window.” Chang ¶ 136.
Claim 10
Chang discloses the chat interface creation method according to claim 9,
wherein the identifier of the at least one second user moves with sliding of the slide operation in the first group chat interface.
As shown in FIG. 9(a), the dragging operation moves the entire icon that identifies BBB’s message GM1. Chang FIG. 9(a).
Claim 13
Chang discloses the chat interface creation method according to claim 1,
wherein the second group chat interface comprises a chat record that is of a member user in the second chat group and that is in the first chat group.
“According to the present invention, when the private chat area A2 is generated, the group message GM1 and the group message GM2 may be displayed within the private chat area A2.” Chang ¶ 162.
Claim 14
Claim 14 recites an electronic device with general-purpose computer components that performs the same method recited in corresponding claim 1. Therefore, claim 14 is rejected over the same findings and rationale as provided above for claim 1, which include findings that show the prior art implemented its method on the same electronic device.
Claim 20
Claim 20 recites a computer readable storage medium with the same instructions that the memory of claim 14 has stored thereon. Since the memory of claim 14 is a species of the broader genus of computer readable storage media recited in claim 20, claim 20 is rejected for the same reasons.
III. Yoon discloses claims 1, 8, 13, 14, and 20.
Claims 1, 8, 13, 14, and 20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2014/0068468 (“Yoon”).
Claim 1
Yoon discloses:
A chat interface creation method,
“FIG. 4 is a diagram illustrating an example of a user interface to explain a method for adding a participant to a subgroup of a conversation group.” Yoon ¶ 62.
wherein the chat interface creation method is applied to an electronic device comprising a touchscreen, and the chat interface creation method comprises:
As shown in FIG. 3, the method of FIG. 4 is performed by a device with an input unit, determining unit, processing unit, and message DB. See Yoon ¶¶ 55–56 and FIG. 3. “The user input 110 may include or may be associated with various types of input interfaces, such as a touch input display.” Yoon ¶ 35.
displaying a first group chat interface on the touchscreen, wherein the first group chat interface is a chat interface of a first chat group,
As shown in FIG. 4 (and also in FIG. 2A, for example), the device provides for “an area in which a group conversation window is displayed, and may display the group conversation speech bubbles 220.” Yoon ¶ 47 (note that the speech bubbles are labeled as 421–425 in FIG. 4).
and the first chat group comprises a first user who enters the first group chat interface on the electronic device;
“Each of the group conversation speech bubble 220 may present a message delivered by each of participants of the group conversation.” Yoon ¶ 48.
receiving a first operation in the first group chat interface;
“Referring to FIG. 4, the method for adding a participant to a subgroup of a conversation group includes selecting a specific participant's speech bubble from among speech bubbles displayed in a first area 410 and dragging the selected specific speech bubble to a second area 440 in operation 401.” Yoon ¶ 63.
and displaying a second group chat interface based on the first operation, wherein the second group chat interface is a chat interface of a second chat group, and both the second chat group and the first chat group comprise the first user.
Responsive to a subsequent dragging input in operation 601 (FIG. 6), “subgroup participants exchange messages on the newly-generated conversation window for the subgroup in operation 604. That is, the subgroup participants may exchange messages on the subgroup conversation window of the second area 640 as well as the group conversation window of the first area 610.” Yoon ¶ 77. Displaying this new subgroup conversation window is “based on the first operation” (i.e., operation 401 mentioned above) because it comprises the participant(s) who was/were dragged to the second area 440 in operation 401.
Claim 8
Yoon discloses the chat interface creation method according to claim 1,
wherein the first operation is a slide operation, a sliding track of the slide operation comprises an identifier of at least one second user,
“Referring to FIG. 4, the method for adding a participant to a subgroup of a conversation group includes selecting a specific participant's speech bubble from among speech bubbles displayed in a first area 410 and dragging the selected specific speech bubble to a second area 440 in operation 401.” Yoon ¶ 63.
and the displaying a second group chat interface based on the first operation comprises: displaying the second group chat interface in response to the first operation, wherein the second chat group corresponding to the second group chat interface is a chat group comprising the first user and the at least one second user.
Responsive to a subsequent dragging input in operation 601 (FIG. 6), “subgroup participants exchange messages on the newly-generated conversation window for the subgroup in operation 604. That is, the subgroup participants may exchange messages on the subgroup conversation window of the second area 640 as well as the group conversation window of the first area 610.” Yoon ¶ 77. Displaying this new subgroup conversation window is “in response to the first operation” (i.e., operation 401 mentioned above) because Yoon’s second window is displayed in response to both operation 401 and operation 601, where the additional element of operation 601 falls within the open-ended “comprising” scope of the claim language. See MPEP § 2111.03.
Claim 13
Yoon discloses the chat interface creation method according to claim 1,
wherein the second group chat interface comprises a chat record that is of a member user in the second chat group and that is in the first chat group.
“[S]ubgroup participants exchange messages on the newly-generated conversation window for the subgroup in operation 604. That is, the subgroup participants may exchange messages on the subgroup conversation window of the second area 640 as well as the group conversation window of the first area 610.” Yoon ¶ 77.
Claim 14
Claim 14 recites an electronic device with general-purpose computer components that performs the same method recited in corresponding claim 1. Therefore, claim 14 is rejected over the same findings and rationale as provided above for claim 1, which include findings that show the prior art implemented its method on the same electronic device.
Claim 20
Claim 20 recites a computer readable storage medium with the same instructions that the memory of claim 14 has stored thereon. Since the memory of claim 14 is a species of the broader genus of computer readable storage media recited in claim 20, claim 20 is rejected for the same reasons.
Claim Rejections – 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were effectively filed absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned at the time a later invention was effectively filed in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.
I. Yoon and Westerman teach claims 6 and 19.
Claims 6 and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over Yoon as applied to claims 1 and 14 above, and further in view of U.S. Patent Application Publication No. 2008/0168403 A1 (“Westerman”).
Claim 6
Yoon teaches the chat interface creation method according to claim 1,
wherein the first operation is a press operation for a first location and a second location in the first group chat interface, the first location is a location that is of an identifier of any second user in the first chat group and that is in the first group chat interface,
“Referring to FIG. 4, the method for adding a participant to a subgroup of a conversation group includes selecting a specific participant's speech bubble from among speech bubbles displayed in a first area 410 and dragging the selected specific speech bubble” in operation 401. Yoon ¶ 63.
and in response to duration of the press operation being less than a first threshold, displaying the second group chat interface in response to the first operation, wherein the second location is any blank location in the first group chat interface,
If the dragging ends by “dragging the selected specific speech bubble to a second area 440,” then operation 401 ends by selecting that participant for a separate chat. Yoon ¶ 63. FIG. 4 further teaches that the second area 440 is blank during operation 401. See Yoon FIG. 4.
wherein the second chat group corresponding to the second group chat interface is a two-person chat group comprising the first user and the any second user.
Responsive to a subsequent dragging input in operation 601 (FIG. 6), “subgroup participants exchange messages on the newly-generated conversation window for the subgroup in operation 604. That is, the subgroup participants may exchange messages on the subgroup conversation window of the second area 640 as well as the group conversation window of the first area 610.” Yoon ¶ 77. Displaying this new subgroup conversation window is “in response to the first operation” (i.e., operation 401 mentioned above) because Yoon’s second window is displayed in response to both operation 401 and operation 601, where the additional element of operation 601 falls within the open-ended “comprising” scope of the claim language. See MPEP § 2111.03.
Yoon does not appear to explicitly disclose testing whether the duration of operation 401 is “less than a first threshold.”
Westerman, however, teaches a technique for recognizing gestures, whereby “[a] time component can optionally be attached” to the gesture, so that the entire gesture “can be required to be completed within a certain period of time (e.g. a few seconds), otherwise the gesture will be rejected.” Westerman ¶ 153.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to improve Yoon’s operation 401 with Westerman’s requirement that the gesture in operation 401 be completed within a certain period of time. One would have been motivated to improve Yoon with Westerman’s requirement, because Westerman’s requirement helps discern the user’s intent, by having the user demonstrate confidence in the input (i.e., a user who is unsure of the operation would take longer to perform the gesture).
Claim 19
Claim 19 recites an electronic device with general-purpose computer components that performs the same method recited in corresponding claim 6. Therefore, claim 19 is rejected over the same findings and rationale as provided above for claim 6, which include findings that show the prior art implemented its method on the same electronic device.
II. Jung and Jiang teach claim 7.
Claim 7 is rejected under 35 U.S.C. § 103 as being unpatentable over Jung as applied to claim 1 above, and further in view of U.S. Patent Application Publication No. 2017/0109011 A1 (“Jiang”).
Claim 7
Jung teaches the chat interface creation method according to claim 1,
wherein the first operation is a press operation for any two blank locations in the first group chat interface,
The touch events to trigger a selective group communication include a “touch event to the input message written by the user himself on the group talk screen” (i.e., the claimed second location), which is also “moved toward the other users’ location on the group talk screen 210 and is maintained in order to select the other users” (i.e., the claimed first location). Jung ¶¶ 86–88.
and the displaying a second group chat interface based on the first operation comprises: displaying a member selection interface in response to the first operation, wherein the member selection interface comprises an identifier of a member user in the first chat group;
“When the touch event of the user for the selective group communication mode is detected, the controller 150 provides a visual effect to display the activation of the selective group communication mode.” Jung ¶ 87.
receiving, in the member selection interface, a second operation for selecting at least one second user;
“In step 509, the controller 150 identifies the touch event for selecting other user from the user. For instance, when the touch event of the user is moved toward the other users' location on the group talk screen 210 and is maintained in order to select the other users (e.g., user C, user A), the controller 150 detects a corresponding interrupt as an input for the selection of the other user(s) for the group communication. A multi touch method can be used as a user input for selecting a relevant person for group communication in the group communication mode. For instance, when at least one other user (e.g., user A and/or user C) is selected by another input means such as a finger or a stylus when the touch event inputted to the message written by the user is maintained, the controller 150 detects a corresponding interrupt as an input for the selection of another user for the group communication.” Jung ¶ 88.
and displaying the second group chat interface based on the second operation, wherein the second chat group corresponding to the second group chat interface is a group comprising the first user and the at least one second user selected through the second operation.
“[T]he controller 150 controls a corresponding operation so that the group talk with the other user selected by the user may be performed when the touch event is released. According to an embodiment, the controller 150 controls to execute a selective group talk while maintaining a current group talk screen without changing the screen.” Jung ¶ 42.
The only difference between Jung and the claimed invention is the substitution of Jung’s “touch event to the input message written by the user himself on the group talk screen” with a “press operation for any two blank locations in the first group chat interface.”
However, the substituted component, and its function of activating a second interface, was known in the art prior to the effective filing date of the claimed invention. Specifically, Jiang teaches a touchscreen gesture in which, “two fingers double-clicking the blank space[] causes the appearance of related application icons for the user to choose from.” Jiang ¶ 247.
Furthermore, one of ordinary skill in the art could have substituted one known element for another, and the results of the substitution would have been predictable, because the substitution does not require any alteration to the touchscreen hardware; the skilled artisan simply programs the software in Jung’s device to recognize the gesture from Jiang’s disclosure.
In view of all of the above findings, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to substitute Jung’s known touch event gesture with Jiang’s two finger double click of a blank space.
III. Jung and Yoon teach claim 11.
Claim 11 is rejected under 35 U.S.C. § 103 as being unpatentable over Jung as applied to claim 8 above, and further in view of Yoon.
Claim 11
Jung teaches the chat interface creation method according to claim 8,
wherein the slide operation passes through locations of the identifier of the at least one second user and an identifier of the first user in the first group chat interface, and ends at any blank location in the first group chat interface.
“As shown in FIG. 2B, in order to select a first other user (e.g., user C) for the selective group communication, the user moves (e.g., drag) the touch event (e.g., a long press) input to the first area 220 to a second area 230 in which a message written by the first other user (e.g., user C) is located.” Jung ¶ 52. Importantly, the first area 220 is an area “in which a message written and entered by the user exists on the group talk screen 210.” Jung ¶ 50.
Jung does not appear to explicitly disclose ending its touch event at any blank location in the first group chat interface.
Yoon, however, also teaches the chat interface creation method according to claims 1 and 8 (per the 35 U.S.C. § 102 rejection above), and further teaches:
the slide operation passes through locations of the identifier of the at least one second user
“Referring to FIG. 4, the method for adding a participant to a subgroup of a conversation group includes selecting a specific participant's speech bubble from among speech bubbles displayed in a first area 410 and dragging the selected specific speech bubble” in operation 401. Yoon ¶ 63.
and ends at any blank location in the first group chat interface.
Operation 401 concludes after “dragging the selected specific speech bubble to a second area 440.” Yoon ¶ 63. FIG. 4 further teaches that the second area 440 is blank during operation 401. See Yoon FIG. 4.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the gesture from Jung with the gesture from Yoon, according to their known methods of implementing a complete gesture. As the MPEP explains, a claim is obvious when “all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art.” MPEP § 2143 (subsection (I.)(A.)) (citing KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 416 (2007)).
In this case, consistent with the guidance in MPEP § 2143 (subsection (I.)(A.)), there is at least a preponderance of evidence to establish the following findings of fact:
(1) The prior art included each element claimed, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference. The evidence for this finding includes the mapping, provided above, of each element of claim 11 (including the elements it incorporates by reference from parent claims 1 and 8) to a corresponding portion of Jung’s disclosure, and likewise, the additional mappings provided above of each element of claims 1, 8, and 11 to corresponding portions of Yoon’s disclosure.
(2) One of ordinary skill in the art could have combined the elements as claimed by known methods, and in combination, each element merely performs the same function as it does separately. Evidence of the known method for combining the elements includes Yoon’s disclosure of a determination unit 120 that recognizes different types of touch inputs, and reconciles them with chat objects on the screen, together with a processing unit 130 that utilizes those determinations reported by the determination unit 120 to decide which participants in a group chat are ultimately selected by the touch gesture. In other words, since Yoon directs the skilled artisan to program determination unit 120 and processing unit 130 to recognize the gestures shown in FIGS. 4 and 6, and since the only difference between that and the claimed slide operation is the particular set of inputs those units are programmed to recognize, the skilled artisan could have programmed the same units to recognize the combined gesture by the same known methods.
The evidence that each element performs the same function in combination as it does separately lies in the mapping of each claim element to the respective portions of the references:
The function of Jung’s touch event starting with the user’s own message is to instruct the software to add the author of that message (i.e., the person drawing the gesture) to the new group chat that will be formed. Likewise, in the combined gesture, the function of Jung’s touch event starting with the user’s own message is that it selects that user as one of the participants to the new group chat.
The function of Jung’s touch event passing through the first other user’s message is to instruct the software to add the first other user to the new group chat that will be formed. Likewise, in the combined gesture, the function of Jung’s touch event passing through the first other user’s message is to instruct the software to add the first other user to the new group chat that will be formed.
The function of Yoon’s touch event passing through a specific participant's speech bubble from among speech bubbles displayed in a first area 410 is to instruct the software to add the specific participant to the new group chat that will be formed. Likewise, in the combined gesture, the function of Yoon’s touch event passing through a specific participant's speech bubble from among speech bubbles displayed in a first area 410 is to instruct the software to add the specific participant to the new group chat that will be formed.
The function of Yoon’s touch event ending at the second area 440 is that it concludes the operation of selecting a user to be added