DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In light of the amendments, the claims are rejected under 35 U.S.C. 101.
In light of the amendments, the claims are rejected under 35 U.S.C. 103.
Notice to Applicant
In the amendment dated 10/24/2025, the following has occurred: claims 1-10 and 12-20 have been amended; claim 11 remains unchanged; and no new claims have been added.
Claims 1-20 are pending.
Effective Filing Date: 09/29/2021
Response to Arguments
35 U.S.C. 101 Rejections:
Applicant argues that the claims do not recite an abstract idea and that the claims recite an electronic device presenting a visual object to a user for guiding an activity. Applicant contends that the device's identification of data and presentation of an object constitutes a technical improvement to a technology. Examiner, however, respectfully disagrees. The claims recite displaying information using a computing device in an "apply it" manner.
Applicant also argues that displaying this information in response to an input for activating the display is part of this improvement, highlighting a purported benefit of ensuring that the visual object is presented with the most efficiency. Examiner, however, respectfully disagrees. First, the specification does not discuss this as being part of a technical solution to a technical problem. Furthermore, the claims reflect that the processing of information occurs upon an input for activation of the display (which could refer to pressing a button on a display, turning the display on from an off state, etc.). One could argue that it is less efficient to process when the display is on rather than doing so while the display is off. Additionally, if the object is presented essentially in real time when the display is turned on, what is the benefit when compared to a device in which the object is already displayed while the display is off? The decision of when to process and display the data appears to be a design choice; even if that choice is different and amounts to an improvement, it is an improvement to the abstract idea of displaying information itself (see MPEP 2106.05(a) Part II, gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48).
Applicant also argues that the claimed invention as a whole integrates the abstract idea into a practical application. Examiner, however, respectfully disagrees, as the claim limitations do not recite additional elements that integrate the abstract idea into a practical application.
35 U.S.C. 103 Rejections:
Applicant argues, with respect to the newly amended claim language, that the previous references do not teach it. Examiner, however, respectfully disagrees and directs Applicant to the updated 103 rejection section, which relies on the Cho et al. reference to teach the newly added limitations. The limitation "an input for activating the display" is broad and may not be limited to what Applicant might intend its meaning to be.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-13 are drawn to a device, claims 14-18 are drawn to a method, and claims 19-20 are drawn to a medium, each of which is within the four statutory categories (Step 1: YES). Claims 1-20 are further directed to an abstract idea on the grounds set out in detail below. As discussed below, the claims do not include additional elements that are sufficient to amount to significantly more than the abstract idea because the additional computer elements, which are recited at a high level of generality, provide conventional computer functions that do not add meaningful limits to practicing the abstract idea.
Step 2A:
Prong One:
Claim 1 recites a) an electronic device comprising:
a1) a display,
a2) at least one sensor,
a3) memory storing one or more computer programs, and
a4) one or more processors comprising processing circuitry, wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the electronic device to:
1) in response to an input for activating the a1) display, identify first data obtained in a first time interval, wherein the first time interval is set based on a time when the a1) display is activated,
2) based on the first data, identify a first activity recommended to a user of the a) electronic device,
3) display a first visual object for guiding the first activity in a screen displayed in response to the input for activating the a1) display,
4) after a second time interval from a time when the first visual object is displayed, remove, from the screen, the first visual object,
5) identify that second data, obtained in a third time interval after the first time interval, is distinct from the first data, and
6) based on identifying that the second data is distinct from the first data, display a second visual object for guiding a second activity distinct from the first activity in a screen displayed in response to another input for activating the a1) display,
wherein the first visual object is usable for executing b) a first application related to the first activity, and
wherein the second visual object is usable for executing c) a second application related to the second activity, distinct from the b) first application.
Claim 1 recites, in part, performing the steps of 1) in response to an input for activating, identify first data obtained in a first time interval, wherein the first time interval is set based on a time when something is activated, 2) based on the first data, identify a first activity recommended to a user, 3) display a first visual object for guiding the first activity in a screen displayed in response to the input for activating, 4) after a second time interval from a time when the first visual object is displayed, remove, from the screen, the first visual object, 5) identify that second data, obtained in a third time interval after the first time interval, is distinct from the first data, and 6) based on identifying that the second data is distinct from the first data, display a second visual object for guiding a second activity distinct from the first activity in a screen displayed in response to another input for activating, wherein the first visual object is usable for executing the first activity, and wherein the second visual object is usable for executing the second activity, distinct from the first. These steps correspond to Certain Methods of Organizing Human Activity, more particularly, managing personal behavior or relationships or interactions between people (including following rules or instructions). For example, the claim describes guidance of activities for an individual to perform, which is something humans can do. Independent claims 14 and 19 recite similar limitations and are likewise directed to an abstract idea under the same analysis.
Dependent claims 2-13, 15-18, and 20 include all of the limitations of claims 1, 14, and 19, and therefore likewise incorporate the above-described abstract idea. Dependent claims 2, 15, and 20 add the additional step of "after the first time interval elapses, receive, using the communication circuitry, based on the first data, third data related to a body condition of the user obtained from an external electronic device connected with the electronic device"; claims 3 and 16 add the additional step of "after the first time interval elapses, based on the first data, identify, according to a first response received from the user that is associated with a first inquiry for a first body condition of the user, third data related to the first body condition of the user"; claims 4 and 17 add the additional step of "after the third time interval elapses, based on the second data, identify, according to a second response received from the user that is associated with a second inquiry for a second body condition of the user, fourth data related to the second body condition of the user"; claims 5 and 18 add the additional steps of "identify that first body condition information of the user obtained based on the second data and the fourth data is distinct from second body condition information obtained based on the first data and the third data" and "in response to identifying that the first body condition information is distinct from the second body condition information, display, based on the second data, the second visual object for guiding the second activity"; claim 7 adds the additional steps of "identify at least one of data related to a movement of the user, data related to walking of the user, or data related to a first sleep state of the user as the physical data of the user" and "identify at least one of data related to a stress level of the user or data related to a second sleep state as the psychological data of the user"; claim 8 adds the additional steps of "identify, based on the physical data of the user and the psychological data of the user, one of a first type, a second type, a third type, or a fourth type" and "identify one of a plurality of activities related to the identified one of the first type, the second type, the third type, or the fourth type as the first activity"; claim 9 adds the additional steps of "identify an input on the first visual object" and "in response to the input on the first visual object, transmit a signal for executing the first application to an external electronic device connected with the electronic device, and wherein the first application is executed in the external electronic device based on the signal"; claim 10 adds the additional steps of "identify a designated user input on the screen displayed in response to the input for activating the display" and "in response to the designated user input, display another screen comprising a plurality of executable objects for executing a plurality of applications respectively, switched from the screen, wherein a first executable object for executing the first application among the plurality of executable objects is highlighted relative to remaining executable objects among the plurality of executable objects"; claim 12 adds the additional step of "in response to the input for activating the display, after identifying the first activity based on the first data, display, before displaying the screen, another screen for indicating that the first visual object has been added"; and claim 13 adds the additional steps of "identify a user input on the first visual object which is usable for executing the first application", "identify, based on the user input on the first visual object, a call record stored in the memory", and "display, based on the call record, a notification for suggesting a call destination".
Additionally, the limitations of dependent claims 6 and 11 further specify elements of the claims from which they depend without adding any additional steps. These additional limitations only further serve to limit the abstract idea. Thus, dependent claims 2-13, 15-18, and 20 are nonetheless directed to fundamentally the same abstract idea as independent claims 1, 14, and 19 (Step 2A (Prong One): YES).
Prong Two:
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of using a) an electronic device comprising: a1) a display, a2) at least one sensor, a3) memory storing one or more computer programs, and a4) one or more processors communicatively coupled to the display, the at least one sensor and the memory, wherein the one or more computer programs include computer-executable instructions, b) a first application, c) a second application, and d) a communication circuit (from claim 2) to perform the claimed steps.
The a) electronic device comprising: a1) a display, a3) memory storing one or more computer programs, and a4) one or more processors communicatively coupled to the display, the at least one sensor and the memory, wherein the one or more computer programs include computer-executable instructions, and d) communication circuit in these steps are recited at a high level of generality (i.e., as generic components performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using generic computer components (see: Applicant's specification, paragraphs [0042] and [0045], where generic computer components are listed; see MPEP 2106.05(f)).
The a2) at least one sensor in these steps adds insignificant extra-solution activity to the abstract idea which amounts to mere data gathering, see MPEP 2106.05(g).
Finally, the b) first application and c) second application in these steps generally link the abstract idea to a particular technological environment or field of use (such as computing, see MPEP 2106.05(h)).
The dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit that would integrate the abstract idea into a practical application.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea (Step 2A (Prong Two): NO).
Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a) an electronic device comprising: a1) a display, a2) at least one sensor, a3) memory storing one or more computer programs, and a4) one or more processors communicatively coupled to the display, the at least one sensor and the memory, wherein the one or more computer programs include computer-executable instructions, b) a first application, c) a second application, and d) a communication circuit to perform the claimed steps amount to no more than insignificant extra-solution activity in the form of WURC activity (well-understood, routine, and conventional activity), a general linking to a particular technological field, and mere instructions to apply the exception using generic computer components. These additional elements do not offer "significantly more" than the abstract idea itself because the claims do not recite an improvement to another technology or technical field, an improvement to the functioning of any computer itself, or meaningful limitations beyond generally linking an abstract idea to a particular technological environment. It should be noted that the claims do not include additional elements that amount to significantly more than the judicial exception because the Specification recites mere generic computer components, as discussed above, that are being used to apply certain methods of organizing human activity. Specifically, MPEP 2106.05(d), MPEP 2106.05(f), and MPEP 2106.05(h) recite that the following limitations are not significantly more:
Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));
Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)); and
Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010) or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook, 437 U.S. 584, 588-90, 198 USPQ 193, 197-98 (1978) (MPEP § 2106.05(h)).
The current invention generates and displays data on a display utilizing a) an electronic device comprising: a1) a display, a3) memory storing one or more computer programs, and a4) one or more processors communicatively coupled to the display, the at least one sensor and the memory, wherein the one or more computer programs include computer-executable instructions, and d) a communication circuit; thus, these computing devices amount to adding the words "apply it" with mere instructions to implement the abstract idea on a computer.
Additionally, the a2) at least one sensor in these steps adds insignificant extra-solution activity/pre-solution activity in the form of WURC activity to the abstract idea. The following is an example of a court decision demonstrating computer functions as well-understood, routine, and conventional activities (see MPEP 2106.05(d)(II)): receiving or transmitting data over a network, e.g., Intellectual Ventures v. Symantec. Similarly, the current invention receives sensor data and transmits the data to a device over a network, for example the Internet.
Finally, the b) first application and c) second application generally link the abstract idea to a particular technological environment or field of use. The following is an example that courts have identified as generally linking the abstract idea to a particular technological environment (see MPEP 2106.05(h)): limiting the abstract idea to computer applications, because limiting application of the abstract idea to computer applications is simply an attempt to limit the use of the abstract idea to a particular technological environment, e.g., Electric Power Group, LLC v. Alstom S.A.
Mere instructions to apply an exception using generic computer components, a general linking to a particular technological field, or insignificant extra-solution activity in the form of WURC activity cannot provide an inventive concept. The claims are not patent eligible (Step 2B: NO).
Claims 1-20 are therefore rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-11 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 2017/0046503 to Cho et al. in view of U.S. 2017/0011210 to Cheong et al.
As per claim 1, Cho et al. teaches an electronic device (see: FIG. 2 where there is an electronic device 201) comprising:
--a display; (see: FIG. 2 where there is a display 260)
--at least one sensor; (see: FIG. 2 where there is at least one sensor 280)
--memory storing one or more computer programs; (see: FIG. 2 where there is a memory 230) and
--one or more processors comprising processing circuitry, (see: FIG. 2 where there is such a processor 220)
--wherein the one or more computer programs include computer-executable instructions (see: paragraph [0012] where there are such instructions) that, when executed by the one or more processors individually or collectively, cause the electronic device to:
--in response to an input for activating the display, identify first data obtained in a first time interval, wherein the first time interval is set based on a time when the display is activated, (see: paragraph [0197] where in response to input information to start exercise is detected, exercise start information is being transmitted to the second device. A determination is also being made as to how many times to perform an exercise. The determination of the number of times is the setting of the first time interval and when the exercise starts this is the start point of the first time interval. Also see: paragraph [0199] where there is collection of activity information according to an exercise. Thus, in response to input indicating the start of the exercise, first data of activity information is being obtained for the time interval of however long the activity is for)
--based on the first data, identify a first activity recommended to a user of the electronic device, (see: paragraph [0057] where there is identification of a recommended first activity (an updated exercise guide) in view of the collected activity information (first data))
--display a first visual object for guiding the first activity in a screen displayed in response to the input for activating the display, (see: paragraph [0076] where there is a display of the input/output interface which outputs the exercise guide information. The output on a display is the first visual object for guiding the activity as explained in paragraph [0141])
--after a second time interval from a time when the first visual object is displayed, remove, from the screen, the first visual object, (see: paragraphs [0186] and [0203] where there is termination of the exercise and displaying of the result of the exercise up until a time of a forced shutdown)
--identify that second data, obtained in a third time interval after the first time interval, is distinct from the first data, (see: FIG. 17 and paragraph [0173] where there is a determination/identification of a second data (heart rate exceeding a reference value) in a third time period (break period) and it is distinct from the first data (the initial collected activity information)) and
--based on identifying that the second data is distinct from the first data, display a second visual object for guiding a second activity distinct from the first activity in a screen displayed in response to another input for activating the display (see: paragraph [0173] where there is an updating of activity schedule information based on the difference in heart rate data (the difference between the first and second data). The updated activity is one to be performed, and the displaying of this information (which is done in a similar manner to the first object above) is a displaying of a second visual object).
Cho et al. may not further specifically teach:
1) --wherein the first visual object is usable for executing a first application related to the first activity, and
2) --wherein the second visual object is usable for executing a second application related to the second activity, distinct from the first application.
Cheong et al. teaches:
1) --wherein the first visual object is usable for executing a first application related to the first activity, (see: paragraphs [1693] and [2082] where there is a first application which is being displayed related to an activity. A menu item is being displayed, and it is related to the activity) and
2) --wherein the second visual object is usable for executing a second application related to the second activity, distinct from the first application (see: paragraphs [1693] and [2082] where there is a second application which is being displayed related to an activity. The applications here are distinct from one another as the menu items are distinct from each other).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute 1) wherein the first visual object is usable for executing a first application related to the first activity and 2) wherein the second visual object is usable for executing a second application related to the second activity, distinct from the first application, as taught by Cheong et al., for the visual objects as disclosed by Cho et al., since each individual element and its function are shown in the prior art, with the difference being the substitution of the elements. In the present case, Cho et al. already teaches displaying objects; thus, one could swap the object for an application and obtain the predictable result of displaying objects in response to detecting activities. Thus, one of ordinary skill in the art could have substituted the one known element for the other to produce a predictable result (MPEP 2143).
As per claim 2, Cho et al. and Cheong et al. in combination teaches the device of claim 1, see discussion of claim 1. Cho et al. further teaches:
--communication circuitry (see: paragraph [0151] where there is a communication interface).
Cheong et al. further teaches:
--wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, further cause the electronic device to:
--after the first time interval elapses, receive, using the communication circuitry, based on the first data, third data related to a body condition of the user obtained from an external electronic device connected with the electronic device (see: paragraph [0655] where the latest body condition of the user will be continuously updated (received) on the electronic device. This can be after a first time interval of a determination of the activity of the patient).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 1, and incorporated herein.
As per claim 3, Cho et al. and Cheong et al. in combination teaches the device of claim 1, see discussion of claim 1. Cheong et al. further teaches wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, further cause the electronic device to:
--after the first time interval elapses, based on the first data, identify, according to a first response received from the user that is associated with a first inquiry for a first body condition of the user, third data related to the first body condition of the user (see: paragraph [0655] where the latest body condition of the user will be continuously updated (identified) on the electronic device. This can be after a first time interval of a determination of the activity of the patient).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 1, and incorporated herein.
As per claim 4, Cho et al. and Cheong et al. in combination teaches the device of claim 3, see discussion of claim 3. Cheong et al. further teaches wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, further cause the electronic device to:
--after the third time interval elapses, based on the second data, identify, according to a second response received from the user that is associated with a second inquiry for a second body condition of the user, fourth data related to the second body condition of the user (see: paragraph [0655] where the latest body condition of the user will be continuously updated (identified) on the electronic device. This can be after a third time interval of a determination of the activity of the patient. This can repeat over and over as the body condition is determined continuously and the activities are identified continuously).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 1, and incorporated herein.
As per claim 5, Cho et al. and Cheong et al. in combination teaches the device of claim 4, see discussion of claim 4. Cho et al. further teaches wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, further cause the electronic device to:
--identify that first body condition information of the user obtained based on the second data and the fourth data is distinct from second body condition information obtained based on the first data and the third data, (see: paragraph [0655] where the latest body condition of the user will be continuously updated (identified) on the electronic device. This can be after a first time interval of a determination of the activity of the patient. Also see: Table 2 where there is an activity schedule and paragraph [0155] where there is a determination that an event has occurred. Therefore, there is an identification of a distinct event/activity which is different from another one) and
--in response to identifying that the first body condition information is distinct from the second body condition information, display the second visual object for guiding the second activity (see: paragraph [0156] where an input interface is displayed in response to the detection of a different event/condition).
As per claim 6, Cho et al. and Cheong et al. in combination teaches the device of claim 1, see discussion of claim 1. Cho et al. further teaches wherein each of the first data and the second data comprises physical data of the user and psychological data of the user (see: paragraph [0075] where there is both physiological data for a user based on their activity and physical data of the user of the activity they are performing).
As per claim 7, Cho et al. and Cheong et al. in combination teaches the device of claim 6, see discussion of claim 6. Cho et al. further teaches wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, further cause the electronic device to:
--identify at least one of data related to a movement of the user, data related to walking of the user, or data related to a first sleep state of the user as the physical data of the user (see: paragraph [0138] and Table 2 where there is activity schedule information of walking. Also see: paragraph [0155] where there is a determination that an activity from the activity schedule has occurred. There is an identification that walking has occurred here).
Cheong et al. further teaches:
--identify at least one of data related to a stress level of the user or data related to a second sleep state as the psychological data of the user (see: paragraph [0454] where there is stress information which is being gathered by the sensor. Also see: paragraph [0458] where a stress state is being determined. A stress state is being identified using the stress information here).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 1, and incorporated herein.
As per claim 8, Cho et al. and Cheong et al. in combination teaches the device of claim 7, see discussion of claim 7. Cho et al. further teaches wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, further cause the electronic device to:
--identify, based on the physical data of the user and the psychological data of the user, one of a first type, a second type, a third type, or a fourth type, (see: Table 2 and paragraph [0157] where there is a schedule and there are exercise types. Different types of activities are being identified based on the schedule using the physical and physiological data to identify the activity as explained in paragraph [0155]) and
--identify one of a plurality of activities related to the identified one of the first type, the second type, the third type, or the fourth type as the first activity (see: paragraph [0155] where there is identification of the activity related to the types shown in FIG. 2).
As per claim 9, Cho et al. and Cheong et al. in combination teaches the device of claim 1, see discussion of claim 1. Cho et al. further teaches:
--communication circuitry (see: paragraph [0151] where there is a communication interface).
Cheong et al. further teaches:
--wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, further cause the electronic device to:
--identify an input on the first visual object, (see: paragraph [0667] where there is an identification of a user input on a content/first visual object) and
--in response to the input on the first visual object, transmit a signal for executing the first application to an external electronic device connected with the electronic device, (see: paragraph [0667] where there is a transmission of a signal to a communication device in response to user input/selection of content) and
--wherein the first application is executed in the external electronic device based on the signal (see: paragraph [1065] where there is playing of content after selection. Thus, the first application is being executed based on a signal).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 1, and incorporated herein.
As per claim 10, Cho et al. and Cheong et al. in combination teaches the device of claim 1, see discussion of claim 1. Cho et al. further teaches wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, further cause the electronic device to:
--identify a designated user input on the screen displayed in response to the input for activating the display, (see: paragraph [0086] where there is a sleep state and therefore there is an on state. Also see: paragraph [0151] where there is input (selection of a superimposed icon). There is an identification of input while the device is on)
--in response to the designated user input, display another screen comprising a plurality of executable objects for executing a plurality of applications respectively, switched from the screen, (see: FIG. 16C where there is a display of objects in response to input) and
--wherein a first executable object for executing the first application among the plurality of executable objects is highlighted relative to remaining executable objects among the plurality of executable objects (see: FIG. 16C where there is a highlighting of an object relative to another object).
As per claim 11, Cho et al. and Cheong et al. in combination teaches the device of claim 10, see discussion of claim 10. Cho et al. further teaches wherein the first executable object is highlighted relative to the remaining executable objects among the plurality of executable objects by displaying a visual element having a designated color along at least one edge of the first executable object (see: paragraph [0195] where there is coloring when an object is selected/highlighted).
As per claim 14, claim 14 is similar to claim 1 and is therefore rejected in a similar manner.
As per claim 15, claim 15 is similar to claim 2 and is therefore rejected in a similar manner.
As per claim 16, claim 16 is similar to claim 3 and is therefore rejected in a similar manner.
As per claim 17, claim 17 is similar to claim 4 and is therefore rejected in a similar manner.
As per claim 18, claim 18 is similar to claim 5 and is therefore rejected in a similar manner.
As per claim 19, claim 19 is similar to claim 1 and is therefore rejected in a similar manner.
As per claim 20, claim 20 is similar to claim 6 and is therefore rejected in a similar manner.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. 2017/0046503 to Cho et al. in view of U.S. 2017/0011210 to Cheong et al. as applied to claim 1, and further in view of U.S. 2021/0349619 to Crowley et al.
As per claim 12, Cho et al. and Cheong et al. in combination teaches the device of claim 1, see discussion of claim 1. Cho et al. and Cheong et al. in combination may not further specifically teach wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, further cause the electronic device to:
--in response to the input for activating the display, after identifying the first activity based on the first data, display, before displaying the screen, another screen for indicating that the first visual object has been added.
Crowley et al. teaches:
--in response to the input for activating the display, after identifying the first activity based on the first data, display, before displaying the screen, another screen for indicating that the first visual object has been added (see: paragraphs [0221] and [0246] where there is displaying of a notification to indicate new visual objects upon waking the device).
One of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to, in response to the input for activating the display, after identifying the first activity based on the first data, display, before displaying the screen, another screen for indicating that the first visual object has been added, as taught by Crowley et al., in the device as taught by Cho et al. and Cheong et al. in combination, with the motivation(s) of improving the user's experience (see: paragraph [0006] of Crowley et al.).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. 2017/0046503 to Cho et al. in view of U.S. 2017/0011210 to Cheong et al. as applied to claim 1, and further in view of U.S. 2018/0103151 to Erm.
As per claim 13, Cho et al. and Cheong et al. in combination teaches the device of claim 1, see discussion of claim 1. Cho et al. further teaches wherein the first application comprises an application for phone calls, and
--wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, further cause the electronic device to:
--identify a user input on the first visual object which is usable for executing the first application (see: paragraph [0193] where there is an identification of a visual object which is usable by an application).
Cho et al. and Cheong et al. in combination may not further specifically teach:
1) --identify, based on the user input on the first visual object, a call record stored in the memory, and
2) --display, based on the call record, a notification for suggesting a call destination.
Erm teaches:
1) --identify, based on the user input on the first visual object, a call record stored in the memory, (see: paragraphs [0016] and [0088] where there is identification of a call record) and
2) --display, based on the call record, a notification for suggesting a call destination (see: paragraphs [0016] and [0088] where there is display of a suggestion of a call destination of a call back).
One of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to 1) identify, based on the user input on the first visual object, a call record stored in the memory and 2) display, based on the call record, a notification for suggesting a call destination, as taught by Erm, in the device as taught by Cho et al. and Cheong et al. in combination, with the motivation(s) of being a type of application (see: paragraph [0009] of Erm).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Steven G.S. Sanghera whose telephone number is (571)272-6873. The examiner can normally be reached M-F 7:30-5:00 (alternating Fri).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached at 571-270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEVEN G.S. SANGHERA/Primary Examiner, Art Unit 3684