Prosecution Insights
Last updated: April 19, 2026
Application No. 18/970,116

SYSTEM AND METHOD OF PROVIDING TO-DO LIST OF USER

Non-Final OA: §101, §102, §103
Filed: Dec 05, 2024
Examiner: MANSFIELD, THOMAS L
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 5m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 50% (294 granted / 584 resolved; -1.7% vs TC avg)
Interview Lift: +34.0% higher allow rate on resolved cases with an interview
Typical Timeline: 4y 5m average prosecution
Career History: 629 total applications across all art units; 45 currently pending
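The headline figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of how an allow rate and interview lift could be computed, assuming a flat list of case records (the `ResolvedCase` type and the toy data are invented for illustration, not the vendor's actual schema):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # did the application issue as a patent?
    had_interview: bool  # was an examiner interview conducted?

def allow_rate(cases):
    """Share of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate difference between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data: 4 resolved cases, 3 granted
cases = [
    ResolvedCase(granted=True, had_interview=True),
    ResolvedCase(granted=True, had_interview=True),
    ResolvedCase(granted=True, had_interview=False),
    ResolvedCase(granted=False, had_interview=False),
]
print(allow_rate(cases))      # 0.75
print(interview_lift(cases))  # 0.5 (100% with interview vs 50% without)
```

On the real data this would run over all 584 resolved cases rather than four toy records.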

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 24.1% (-15.9% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 584 resolved cases
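Notably, every per-statute delta shown above is consistent with a flat Tech Center baseline of 40% per statute. A quick sketch of the delta computation (the uniform 40% baseline is an inference from the displayed numbers, not a published figure):

```python
# Assumed flat TC baseline of 40% per statute (inferred, not published)
tc_avg = {"101": 0.40, "102": 0.40, "103": 0.40, "112": 0.40}
# Examiner's statute-specific rates as shown in the dashboard
examiner = {"101": 0.379, "103": 0.241, "102": 0.206, "112": 0.132}

for statute, rate in examiner.items():
    delta = rate - tc_avg[statute]
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```

Running this reproduces the four deltas in the table (-2.1%, -15.9%, -19.4%, -26.8%), which suggests the "TC avg" is a single uniform estimate rather than a per-statute figure.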

Office Action

Rejection grounds: §101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of Claims

This First Office action is in reply to the application filed on 05 December 2024. Claims 1-18 are currently pending and have been examined. The Information Disclosure Statements filed 05 December 2024 and 24 March 2025 have been considered by the Examiner. Signed copies are enclosed with this Office Action.

Priority

This application claims priority from application 14/940,676, filed 13 Nov. 2014, now U.S. Patent No. 10,657,501, claiming foreign priority from Korean patent application number 10-2014-0167811, filed 27 Nov. 2014, as required by 37 CFR 1.55.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. §101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, natural phenomenon, or an abstract idea) without significantly more. The claims as a whole recite a certain grouping of an abstract idea and are analyzed in the following step process:

Step 1: Claims 1-18 are each directed to a statutory category of invention, namely the "device; server; method" sets.

Step 2A, Prong One: Claims 1-18 recite limitations that set forth abstract ideas; the claims as a whole are directed to an abstract idea without significantly more.
The claims recite the following steps from representative independent Claim 16: "receiving a telephone conversation of a user from a device of the user; converting the telephone conversation into text; determining a task to be performed by the user of the device by applying the converted telephone conversation to natural language processing model, wherein the task includes a date when the task is to be performed; and transmitting the determined task to the device of the user."

As detailed in MPEP 2106 and commensurate with the two-part subject matter eligibility framework of the Federal court decision in Alice Corp. Pty. Ltd. v. CLS Bank International et al. (Alice), the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG), the October 2019 Update: Subject Matter Eligibility ("October 2019 Update"), and the July 2024 Guidance Update on Patent Subject Matter Eligibility Examples, Including on Artificial Intelligence, the 2019 PEG explains that the abstract idea exception includes the following groupings of subject matter. The 35 U.S.C. 101 Step 2A, Prong One analysis focuses on whether a claim recites a judicial exception by evaluating whether it falls into one of three specific groupings: mathematical concepts, mental processes, or certain methods of organizing human activity. Based on the steps quoted above, the Step 2A, Prong One analysis is as follows:

Mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations): "applying the converted telephone conversation to a natural language processing model." Natural language processing models rely heavily on mathematical algorithms (e.g., machine learning, probability).

Certain methods of organizing human activity (managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions): "determining a task to be performed by the user... wherein the task includes a date." This category includes task scheduling/management.

Mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion): "converting the telephone conversation into text" and "determining a task to be performed" are processes that can be performed in the human mind (or with pen and paper) and are often considered abstract, particularly where they do not provide a specific technical improvement to the computer itself. These steps map to conceptual, human-like mental processes (evaluating/judging a conversation to determine a task) that could be performed by a person rather than a computer. See MPEP § 2106.04(a) III C.

Hence, the claims are ineligible under Step 2A, Prong One. Furthermore, the dependent claims are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception.

Prong Two: Claims 1-18: With regard to this step of the analysis (as explained in MPEP § 2106.04(d)), the judicial exception is not integrated into a practical application. Independent Claims 1-18 recite additional elements directed to "device; communication circuit; memory storing one or more computer programs; one or more processors; server" (e.g., see Applicants' published Specification ¶'s 64-84). Therefore, the claims contain computer components that are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea. Simply implementing an abstract idea on a computer is not a practical application of the abstract idea. Furthermore, the dependent claims are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception.
The limitations of the claims do not transform the abstract idea that they recite into patent-eligible subject matter because the claims simply instruct the practitioner to implement the abstract idea using generally-recited computer components, and furthermore do not amount to an improvement to a computer or any other technology, and thus are ineligible. See MPEP § 2106.05(f), (h).

Step 2B: As explained in MPEP § 2106.05, Claims 1-18 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea, nor do they integrate the judicial exception into a practical application. The additional elements of "device; communication circuit; memory storing one or more computer programs; one or more processors; server," etc. are generically-recited computer-related elements that amount to a mere instruction to "apply it" (the abstract idea) on the computer-related elements (see MPEP § 2106.05(f), Mere Instructions to Apply an Exception). These additional elements are recited at a high level of generality and merely limit the field of use of the judicial exception (see MPEP § 2106.05(h), Field of Use and Technological Environment). There is no indication that the combination of elements improves the function of a computer or improves any other technology. Furthermore, the dependent claims are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception.

The Examiner interprets that the steps of the claimed invention, both individually and as an ordered combination, result in mere instructions to apply a judicial exception (see MPEP § 2106.05(f)). The claims recite only the idea of a solution or outcome, with no restriction on how the result is accomplished and no description of the mechanism used for accomplishing the result. Here, the claims utilize a computer or other machinery (e.g., see Applicants' published Specification ¶'s 64-84 regarding the use of existing computer processors, as well as program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon), using "device 1000" in its ordinary capacity for performing tasks (e.g., to receive, analyze, transmit and display data) and/or using computer components after the fact to implement an abstract idea (e.g., a fundamental economic practice and certain methods of organizing human activity), and do not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016). Software implementations are accomplished with standard programming techniques, with logic to perform connection steps, processing steps, comparison steps and decision steps. These claims are directed to a commonplace business method being applied on a general-purpose computer (see Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2357, 110 USPQ2d 1976, 1983 (2014); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015)) and require the use of software, such as via a server, to tailor information and provide it to the user on a generic computer.

Based on all of the above, the Examiner finds that, whether viewed individually or in combination, these additional claim elements do not provide meaningful limitations that rise to the high standard of eligibility required to transform the abstract idea(s) into a patent-eligible application such that the claims amount to significantly more than the abstract idea(s) itself. Accordingly, Claims 1-18 are rejected under 35 U.S.C. §101 because the claimed invention is directed to a judicial exception (i.e., the abstract idea exception) without significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Isherwood et al. (Isherwood) (US 2015/0006222).
With regard to Claims 1, 7, 8, 10, 16, 17, Isherwood teaches a device/method/server for providing a task (A mobile device is often the primary computing device used by individuals to manage their tasks), the device comprising: a display; a communication circuit; memory storing one or more computer programs; and one or more processors communicatively coupled to the display, the communication circuit, and the memory, wherein the one or more computer programs include instructions which, when executed by the one or more processors individually or collectively (System 510) (see at least paragraphs 17-13, 35-41) cause the device to:

record/receive a telephone conversation while a user of the device is on a call (FIG. 1 illustrates a block diagram of a context based task scheduler 100 according to an embodiment. The key phrase interface 104 may be utilized by a user to add key phrases to the key phrase database 114. A phrase may include a set of one or more words in a particular order. The key phrase database 114 may include phrases to be identified from a phone conversation. The phone call state listener (PCSL) 106 may detect state changes to a phone call such as "idle," "call in progress," "ringing," etc. The voice capturer 108 may record phone conversations or portions of phone conversations. The recorded portions of phone conversations may be converted to text corresponding to the spoken words from the recorded portions and the text analyzer 110 may analyze this text to determine the attributes of a task to be scheduled),

transmit, through the communication circuit, the telephone conversation (the conversation (or user's portion of the conversation) may be temporarily recorded. That is, data files resulting from the recording may be discarded after the respective task(s) are scheduled. The recorded audio may be transmitted by the voice capturer 208 to a voice-to-text converter 220. In an embodiment, the voice capturer 208 may directly stream the voice audio to the voice-to-text converter 220 in real-time) to a server (The existing internal systems 530 may include a server and may provide business data, geographical data, calendar data, and/or other data necessary for the scheduling of tasks. The external systems 550 may include a server and may be maintained by a third party, such as an information service provider, and may include business data, geographical data, calendar data, and/or other data necessary for the scheduling of tasks, that may be updated by the third party on a periodic basis. The system 510 may interact with these external systems to obtain updates through a firewall system 540 separating the internal systems from the external systems) (see at least paragraphs 7-15, 24, 35),

convert the telephone conversation into text (The phone call state listener 206 may be initialized to detect changes to the user's phone call state. If the phone call state listener 206 detects that a phone call is in progress (either made or received by the user), the phone call state listener 206 may send a signal to trigger recording of the phone call by the voice capturer 208. In response, the voice capturer 208 may record the phone conversation between the user and the client. In an embodiment, only the user's side of the phone conversation may be recorded by, for example, recording the voice transmitted through the microphone or in-line device of the user's phone. In an embodiment, the conversation (or user's portion of the conversation) may be temporarily recorded. That is, data files resulting from the recording may be discarded after the respective task(s) are scheduled. The recorded audio may be transmitted by the voice capturer 208 to a voice-to-text converter 220. In an embodiment, the voice capturer 208 may directly stream the voice audio to the voice-to-text converter 220 in real-time. In an embodiment, the voice capturer 208 may pre-process the recorded audio to remove any distortions in the audio by, for example, filtering the audio. The voice to text conversion may be performed locally (on phone device), or it may be performed remotely (external to the phone device, e.g., using a cloud voice-to-text processing service)) (see at least paragraphs 24-26),

determine a task/subtask to be performed by the user of the device (prior to assigning the task attribute values, phrases matching phrases from a word list may be removed from the text results. In an embodiment, scheduling rules may be retrieved from a database. Available time slots in the calendar may be determined. Each scheduling rule from the scheduling rules may be applied to each available time slot from the available time slots. A respective rating of each available time slot may be determined based on the application of each scheduling rule. The task may be scheduled in an available time slot with the highest respective rating. In an embodiment, the available time slots may be determined based on the one or more task attribute values. In an embodiment, only portions of the phone conversation spoken by the phone user may be captured. In an embodiment, in response to scheduling the task, a notification with details associated with the task may be sent to one or more participants of the phone conversation) by applying the converted telephone conversation to natural language processing model (The recorded portions of phone conversations may be converted to text by a voice to text converter (not shown). The text analyzer 110 may analyze the converted text to determine the attributes of tasks to be scheduled. When analyzing the converted text, the text analyzer 110 may ignore any words or phrases from the converted text found in the word list 116. In an embodiment, the text analyzer 110 may, during a first pass, remove any words or phrases from the converted text matching the words or phrases in word list 116. The text analyzer may then determine task attribute values on a second pass and generate tasks from the task attribute values. The task scheduler 112 may then schedule the generated tasks in the user's calendar using the task attribute values determined by the text analyzer 110 and the scheduling rules in database 118) (see at least paragraphs 8-13, 24),

receive, through the communication circuit, a task to be performed by the user of the device from the server (If a set of results is returned by the voice-to-text converter 220, each result may be analyzed by the text analyzer 210 to determine task attribute values and assign those values to respective task attributes. FIG. 3 illustrates a process diagram 300 to assign task attribute values to respective task attributes according to an embodiment. At step 302, a set of results transmitted from the voice-to-text converter 220 may be received. For each result, words or phrases from the word list 216 found in the result may be removed 306. The result may be parsed to find phrases matching key phrases defined in the key phrase database 214. If one or more key phrases are found (308), for each key phrase, the corresponding mapped task attribute(s) may be retrieved from the key phrase database 214 (310). Then, at step 312, the text following the key phrase in the result may be assigned as value(s) for the retrieved task attribute(s); the creation of the task 211 may trigger the operation of the task scheduler 212. The task scheduler 212 may utilize the task 211, the user's calendar 230, information relating to the user's context, and/or the rules in the scheduling rules database 218 to schedule the task 211. The task scheduler 212 may determine an optimal time and date to schedule task 211. FIG. 4 illustrates a process 400 to determine an optimal time and date to schedule a task according to an embodiment. At step 402, scheduling rules may be retrieved from the scheduling rules database 218. The method 400 may determine available time slots in the user's calendar 230 (404). The available time slots may be derived from unscheduled times/dates in the user's calendar, the time required to perform the currently analyzed task as determined by, for example, the task's attributes, and/or proposed times for the task as determined by, for example, the task's attributes) (see at least paragraphs 26, 30),

wherein the task includes a date (the creation of the task 211 may trigger the operation of the task scheduler 212. The task scheduler 212 may utilize the task 211, the user's calendar 230, information relating to the user's context, and/or the rules in the scheduling rules database 218 to schedule the task 211. The task scheduler 212 may determine an optimal time and date to schedule task 211. FIG. 4 illustrates a process 400 to determine an optimal time and date to schedule a task according to an embodiment. At step 402, scheduling rules may be retrieved from the scheduling rules database 218. The method 400 may determine available time slots in the user's calendar 230 (404). The available time slots may be derived from unscheduled times/dates in the user's calendar, the time required to perform the currently analyzed task as determined by, for example, the task's attributes, and/or proposed times for the task as determined by, for example, the task's attributes) when the task is to be performed, the task and the date is inferenced from the telephone conversation (when the phone conversation ends, a phone state change to, for example, "idle," may be detected by the phone call state listener 206 and in response, the voice capturer may be stopped. If all the values for the task attributes have been obtained, a task 211 may be generated. This task may include all the task attribute values and may be utilized as input by the task scheduler 212 to create a corresponding task in the user's calendar 230) (see at least paragraphs 12, 28-31),

display, through the display (display device 515), a user interface for verifying the task, the user interface including the task and the date when the task is to be performed (The voice capturer 108 may record phone conversations or portions of phone conversations. The recorded portions of phone conversations may be converted to text corresponding to the spoken words from the recorded portions and the text analyzer 110 may analyze this text to determine the attributes of a task to be scheduled. When analyzing the converted text, the text analyzer 110 may ignore any words or phrases from the converted text found in the word list 116. The task scheduler 112 may schedule a task in the user's calendar using the task attributes determined by the text analyzer 110 and the scheduling rules in database 118. The user may specify the rules in scheduling rules database 118 via a scheduling rules interface 102) (see at least paragraphs 10-13, 35),

and based on a user input regarding the user interface, determine the task as a to-do item (The user, for example, a service provider such as a plumber, may wish to capture a number of attributes that relate to the tasks (jobs) that he/she performs. For example, a plumber may wish to capture attributes such as the client's address, the type of work to be completed, the estimated cost, and the estimated job duration. The user may specify the task attributes that the user wishes to capture via a configuration file 201 (task template) and/or the key phrase interface 204. For every task attribute specified, associated key phrases may be provided via the key phrase interface 204. The specified associated key phrases may be stored in the key phrase database 214. The user may use the associated key phrases in a phone conversation with the client to capture task attribute values as previously described. Any suitable method to provide these key phrases may be used. For example, the user may provide the key phrases as text input or voice input. A suitable user interface such as a computer input device and/or microphone to capture the voice input may be utilized for this purpose) (see at least paragraphs 21-25);

transmit the determined task to the device of the user (the converted text results may be transmitted by the voice-to-text converter 220 to the text analyzer 210. In an embodiment, the voice-to-text converter 220 may recognize a spoken word (or phrase) as one or more possible text results based on, for example, similar sounding words. Therefore, the voice-to-text converter 220 may provide more than one possible version of converted text. In an embodiment, multiple versions of a text may be ranked in terms of perceived accuracy (e.g., the first version may be considered by the voice-to-text converter 220 as more accurate than the second, and so on); Once all scheduling rules are processed, the time slot with the highest rating may be determined and the task may be scheduled in the highest rated time slot 412) (see at least paragraphs 24-30);

With regard to Claims 2, 11, Macbeth teaches: receive a sub task as a detailed operation for performing the task from the server, wherein the sub task is inferenced based on the telephone conversation and the task, and display the sub task as the detailed operation for performing the task (see at least paragraphs 13, 14).

With regard to Claims 3, 12, Macbeth teaches: display an application for performing the sub task, and based on a user input regarding the application, perform the sub task by executing the application (see at least paragraphs 13, 14, 35).
With regard to Claims 4, 9, 13, 18, Isherwood teaches wherein based on a context of the telephone conversation, an authentic level of the task is determined, and wherein when the authentic level of the task is equal to or higher than a pre-set threshold value (if a predetermined threshold rating is reached for a time slot while iterating through the scheduling rules, the method 400 may stop executing steps 406-408 and immediately schedule the task in the time slot with the rating at (or above) the predetermined threshold) (see at least paragraphs 31-35).

With regard to Claims 5, 14, Macbeth teaches: receive a user input of revising the task on the user interface for verifying the task, and determine the revised task as the to-do item (see at least paragraphs 24, 29).

With regard to Claims 6, 15, Macbeth teaches: in response to receiving a call signal, display summarized information on a recent telephone conversation between the user and a caller on a user interface for an incoming call (see at least paragraphs 11, 24).

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:
Robarts et al. (US 6,842,877)
Abbott et al. (US 7,107,539)
Gummagatta et al. (KR-20120094449-A)
Macbeth et al. (US 2007/0300225)
Karlson, Amy K., et al. "Mobile taskflow in context: a screenshot study of smartphone usage." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2010.
Siewiorek, Daniel P., et al. "SenSay: A Context-Aware Mobile Phone." ISWC. Vol. 3. 2003.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS L MANSFIELD, whose telephone number is (571) 270-1904. The examiner can normally be reached M-Thurs, alt. Fri. (9-6). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patricia Munson, can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

THOMAS L. MANSFIELD
Examiner, Art Unit 3624
/THOMAS L MANSFIELD/Primary Examiner, Art Unit 3624
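Stripped of the legal framing, the Isherwood workflow quoted in the rejection (remove word-list noise from the transcript, match key phrases, assign the trailing text to task attributes, then rate candidate calendar slots against scheduling rules and pick the highest-rated) can be sketched roughly as follows. This is a hypothetical Python illustration; the function names, sample phrases, and rule weights are invented for the example and are not taken from the reference:

```python
def extract_task_attributes(text, word_list, key_phrases):
    """key_phrases maps a spoken phrase to the task attribute it fills."""
    for noise in word_list:                   # first pass: drop word-list noise
        text = text.replace(noise, "")
    attributes = {}
    for phrase, attr in key_phrases.items():  # second pass: find key phrases
        idx = text.find(phrase)
        if idx != -1:
            tail = text[idx + len(phrase):].strip()
            # Value = text following the key phrase (up to the next clause)
            attributes[attr] = tail.split(",")[0].strip()
    return attributes

def best_slot(slots, rules):
    """Apply every scheduling rule to every slot; return the highest-rated."""
    ratings = {slot: sum(rule(slot) for rule in rules) for slot in slots}
    return max(ratings, key=ratings.get)

attrs = extract_task_attributes(
    "um okay the job address is 12 Elm St, and the job type is leak repair",
    word_list=["um okay "],
    key_phrases={"the job address is": "address", "the job type is": "job_type"},
)
print(attrs)  # {'address': '12 Elm St', 'job_type': 'leak repair'}

slots = ["Mon 9am", "Tue 2pm"]
rules = [
    lambda slot: 2 if "2pm" in slot else 0,  # invented rule: prefer afternoons
    lambda slot: 1 if "Mon" in slot else 0,  # invented rule: prefer Mondays
]
print(best_slot(slots, rules))  # Tue 2pm
```

The contrast the applicant will likely draw is that the claims apply a natural language processing model to the whole conversation, whereas this Isherwood-style flow depends on pre-registered key phrases.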

Prosecution Timeline

Dec 05, 2024
Application Filed
Feb 19, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12412135
PEAK CONSUMPTION MANAGEMENT FOR RESOURCE DISTRIBUTION SYSTEM
Granted Sep 09, 2025 (2y 5m to grant)
Patent 12299643
ACCEPTANCE-BASED MEETING INSIGHTS AND ACTION RECOMMENDATIONS
Granted May 13, 2025 (2y 5m to grant)
Patent 12299702
SYSTEMS AND METHODS FOR COMPUTER ANALYTICS OF ASSOCIATIONS BETWEEN ONLINE AND OFFLINE PURCHASE EVENTS
Granted May 13, 2025 (2y 5m to grant)
Patent 12301683
SYSTEMS AND METHODS FOR UPDATING RECORD OBJECTS OF A SYSTEM OF RECORD
Granted May 13, 2025 (2y 5m to grant)
Patent 12226901
Smart Change Evaluator for Robotics Automation
Granted Feb 18, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 50% (84% with interview, +34.0% lift)
Median Time to Grant: 4y 5m
PTA Risk: Low
Based on 584 resolved cases by this examiner. Grant probability derived from career allow rate.
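The projection arithmetic appears to be a simple additive model: the 84% interview figure is the 50% career allow rate plus the +34.0% interview lift, clamped to a valid probability. A minimal sketch under that assumption (the additive model is inferred from the displayed numbers, not a documented methodology):

```python
def projected_grant_probability(base_rate, interview_lift=0.0):
    """Naive additive projection, clamped to [0, 1].
    Rounding avoids floating-point display artifacts."""
    return round(min(1.0, max(0.0, base_rate + interview_lift)), 4)

print(projected_grant_probability(0.50))        # 0.5  — base projection
print(projected_grant_probability(0.50, 0.34))  # 0.84 — matches the 84% shown
```

Note that an additive lift can exceed 100% for high-allowance examiners, which is why the clamp matters; a multiplicative or logistic adjustment would be a more principled model.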
