DETAILED ACTION
This final Office action is responsive to Applicant’s amendment filed December 5, 2025. Claims 21, 28, and 34 have been amended. Claims 1-20 are cancelled. Claims 21-40 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed August 14, 2025 have been fully considered but they are not persuasive.
Regarding the rejection under 35 U.S.C. § 101, Applicant asserts that the claim limitations cannot be practically performed in the human mind (page 9 of Applicant’s response). The Examiner points out that many of the claim limitations are directed to extracting text and other contextual information from an image to identify tasks. This is how a human user reads text and image information and makes related decisions, and the use of pen and paper by a human user also falls within the mental process category of judicial exceptions. While the claims integrate additional elements, including a processor, memory, graphical user interface, etc., these additional elements are presented only at a high level and often provide only a general link to technology and/or to a field of use. The claims do not incorporate specific technical details regarding, for example, how image analysis is performed. As claimed, aside from a general recitation of the additional elements, a human user can observe/receive information and present analysis and an arrangement of information (including icons representing certain data) on a piece of paper. The claims present nothing more than a general application of the additional elements to implement the abstract ideas, along with general links to technology and/or a field of use.
Applicant additionally argues that the transformation of “input data” into “representative icons” in the graphical user interface presents a transformation of an article to a different state or thing (pages 9-10 of Applicant’s response). MPEP § 2106.05(c) explains:
An "article" includes a physical object or substance. The physical object or substance must be particular, meaning it can be specifically identified. "Transformation" of an article means that the "article" has changed to a different state or thing. Changing to a different state or thing usually means more than simply using an article or changing the location of an article. A new or different function or use can be evidence that an article has been transformed. Purely mental processes in which thoughts or human based actions are "changed" are not considered an eligible transformation. For data, mere "manipulation of basic mathematical constructs [i.e.,] the paradigmatic ‘abstract idea,’" has not been deemed a transformation. CyberSource v. Retail Decisions, 654 F.3d 1366, 1372 n.2, 99 USPQ2d 1690, 1695 n.2 (Fed. Cir. 2011) (quoting In re Warmerdam, 33 F.3d 1354, 1355, 1360 (Fed. Cir. 1994)).
Simply conveying data in a different format on a display does not present a clear transformation of the data to a different state or thing.
On page 10 of the response, Applicant cites paragraphs 23-24 of the Specification to show that the invention is meant to improve data extraction. It is not clear, however, how the claimed invention improves the actual ability to extract data, much less how the additional elements themselves accomplish any such improvement from a technical perspective.
Applicant submits that the Examiner has not provided any evidence that the claimed combination of additional elements is well-understood, routine, and conventional (page 11 of Applicant’s response). This analysis arises in Step 2B of the Subject Matter Eligibility test, particularly with regard to the operations of the additional elements. The Examiner did not identify any operations of the additional elements, alone or in combination, that are presented with any level of detail beyond a general application of the additional elements to implement the abstract ideas; accordingly, no additional elements were identified as performing operations that require Berkheimer evidence in Step 2B. Applicant likewise does not point out which specific combination of additional elements and their corresponding operations is not well-understood, routine, and conventional. While the claims have been deemed allowable over the prior art, that assessment was based largely on the combination of operations that speaks to the details of the identified judicial exceptions (as opposed to any unconventional operations of the additional elements themselves from a technical perspective).
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 34-40 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Independent claim 34 has been amended to recite that the text position information of the extracted text is determined “based on one or more pixel regions of image that each correspond to the extracted text.” Beyond generally describing the use of a graphical user interface (which may be interpreted as inherently including pixels in the display), Applicant’s original disclosure does not address any specific image analysis based on actual pixel regions of an image that correspond to the extracted text. Therefore, this limitation is deemed to present new matter. The dependent claims inherit the rejection. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 21-40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea, specifically task management, without significantly more.
Step 1: Statutory Category?
Yes – The claims fall within at least one of the four categories of patent-eligible subject matter: Process (claims 28-40) and Apparatus (claims 21-27).
Independent claims:
Step 2A – Prong 1: Judicial Exception Recited?
Yes – Aside from the additional elements identified in Step 2A – Prong 2 below, the claims recite:
[Claim 21] receiving a user selection to capture tasks from input data;
extracting text from input data and determining position information for the extracted text;
extracting contextual information from the extracted text and the position information from the input data, wherein the contextual information includes a first task, subtask details of the first task, and a second task;
processing of the input data and the position information to identify a first task, subtask details of the first task, a second task, and symbolic information in the subtask details of the first task, wherein the symbolic information at least includes a mark in textual form;
generating, based on the position information and the context information, a first item of a plurality of items as the first task, wherein the symbolic information indicates the first task as being a subtask of the second task;
generating a second item of the plurality of items as the second task based on the position information and the contextual information;
assigning, based on the subtask details of the first task, a relationship between the first task and the second task to at least one of the first task and the second task, wherein the relationship indicates that the first task is a subtask of the second task; and
generating, for display of the first task, the second task, and the assigned relationship between the first task and the second task, an updated display of the first task and the second task, and receiving actuation to move one or more representative icons to a position on the display, thereby defining a tracked execution status of the first task and the second task as extracted from the input data and indicating the assigned relationship between the first task and the second task.
[Claim 28] A method for extracting a task from an image, the method comprising:
receiving, from a user, an image depicting a plurality of tasks and subtasks;
performing, in response to the user selection received, a process to extract text from input data and determine relative position information for the extracted text based on a relative layout of one or more data portions within the input data that each correspond to the extracted text;
determining position information of the extracted text based on a relative position of at least a portion of the image associated with the extracted text;
extracting contextual information from the image, the contextual information comprising a first portion of contextual information and a second portion of contextual information;
generating a first task based on the position of the portion of the extracted text and the first portion of the contextual information extracted from the image, wherein the extracted contextual information includes subtask details of the first task, wherein the subtask details include symbolic information, and wherein the symbolic information, via at least one of a mark or text, indicates the first task as being a subtask of another task;
generating a second task based on the second portion of contextual information extracted from the image;
assigning, based on the subtask details of the first task, a relationship between the first task and the second task to at least one of the first task and the second task, wherein the relationship indicates that the first task is a subtask of the second task, and the subtask of the second task is at least one of a step of the second task or a portion of the second task; and
generating, for display of the plurality of tasks and subtasks, a display of one or more icons, wherein the one or more icons correspond to the plurality of tasks and subtasks, and receiving actuation to move the one or more icons to a position on the display, thereby defining a tracked execution status of the first task and the second task as extracted from the image and indicating the assigned relationship between the first task and the second task.
[Claim 34] A method for extracting task information from an image, the method comprising:
receiving an image depicting a plurality of tasks and subtasks;
extracting text from the image, the extracted text comprising one or more portions;
determining text position information of the extracted text based on one or more regions of image that each correspond to the extracted text;
extracting contextual information from at least one of the image or the extracted text, the contextual information comprising a first portion of contextual information, a second portion of contextual information, and text positioning information for the portion of the text extracted from the image;
generating a first task based on the portion of the text extracted from the image and the text positioning information for the portion of the text, wherein the first portion of the extracted contextual information includes subtask details of the first task, wherein the subtask details include symbolic information, and wherein the symbolic information, via at least one of a mark or text, indicates the first task as being a subtask of another task;
generating a second task based on the second portion of the contextual information extracted from the image; and
assigning, based on the subtask details of the first task, a relationship between the first task and the second task to at least one of the first task and the second task, wherein the relationship indicates that the first task is a subtask of the second task, and the subtask of the second task is at least one of a step of the second task or a portion of the second task; and
generating, for display of the plurality of tasks and subtasks, a display of one or more icons, wherein the one or more icons correspond to the plurality of tasks and subtasks, and receiving actuation to move the one or more icons to a position on the display, thereby defining a tracked execution status of the first task and the second task as extracted from the image and indicating the assigned relationship between the first task and the second task.
Aside from the additional elements, the aforementioned claim details exemplify the abstract idea(s) of a mental process (since the details include concepts performed in the human mind, including an observation, evaluation, judgment, and/or opinion). As explained in MPEP § 2106.04(a)(2)(III), “The courts consider a mental process (thinking) that ‘can be performed in the human mind, or by a human using a pen and paper’ to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, ‘methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas—the “basic tools of scientific and technological work” that are open to all.’ 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)).” The limitations reproduced above, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind (and/or by a human using a pen and paper) but for the recitation of generic computer components. That is, other than reciting the additional elements identified in Step 2A – Prong 2 below, nothing in the claim elements precludes the steps from practically being performed in the mind and/or by a human using a pen and paper.
For example, by reading words, which are defined by relational positions of letters, and sentences, which are defined by relational positions of words, a human user can evaluate a received image (e.g., read text and/or review an image corresponding to a frame of a video) and extract text and context in light of the position information of the letters, words, etc. A human user can also interpret the meaning of symbolic information, a mark, text, etc. The human user can additionally make decisions based on the read information, including decisions related to assigning tasks and subtasks. A human user can also draw out tasks and subtasks on a piece of paper (e.g., cause the task and subtasks to be displayed) and store the drawn out information in a storage area (e.g., in a hanging file of a file cabinet) and arrange representative icons (e.g., drawn out by hand) in a specific manner. The human user may also hear and process audio data and make decisions related to data that was heard (including decisions related to the assignment of tasks). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind (and/or with pen and paper) but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Aside from the additional elements, the aforementioned claim details also exemplify a method of organizing human activity (since the details include managing personal behavior or relationships or interactions between people, including following rules or instructions). More specifically, the evaluated process is related to task management, which, under its broadest reasonable interpretation, is an example of managing personal behavior or relationships or interactions between people (i.e., organizing human activity); therefore, aside from the recitations of generic computer and other processing components (identified in Step 2A – Prong 2 below), the limitations identified in the more detailed claim listing above encompass the abstract idea of organizing human activity. Additionally, receiving actuation from a user to move icons is an example of instructing user behavior (i.e., organizing human activity).
Extracting certain types of information is an example of filtering content. MPEP § 2106.04(a)(2)(II)(C) cites the following as an example of managing personal behavior, i.e., organizing human activity: “filtering content, BASCOM Global Internet v. AT&T Mobility, LLC, 827 F.3d 1341, 1345-46, 119 USPQ2d 1236, 1239 (Fed. Cir. 2016) (finding that filtering content was an abstract idea under step 2A, but reversing an invalidity judgment of ineligibility due to an inadequate step 2B analysis).” MPEP § 2106.04(a)(2)(III)(D) cites the following as an example of a mental process: “An application program interface for extracting and processing information from a diversity of types of hard copy documents – Content Extraction, 776 F.3d at 1345, 113 USPQ2d at 1356.”
Step 2A – Prong 2: Integrated into a Practical Application?
No – The judicial exception(s) is/are not integrated into a practical application.
Claim 21 includes a system comprising:
at least one processor; and
memory storing instructions that, when executed by the at least one processor, causes the system to perform the recited set of operations.
The set of operations includes at least some machine-based operations, including the following:
performing, in response to the user selection received via the graphical user interface, a machine-based recognition process to extract text from input data and determine relative position information for the extracted text;
performing machine-based natural language processing of the input data and the position information.
Claim 21 also uses a graphical user interface to receive input data and present data for display at a high level of generality.
Claim 28 recites receiving, at a processing device, an image depicting a plurality of tasks and subtasks. Claim 28 also recites that a graphical user interface is generated to display information and configured to receive actuation to move icons around.
Claim 34 recites receiving, at a processing device of a user, an image depicting a plurality of tasks and subtasks. Claim 34 recites that text is extracted from the image using optical character recognition. Additionally, claim 34 recites determining text position information of the extracted text based on one or more pixel regions of image that each correspond to the extracted text. Claim 34 also recites that a graphical user interface is generated to display information and configured to receive actuation to move icons around.
The use of optical character recognition is explicitly recited in independent claim 34, yet the recitation is at a very high level and presents only a general link to technology and a field of use. Similarly, the use of pixel regions to determine text position information likewise presents only a general link to technology and to a field of use.
It is also noted that what the graphical user interface is configured to perform does not carry significant patentable weight in the method claims, since method claims are defined and limited by positively recited steps; a configured operation is not necessarily actively performed within the scope of a method claim.
The claims as a whole merely describe how to generally “apply” the abstract idea(s) in a computer environment. The claimed processing elements are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea(s). Simply implementing the abstract idea(s) on a general-purpose processor is not a practical application of the abstract idea(s); Applicant’s specification discloses that the invention may be implemented using general-purpose processing elements and other generic components (Spec: ¶¶ 36, 40, 43, 48-61).
The use of a processor or other processing elements (e.g., as recited in all of the claims) facilitates generic processor operations, and the use of a memory or machine-readable media storing executable instructions likewise facilitates generic storage and execution operations.
The additional elements are recited at a high level of generality (i.e., as generic processing elements performing generic computer functions) such that the incorporation of the additional processing elements amounts to no more than mere instructions to apply the judicial exception(s) using generic computer components. There is no indication in the Specification that the steps/functions of the claims require any inventive programming or necessitate any specialized or other inventive computer components (i.e., the steps/functions may be implemented using the capabilities of general-purpose computer components). Accordingly, the additional elements do not integrate the abstract ideas into a practical application because they do not impose any meaningful limits on practicing the abstract idea(s). The claims are directed to the abstract idea(s).
The processing components presented in the claims simply utilize the capabilities of a general-purpose computer and are thus merely tools to implement the abstract idea(s). As explained in MPEP § 2106.05(a)(I) and § 2106.05(f)(2), accelerating a process where the increased speed comes solely from the capabilities of a general-purpose computer is not sufficient to show an improvement in computer functionality; it amounts to a mere invocation of computers or machinery as a tool to perform an existing process (see FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016)).
There is no transformation or reduction of a particular article to a different state or thing recited in the claims.
Additionally, even when considering the operations of the additional elements as an ordered combination, the ordered combination does not amount to significantly more than what is present in the claims when each operation is considered separately.
Step 2B: Claim(s) Provide(s) an Inventive Concept?
No – The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception(s). As discussed above with respect to integration of the abstract idea(s) into a practical application, the use of the additional elements to perform the steps identified in Step 2A – Prong 1 above amounts to no more than mere instructions to apply the exceptions using a generic computer component(s). Mere instructions to apply an exception using a generic computer component(s) cannot provide an inventive concept. The claims are not patent eligible.
Dependent claims:
Step 2A – Prong 1: Judicial Exception Recited?
Yes – Aside from the additional elements identified in Step 2A – Prong 2 below, the claims recite:
[Claim 22] wherein the input data is an image depicting a plurality of tasks and subtasks, and the set of operations further comprises:
extracting a task depicted in the image;
determining a characteristic from contextual information associated with the task; and
identifying a user based on the characteristic.
[Claim 23] wherein the set of operations further comprises:
receiving an indication of a modification to one or more tasks of the plurality of tasks; and
storing the modification to the one or more tasks of the plurality of tasks.
[Claim 24] wherein the input data includes audio data, and wherein the set of operations further comprises:
extracting the contextual information from the audio data;
based on the extracted contextual information from the audio data, identifying the at least one item of the plurality of items associated with the contextual information; and
assigning the at least one item to the user based on the extracted contextual information from the audio data.
[Claim 25] wherein the plurality of items includes one or more tasks assigned to a user.
[Claim 26] wherein the plurality of items includes a plurality of voice memos.
[Claim 27] wherein the set of operations further comprises:
acquiring an image of an environment of a user;
determining if the acquired image includes one or more tasks; and
extracting the one or more tasks for the user based on the acquired image.
[Claim 29] receiving an indication of a modification to one or more tasks of the plurality of tasks; and
storing the modification to the one or more tasks of the plurality of tasks.
[Claim 30] receiving audio data;
extracting the contextual information from the audio data;
based on the extracted contextual information from the audio data, identifying the task; and
assigning the task to the user based on the extracted contextual information from the audio data.
[Claim 31] wherein the contextual information includes text positioning information relative to other text.
[Claim 32] wherein the image is an image of an environment of the user, the method further comprising:
extracting a plurality of tasks for the user based on the received image.
[Claim 33] extracting text from the image, the text having been made accessible via an optical character recognition process; and
obtaining text positioning information from the image.
[Claim 35] receiving an indication of a modification to one or more tasks of the plurality of tasks; and
storing the modification to the one or more tasks of the plurality of tasks.
[Claim 36] assigning the first task to a user based on the extracted contextual information.
[Claim 37] identifying a user associated with the contextual information, wherein the user is assigned to the first task.
[Claim 38] determining if the image includes one or more tasks;
extracting first and second delineators from the image, the first and second delineators indicating that text associated with the first and second delineators is a task and/or subtask;
generating the first task based on the first delineator; and
generating the subtask based on the second delineator.
[Claim 39] wherein the text positioning information includes identifying a character delineating a task from other text information in the image.
[Claim 40] wherein the image is a first frame of a video, and the text positioning information is based on a placement of text over time.
The dependent claims further define the abstract ideas identified in regard to the independent claims above.
The mental-process analysis set forth above with respect to the independent claims applies equally to these limitations: as drafted and under their broadest reasonable interpretation, they cover performance in the mind and/or by a human using a pen and paper but for the recitation of generic computer components. For example, a human user can read and evaluate a received image, interpret the meaning of symbolic information, hear and process audio data, make decisions related to assigning tasks and subtasks, and draw out and arrange tasks, subtasks, and representative icons on paper. Likewise, the organizing-human-activity analysis set forth above applies equally, as the evaluated process remains related to task management, an example of managing personal behavior or relationships or interactions between people. Accordingly, the dependent claims also recite abstract ideas.
2A – Prong 2: Integrated into a Practical Application?
No – The judicial exception(s) is/are not integrated into a practical application.
The dependent claims include the additional elements of the respective independent claim from which each depends.
Claim 21 includes a system comprising:
at least one processor; and
memory storing instructions that, when executed by the at least one processor, cause the system to perform the recited set of operations.
The set of operations includes at least some machine-based operations, including the following:
performing, in response to the user selection received via the graphical user interface, a machine-based recognition process to extract text from input data and determine relative position information for the extracted text;
performing machine-based natural language processing of the input data and the position information.
Claim 21 also recites, at a high level of generality, a graphical user interface used to receive input data and present data for display.
Claim 23 recites receiving an indication via the user interface and storing the modification to the one or more tasks of the plurality of tasks in a storage area.
Claim 28 recites receiving, at a processing device, an image depicting a plurality of tasks and subtasks. Claim 28 also recites that a graphical user interface is generated to display information and configured to receive actuation to move icons around.
Claim 29 recites receiving, via the user interface, an indication of a modification to one or more tasks of the plurality of tasks; and
storing the modification to the one or more tasks of the plurality of tasks in a storage area.
Claim 30 recites receiving, at the processing device, audio data.
Claim 33 recites extracting text from the image, the text having been made accessible via an optical character recognition process.
Claim 34 recites receiving, at a processing device of a user, an image depicting a plurality of tasks and subtasks. Claim 34 recites that text is extracted from the image using optical character recognition. Additionally, claim 34 recites determining text position information of the extracted text based on one or more pixel regions of the image that each correspond to the extracted text. Claim 34 also recites that a graphical user interface is generated to display information and configured to receive actuation to move icons around.
Claim 35 recites receiving, via the user interface, an indication of a modification to one or more tasks of the plurality of tasks; and
storing the modification to the one or more tasks of the plurality of tasks in a storage area.
The recitation of “the text having been made accessible via an optical character recognition process” in claim 33 presents a limitation that is outside the scope of the method claim since the act of processing the text using an optical character recognition process is not presented as a positively recited step of the method. Even if more positively recited, a simple recitation of use of an optical character recognition process would merely be an example of a general link to technology. For example, the use of optical character recognition is more explicitly recited in independent claim 34, yet the recitation is very high level and presents a general link to technology and a field of use. Similarly, the fact that pixel regions are used to determine text position information also presents a general link to technology and to a field of use.
It is also noted that what the graphical user interface is configured to perform does not impart significant patentable weight to the method claims, since method claims are defined and limited by their positively recited steps. A configured operation is not necessarily actively performed within the scope of a method claim.
The claims as a whole merely describe how to generally “apply” the abstract idea(s) in a computer environment. The claimed processing elements are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea(s). Simply implementing the abstract idea(s) on a general-purpose processor is not a practical application of the abstract idea(s); Applicant’s specification discloses that the invention may be implemented using general-purpose processing elements and other generic components (Spec: ¶¶ 36, 40, 43, 48-61).
The use of a processor/processing elements (e.g., as recited in all of the claims) facilitates generic processor operations. The use of a memory or machine-readable media with executable instructions facilitates generic processor operations.
The additional elements are recited at a high level of generality (i.e., as generic processing elements performing generic computer functions) such that the incorporation of the additional processing elements amounts to no more than mere instructions to apply the judicial exception(s) using generic computer components. There is no indication in the Specification that the steps/functions of the claims require any inventive programming or necessitate any specialized or other inventive computer components (i.e., the steps/functions of the claims may be implemented using capabilities of general-purpose computer components). Accordingly, the additional elements do not integrate the abstract ideas into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea(s).
The processing components presented in the claims simply utilize the capabilities of a general-purpose computer and are, thus, merely tools to implement the abstract idea(s). As seen in MPEP § 2106.05(a)(I) and § 2106.05(f)(2), the court found that accelerating a process when the increased speed solely comes from the capabilities of a general-purpose computer is not sufficient to show an improvement in computer-functionality and it amounts to a mere invocation of computers or machinery as a tool to perform an existing process (see FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016)).
There is no transformation or reduction of a particular article to a different state or thing recited in the claims.
Additionally, even when considering the operations of the additional elements as an ordered combination, the ordered combination does not amount to significantly more than what is present in the claims when each operation is considered separately.
2B: Claim(s) Provide(s) an Inventive Concept?
No – The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception(s). As discussed above with respect to integration of the abstract idea(s) into a practical application, the use of the additional elements to perform the steps identified in Step 2A – Prong 1 above amounts to no more than mere instructions to apply the exceptions using a generic computer component(s). Mere instructions to apply an exception using a generic computer component(s) cannot provide an inventive concept. The claims are not patent eligible.
Allowable Subject Matter
Claims 21-40 would be allowable over the prior art of record; however, the claims remain rejected under 35 U.S.C. § 101, as set forth above.
The following is a statement of reasons for the indication of allowable subject matter:
Blue et al. (US 2019/0130227) in view of Sridhara et al. (US 2020/0342369) in view of Bigelow et al. (US 2009/0234721) in view of Pope et al. (US 2012/0116835) in view of Rogut et al. (US 2014/0149172) most closely address the claimed limitations, as explained in the art rejections presented in the Office action dated September 13, 2024. Blue further discloses a user interface that allows a user to view tasks and move them around (Blue: ¶ 52). Nevertheless, given the specific coordinated integration of operations in the claims, the Examiner does not find that one of ordinary skill in the art, prior to Applicant’s effective filing date, would have found it obvious, or would have been motivated, to combine the teachings and suggestions of all of these references to create all of the operations of the invention, especially as specifically ordered and integrated according to the details of independent claims 21, 28, and 34. Claims 21-40 are therefore deemed to be allowable over the prior art of record.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUSANNA M DIAZ whose telephone number is (571)272-6733. The examiner can normally be reached M-F, 8 am-4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein can be reached at (571) 270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SUSANNA M. DIAZ/
Primary Examiner
Art Unit 3625A