Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of Claims
This action is a first action on the merits in response to the application filed on 06/05/2024.
Claims 1-19 are currently pending and have been examined in this application.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 18 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.
Claim 18 recites, “A task flow analysis model that is stored in a computer-readable recording medium, the task flow analysis model comprising…” It is unclear what statutory category is being claimed, as a model is not one of the statutory categories. For purposes of examination, the Examiner interprets the claim as a method. Claim 19 is rejected under the same rationale.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites:
obtaining operation history data according to use of the electronic device of the user - the operation history data including a plurality of log data and a plurality of image data;
obtaining reference task data using the operation history data - the reference task data including at least one sequence data and the sequence data being created using at least some of the plurality of log data and at least some of the plurality of image data; and
obtaining forecast task data using the reference task data and a task forecast module, wherein the task forecast module includes an encoder that outputs intermediate data by receiving the reference task data, and a decoder that obtains the intermediate data from the encoder and outputs data using the intermediate data, wherein the encoder includes neural networks of the number corresponding to the number of sequence data included in the reference task data.
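Purely as a non-limiting technical illustration of the encoder-decoder arrangement recited above (not drawn from the claims or any cited reference; all weights, dimensions, and names below are hypothetical stand-ins), the one-encoder-network-per-sequence structure can be sketched as:

```python
import math
import random

random.seed(0)

def tiny_net(dim_in, dim_out):
    """A stand-in 'neural network': one random linear layer with tanh."""
    w = [[random.gauss(0, 1) for _ in range(dim_out)] for _ in range(dim_in)]
    def forward(x):
        return [math.tanh(sum(xi * w[i][j] for i, xi in enumerate(x)))
                for j in range(dim_out)]
    return forward

def encode(sequences, dim_out=4):
    """One network per input sequence (the claimed number correspondence);
    the concatenated pooled outputs serve as the intermediate data."""
    intermediate = []
    for seq in sequences:
        net = tiny_net(len(seq[0]), dim_out)
        outs = [net(step) for step in seq]          # per-step outputs
        intermediate += [sum(col) / len(outs) for col in zip(*outs)]
    return intermediate

def decode(intermediate, steps=3):
    """Decoder obtains the intermediate data and outputs forecast data."""
    net = tiny_net(len(intermediate), 2)
    return [net(intermediate) for _ in range(steps)]

# Two reference sequences -> two encoder networks -> one intermediate vector.
ref = [[[0.1, 0.2, 0.3]] * 5, [[0.4, 0.5, 0.6]] * 7]
z = encode(ref)        # 2 sequences x 4 dims = 8 intermediate values
forecast = decode(z)   # 3 decoded output steps
```

The sketch only shows the data flow (sequences in, intermediate data, decoded output); a real model would use trained weights.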
The limitations, under their broadest reasonable interpretation, cover Mental Processes related to observation and evaluation of data, but for the recitation of generic computer components (e.g., an electronic device). For example, obtaining operation history data, obtaining reference task data, obtaining forecast task data, and forecasting using neural networks involve collecting and analyzing data (i.e., observation and evaluation). Accordingly, the claim recites an abstract idea in the Mental Processes grouping.
Claim 18 recites:
a work history collection module that obtains work history data corresponding to work performed through the electronic device - the work history data including a plurality of log data and a plurality of image data;
a preprocessing module that obtains reference task data by processing the work history data - the reference task data including at least one sequence data and the sequence data being created using at least some of the plurality of log data and at least some of the plurality of image data; and
a task prediction module that obtains prediction task data using at least the reference task data, wherein the task prediction module includes an encoder that outputs intermediate data by receiving the reference task data, and a decoder that obtains the intermediate data from the encoder and outputs data using the intermediate data, and the encoder includes neural networks of the number corresponding to the number of sequence data included in the reference task data.
The limitations, under their broadest reasonable interpretation, cover Mental Processes related to observation and evaluation of data, but for the recitation of generic computer components (e.g., an electronic device). For example, obtaining work history data, obtaining reference task data, obtaining prediction task data, and forecasting using neural networks involve collecting and analyzing data (i.e., observation and evaluation). Accordingly, the claim recites an abstract idea in the Mental Processes grouping.
Independent Claim 19 substantially recites the subject matter of Claim 18 and also includes the abstract idea identified above. The dependent claims encompass the same abstract ideas. For instance, Claim 2 is directed to log data classified as event log data corresponding to execution of application programs; Claim 3 is directed to image data classified as action image data; Claim 4 is directed to classifying log data and image data into a task group or a non-task group; Claim 5 is directed to obtaining reference data; Claim 6 is directed to sequence data that includes log vector data; Claim 7 is directed to obtaining tokenized log data; Claim 8 is directed to vector data; Claims 9-10 are directed to the neural network type; Claim 11 is directed to prediction sequence data; Claims 12-13 are directed to examining prediction task data; Claim 14 is directed to similarity between prediction log data; Claim 15 is directed to creating a secondary prediction sequence; Claim 16 is directed to certain log data; and Claim 17 is directed to displaying information about a next task. Thus, the dependent claims further limit the abstract concepts found in the independent claims.
The judicial exceptions are not integrated into a practical application. Claim 1 recites the additional elements of an electronic device. Claim 18 recites the additional elements of a computer-readable recording medium and an electronic device. Claim 19 recites the additional element of an electronic device. These are generic computer components recited at a high level of generality as performing generic computer functions (see Spec ¶0066).
For instance, the steps of obtaining operation history data, obtaining reference task data, and obtaining forecast task data are data-gathering activities. The step in which a decoder obtains the intermediate data from an encoder that includes neural networks and outputs data using the intermediate data involves collecting and analyzing data using complex mathematical operations. Each of the additional limitations is no more than mere instructions to apply the exception using a generic computer component (e.g., an electronic device). The combination of these additional elements is likewise no more than mere instructions to apply the exception using a generic computer component. Therefore, the additional elements do not integrate the abstract idea into a practical application because they do not impose meaningful limits on practicing the abstract idea. Accordingly, the claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As stated above, the additional elements of an electronic device and a computer-readable recording medium are generic computer components performing generic computer functions that amount to no more than instructions to implement the judicial exception. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
The dependent claims, when analyzed both individually and in combination, are also held to be ineligible for the same reasons above, and the additionally recited limitations fail to establish that the claims are not directed to an abstract idea. The additional limitations of the dependent claims, when considered individually and as an ordered combination, do not amount to significantly more than the abstract idea.
Looking at these limitations as an ordered combination and individually adds nothing that is sufficient to amount to significantly more than the recited abstract idea, because the limitations simply provide instructions to use generic computer components to "apply" the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claims as a whole amount to significantly more than the abstract idea itself. Therefore, Claims 1-19 are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 9-13, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sinha et al. (US 2022/0004898) in view of Garg et al. (US 2021/0350134).
Claim 1:
Sinha discloses:
A task prediction method that is performed in an electronic device of a user, the task prediction method comprising: (see at least ¶0031, predicted digital action)
obtaining operation history data according to use of the electronic device of the user - the operation history data including a plurality of log data [and a plurality of image data]; (see at least ¶0043, past session data; see also ¶0051, track digital actions selected by users for storage in a database; see also ¶0055, behavior logs; see also ¶0030)
obtaining reference task data using the operation history data the reference task data including at least one sequence data and the sequence data being created using at least some of the plurality of log data [and at least some of the plurality of image data]; and (see at least ¶0077-¶0078, the set of digital action sequences does not include all digital action sequences from the digital behavior log)
obtaining forecast task data using the reference task data and a task forecast module, wherein the task forecast module includes an encoder that outputs intermediate data by receiving the reference task data, and a decoder that obtains the intermediate data from the encoder and outputs data using the intermediate data, wherein the encoder includes neural networks of the number corresponding to the number of sequence data included in the reference task data. (see at least ¶0082-¶0083, attention neural network utilizes a decoder to analyze values generated by the session-level encoder to generate a predicted digital action sequence)
While Sinha discloses the above limitations, Sinha does not explicitly disclose capturing image data; however, Garg does disclose:
obtaining operation history data according to use of the electronic device of the user - the operation history data including a plurality of log data and a plurality of image data; (see at least Abstract, capturing a screen of a worker device while worker performs a task; see also ¶0031, capturing images on worker device)
obtaining reference task data using the operation history data - the reference task data including at least one sequence data and the sequence data being created using at least some of the plurality of log data and at least some of the plurality of image data; and (see at least ¶0073, plotting sequences of work activities; see also Claim 7, process map visualizing a sequence of activity labels or a plurality of images)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the collection of digital actions to create a digital behavior log and the identification of digital action sequences of Sinha with the capturing of screen images as a user performs a task of Garg, to assist in analyzing operation processes.
Claim 2:
Sinha and Garg disclose claim 1. Sinha further discloses:
wherein the plurality of log data is classified as event log data corresponding to execution of application programs stored at least in the electronic device or action log data corresponding to specific function performance in the application programs. (see at least ¶0032, digital behavior logs include digital actions selected by a user on one or more platforms, operating system or computer applications; see also ¶0071-¶0072, grouping task identifying digital actions that correspond to desired task)
Claim 3:
While Sinha and Garg disclose claim 2, Sinha does not explicitly disclose the following limitations; however, Garg does disclose:
wherein the plurality of image data is classified as action image data relating to at least the specific function performance or screen image data relating to the specific function performance, the action image data is data relating to at least some of images that are output through a screen of the electronic device, the screen image data is data relating to at least some of images that are output through the screen of the electronic device, and the size of an image corresponding to the screen image data is larger than the size of an image corresponding to the action image data. (see at least Figures 5A-5C and associated text; see also ¶0053-¶0057, image analysis module determines activity label for an image which is descriptive of the image)
Claim 4:
While Sinha and Garg disclose claim 1 and Sinha further discloses: classifying the plurality of log data [and the plurality of image data] at least into a task group or a non-task group (see at least Figure 3 and ¶0071-¶0072, task-identifying digital actions are grouped), Sinha does not explicitly disclose classifying images; however, Garg does disclose:
wherein the obtaining of reference task data includes: classifying the plurality of log data and the plurality of image data at least into a task group or a non-task group; and (see at least ¶0073, plotting the sequence in which activities took place based on time stamps and activity labels; see also ¶0032, the process mining system groups images into activity groups and generates visual logs of images in chronological order; see also ¶0033, determines activity label for each activity group that indicates one or more activities performed by the worker device)
creating the at least one sequence data using log data and image data classified as the task group. (see at least ¶0036, generating a process map indicating a sequence of activity; see also ¶0073, plotting the sequence in which activities took place based on time stamps and activity labels)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the grouping of working actions of Sinha with the image activity labels of Garg to associate worker activity data with captured image data, which provides more meaningful information.
Claim 5:
While Sinha and Garg disclose claim 1, Sinha does not explicitly disclose the following limitation; however, Garg does disclose:
wherein the obtaining of reference task data includes creating the sequence data by classifying the plurality of log data and the plurality of image data on the basis of at least one event log data, and the event log data that is one of the plurality of log data corresponds to execution or end of an application program stored in the electronic device. (see at least ¶0073, plotting the sequence in which activities took place based on time stamps and activity labels; see also ¶0032, the process mining system groups images into activity groups and generates visual logs of images in chronological order; see also ¶0033, determines activity label for each activity group that indicates one or more activities performed by the worker device)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the grouping of working actions of Sinha with the image activity labels of Garg to associate worker activity data with captured image data, which provides more meaningful information.
Claim 9:
Sinha and Garg disclose claim 1. Sinha further discloses:
wherein the neural network is a recurrent neural network model. (see at least ¶0063, various recurrent networks)
Claim 10:
Sinha and Garg disclose claim 1. Sinha further discloses:
wherein the neural network model is a Long-Short Term Memory (LSTM) model. (see at least ¶0081)
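As a non-limiting technical illustration of the recited LSTM model (not drawn from the claims or the cited references; the weights below are hypothetical placeholders), a single LSTM cell step can be sketched as:

```python
import math

def lstm_step(x, h, c, W):
    """One step of a minimal LSTM cell (scalar input and state for brevity).
    W holds hypothetical (w_x, w_h, b) weight triples for the input,
    forget, and output gates and the candidate value."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    def gate(name, squash):
        w_x, w_h, b = W[name]
        return squash(w_x * x + w_h * h + b)
    i = gate("input", sigmoid)
    f = gate("forget", sigmoid)
    o = gate("output", sigmoid)
    g = gate("cand", math.tanh)
    c_new = f * c + i * g           # cell state carries long-term memory
    h_new = o * math.tanh(c_new)    # hidden state is the short-term output
    return h_new, c_new

# Run a short input sequence through the cell with placeholder weights.
W = {k: (0.5, 0.5, 0.0) for k in ("input", "forget", "output", "cand")}
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.5]:
    h, c = lstm_step(x, h, c, W)
```

The gating structure (input, forget, output) is what distinguishes an LSTM from a plain recurrent network; in practice the weights are learned from training data.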
Claim 11:
Sinha and Garg disclose claim 1. Sinha further discloses:
wherein the obtaining of prediction task data includes: creating at least one prediction sequence data by inputting the intermediate data and start data into the decoder; and creating the prediction task data using the at least one prediction sequence data, and the prediction sequence data includes at least one prediction log data. (see at least ¶0083, utilize the decoder to analyze values generated by the session level encoder and the decoder generates a predicted digital action sequence)
Claim 12:
Sinha and Garg disclose claim 1. Sinha further discloses:
further comprising examining the prediction task data. (see at least ¶0028, bias detection in predicted sessions data; see also ¶0023)
Claim 13:
Sinha and Garg disclose claim 12. Sinha further discloses:
wherein the prediction task data includes at least one primary prediction sequence data - the primary prediction sequence data including at least one prediction log data, and (see at least Figure 5A and ¶0105, bias detection compares the predicted digital action sequence with an observed digital action sequence)
the examining of prediction task data includes comparing the at least one prediction log data of the primary prediction sequence data with at least one certain log data. (see at least Figure 5A and ¶0105, bias detection compares the predicted digital action sequence with an observed digital action sequence)
Claim 16:
Sinha and Garg disclose claim 11. Sinha further discloses:
wherein the at least one certain log data is a portion of the plurality of log data. (see at least ¶0105, compares the predicted digital action sequence with an observed digital action sequence)
Claim 17:
Sinha and Garg disclose claim 1. Sinha further discloses:
further comprising displaying information about a next task, which should be performed after a task corresponding to the reference task data, on the electronic device on the basis of the prediction task data. (see at least ¶0025 and ¶0139)
Claim 18:
Sinha discloses:
A task flow analysis model that is stored in a computer-readable recording medium, the task flow analysis model comprising: (see at least ¶0004, non-transitory computer-readable media; see also Abstract)
a work history collection module that obtains work history data corresponding to work performed through the electronic device - the work history data including a plurality of log data [and a plurality of image data]; (see at least ¶0043, past session data; see also ¶0051, track digital actions selected by users for storage in a database; see also ¶0055, behavior logs; see also ¶0030)
a preprocessing module that obtains reference task data by processing the work history data - the reference task data including at least one sequence data and the sequence data being created using at least some of the plurality of log data [and at least some of the plurality of image data]; and (see at least ¶0077-¶0078, the set of digital action sequences does not include all digital action sequences from the digital behavior log)
a task prediction module that obtains prediction task data using at least the reference task data, wherein the task prediction module includes an encoder that outputs intermediate data by receiving the reference task data, and a decoder that obtains the intermediate data from the encoder and outputs data using the intermediate data, and the encoder includes neural networks of the number corresponding to the number of sequence data included in the reference task data. (see at least ¶0082-¶0083, attention neural network utilizes a decoder to analyze values generated by the session-level encoder to generate a predicted digital action sequence)
While Sinha discloses the above limitations, Sinha does not explicitly disclose capturing image data; however, Garg does disclose:
a work history collection module that obtains work history data corresponding to work performed through the electronic device - the work history data including a plurality of log data and a plurality of image data; (see at least Abstract, capturing a screen of a worker device while worker performs a task; see also ¶0031, capturing images on worker device)
a preprocessing module that obtains reference task data by processing the work history data - the reference task data including at least one sequence data and the sequence data being created using at least some of the plurality of log data and at least some of the plurality of image data (see at least ¶0073, plotting sequences of work activities; see also Claim 7, process map visualizing a sequence of activity labels or a plurality of images)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the collection of digital actions to create a digital behavior log and the identification of digital action sequences of Sinha with the capturing of screen images as a user performs a task of Garg, to assist in analyzing operation processes.
Claim 19 for an electronic device (Sinha Figure 10) substantially recites the subject matter of Claim 18 for a computer readable recording medium (Sinha ¶0004) and is rejected based on the same rationale.
Claims 6, 7, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Sinha et al. (US 2022/0004898) in view of Garg et al. (US 2021/0350134), further in view of Masood et al. (WO 2021/024145).
Claim 6:
While Sinha and Garg disclose claim 1, neither explicitly discloses the following limitations; however, Masood does disclose:
wherein the sequence data includes at least one log vector data obtained by processing at least some of the plurality of log data and at least one image vector data obtained by processing at least some of the plurality of image data. (see at least ¶0040-¶0043, vectorizers for converting log data into vectors; see also ¶0063)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the collection of digital actions to create a digital behavior log and the identification of digital action sequences of Sinha, and the capturing of screen images as a user performs a task of Garg, with the representation of event instances as vectors of Masood, to correlate the plurality of event vectors to identify one or more processes (Abstract).
Claim 7:
While Sinha and Garg disclose claim 1, neither explicitly discloses the following limitations; however, Masood does disclose:
wherein the obtaining of reference task data includes: obtaining at least one tokenized log data by tokenizing at least some of the plurality of log data; (see at least ¶0042, vectorizers can tokenize data extracted from adaptors and then create vectors from tokenized data)
and obtaining log vector data by embedding the tokenized log data. (see at least ¶0042, vectorizers can tokenize data extracted from adaptors and then create vectors from tokenized data; see also ¶0064)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the collection of digital actions to create a digital behavior log and the identification of digital action sequences of Sinha, and the capturing of screen images as a user performs a task of Garg, with the representation of event instances as vectors of Masood, to correlate the plurality of event vectors to identify one or more processes (Abstract).
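As a non-limiting technical illustration of the tokenize-then-embed pipeline recited in claim 7 (not drawn from the claims or the cited references; the log format and the index-based embedding below are hypothetical simplifications), the two steps can be sketched as:

```python
def tokenize(log_line):
    # Split a raw log entry into tokens (hypothetical "key=value" format).
    return log_line.lower().replace("=", " ").split()

def embed(tokens, vocab):
    # Map each token to a stand-in for a learned embedding: here, a
    # vocabulary index assigned on first sight.
    return [vocab.setdefault(t, len(vocab)) for t in tokens]

# Tokenize one hypothetical log entry, then embed it as log vector data.
vocab = {}
log_vector = embed(tokenize("event=app_launch user=42"), vocab)
```

A trained model would replace the integer indices with dense learned vectors, but the separation of tokenizing from embedding is the same.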
Claim 8:
While Sinha and Garg disclose claim 1, neither explicitly discloses the following limitations; however, Masood does disclose:
wherein the task prediction module is a sequence-to-sequence module and the intermediate data is vector data obtained from the at least one sequence data of the reference task data. (see at least ¶0007, event instances are represented as event vectors; see also ¶0031)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the collection of digital actions to create a digital behavior log and the identification of digital action sequences of Sinha, and the capturing of screen images as a user performs a task of Garg, with the representation of event instances as vectors of Masood, to correlate the plurality of event vectors to identify one or more processes (Abstract).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Sinha et al. (US 2022/0004898) in view of Garg et al. (US 2021/0350134), further in view of Wu et al. (US 2022/0398466).
Claim 14:
While Sinha and Garg disclose claim 13, neither explicitly discloses the following limitations; however, Wu does disclose:
wherein the comparing of the at least one prediction log data with at least one certain log data includes calculating similarity between the at least one prediction log data and the at least one certain log data using a Dynamic Time Warping (DTW) algorithm. (see at least ¶0137, event forecasting system may compare actual observed time series data to predicted time series data; see also ¶0149-¶0150, DTW)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the collection of digital actions to create a digital behavior log and the identification of digital action sequences of Sinha, and the capturing of screen images as a user performs a task of Garg, with the comparison between predicted and actually observed time series data of Wu, in order to identify differences between predicted values and observed values (see ¶0137).
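As a non-limiting technical illustration of the Dynamic Time Warping comparison recited in claim 14 (the classic textbook DTW recurrence, not an implementation from any cited reference), the similarity calculation can be sketched as:

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance between
    two numeric sequences, using the standard dynamic-programming table."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Identical sequences align at zero cost; a time-stretched copy also
# aligns at zero cost, which is the point of warping the time axis.
assert dtw_distance([1, 2, 3], [1, 2, 3]) == 0.0
assert dtw_distance([1, 2, 3], [1, 1, 2, 3]) == 0.0
```

A lower DTW distance indicates higher similarity between the predicted and observed sequences, tolerating shifts and stretches in time.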
Conclusion
The prior art made of record and not relied upon is considered relevant but not applied:
Pirk et al. (US 2022/0331962) discloses a trained action sequence prediction model in determining a predicted sequence of actions for a robotic task based on an instance of vision data captured by a robot.
George et al. (US 2014/0115506) discloses UI action capture code that may receive user interface actions such as one or more single mouse clicks, a double mouse click, a mouse-over, a mouse drag, a screen touch, a screen pinch, a scroll, a key press, key combinations, swipes, zooms, rotations, etc.
Any inquiry of a general nature or relating to the status of this application, or concerning this communication or earlier communications from the Examiner, should be directed to Renae Feacher, whose telephone number is 571-270-5485. The Examiner can normally be reached Monday-Friday, 9:00 am - 5:00 pm. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Beth Boswell, can be reached at 571-272-6737.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal/pair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866.217.9197 (toll-free).
Any response to this action should be mailed to:
Commissioner for Patents
P.O. Box 1450
Alexandria, VA 22313-1450
or faxed to 571-273-8300.
Hand delivered responses should be brought to the United States Patent and Trademark Office Customer Service Window:
Randolph Building
401 Dulany Street
Alexandria, VA 22314.
/Renae Feacher/
Primary Examiner, Art Unit 3625