Prosecution Insights
Last updated: April 18, 2026
Application No. 17/994,423

SCRIPT CREATION METHOD FOR ROBOT PROCESS AUTOMATION AND ELECTRONIC DEVICE USING THE SAME

Final Rejection (§101, §103)
Filed: Nov 28, 2022
Examiner: ELLIOTT, JORDAN MCKENZIE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: UNITED MICROELECTRONICS CORPORATION
OA Round: 4 (Final)
Grant Probability: 45% (Moderate)
OA Rounds: 5-6
To Grant: 2y 10m
With Interview: 31%

Examiner Intelligence

Career Allow Rate: 45% of resolved cases (9 granted / 20 resolved; -17.0% vs TC avg)
Interview Lift: -13.7% (minimal, roughly -14%), comparing resolved cases with vs. without interview
Avg Prosecution: 2y 10m (typical timeline)
Total Applications: 60 across all art units (40 currently pending)

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Tech Center average is an estimate • Based on career data from 20 resolved cases

Office Action

§101 §103
DETAILED ACTION

Claims 1-4, 6-14 and 16-20 are pending in this application. Claims 1 and 11 are amended; claims 5 and 15 are canceled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy was received on 12/6/2022.

Response to Arguments

35 U.S.C. 101

Applicant’s arguments regarding the amendments made to claims 1 and 11 (see Remarks filed 09/06/2025) have been fully considered by the examiner but are not persuasive. The examiner disagrees with the applicant’s assertion that claims 1-4, 6-14 and 16-20 are drawn to significantly more than an abstract idea. Taking claim 1 as an example, the limitation “an area defining unit, configured to obtain a recording area of a screen” is a step of mere data gathering under Step 2A, Prong 1: a human could determine a location of interest of which to record a video. Under Step 2A, Prong 2, the claim recites the additional element of an area defining unit; however, this additional element fails to integrate the claim into a practical application or amount to significantly more. Similarly, under Step 2B, the claim fails to recite any limitations that amount to significantly more than an abstract idea or meaningfully integrate the claim into a practical application.
Claim 1 further recites the following limitations: “a recording unit, configured to record a video according to the recording area; (a step of mere data gathering, in which a human could record a video of a screen) an analysis unit, configured to analyze a plurality of actions according to the video, (a mental process, where a human could reasonably look at a video and make note of the actions in the video) including: a filter, configured to filter off some frames in the video whose changes belonging to a cursor; (a mental process, where a human could remove or delete image frames containing cursor changes/movement) and a creation unit, configured to build a plurality of steps of a script according to the actions; (a mental process, in which a human could look at a video of an action and write a script of the actions) wherein the analysis unit is configured to identify a window and a reference pattern thereon; (a mental process of determining a window and a cursor movement pattern) analyze a relative location of the cursor on the window relative to the reference pattern; (a mental process, in which a human could look at a cursor movement and determine its location relative to a shape or pattern on the image) define an execution position of a click action on the window according to the relative location of the cursor on the window relative to the reference pattern, wherein the reference pattern is different from the cursor; (a mental process of determining a mouse position for execution and the cursor’s relative location from a pattern, shape or icon on the screen) and record the click action with the execution position on the window as one of the actions.
(mere data gathering, where a person could record a mouse click in the script they are generating based on watching a video)”

The steps above are drawn to either a mental process or a step of mere data gathering under Step 2A, Prong 1, as noted by the examiner, because a human could reasonably look at a screen recording of a user completing a task and make notes of the icon/reference pattern being clicked on and the position of the mouse. Under Step 2A, Prong 2, the claim recites the additional elements of “a recording unit”, “an analysis unit”, “a filter” and “a creation unit”; however, these additional elements fail to integrate the claim into a practical application or amount to significantly more. Similarly, under Step 2B, the claim fails to recite any limitations that amount to significantly more than an abstract idea or meaningfully integrate the claim into a practical application. Therefore, at this time, the examiner maintains all rejections made under 35 U.S.C. 101. The applicant is encouraged to amend the above claims to add additional elements that meaningfully integrate the claims into a practical application.

35 U.S.C. 103

Applicant’s arguments (see Remarks filed 12/30/2025) regarding the rejections made to claim 11 under 35 U.S.C. 103 over Jin in view of Klementiev have been fully considered by the examiner and are not persuasive. Applicant argues that Klementiev fails to teach every limitation of claim 11 because no frames are filtered off. The examiner respectfully disagrees that Klementiev fails to meet the limitations of claim 11. The claim recites “wherein in the step of analyzing the plurality of actions, some frames in the video whose changes belonging to a cursor are filtered off”; the broadest reasonable interpretation of the claim is that changes occurring in video frames which are a result of the cursor, such as cursor movement or a “click”, are removed.
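To make the disputed filtering limitation concrete, the interpretation above can be sketched in a few lines of code. This is an editor's illustration only: the pixel-difference test, the cursor bounding-box heuristic, and all names below are assumptions, not anything disclosed in the claims or the cited art.

```python
# Illustrative sketch (editor's assumptions, not from the claims or prior art):
# drop video frames whose only changes fall inside the cursor's bounding box,
# i.e. frames whose "changes belong to the cursor".

def changed_pixels(prev, curr):
    """Return the (row, col) coordinates of pixels that differ between frames.

    Frames are 2D lists of pixel values of equal size."""
    return {
        (r, c)
        for r, row in enumerate(curr)
        for c, px in enumerate(row)
        if prev[r][c] != px
    }

def filter_cursor_frames(frames, cursor_boxes):
    """Keep a frame only if some change lies outside the cursor's box.

    cursor_boxes[i] = (r0, c0, r1, c1), the cursor region in frame i."""
    kept = [frames[0]]  # always keep the first frame as a baseline
    for i in range(1, len(frames)):
        r0, c0, r1, c1 = cursor_boxes[i]
        diff = changed_pixels(frames[i - 1], frames[i])
        # A change "belongs to the cursor" if it is inside the cursor box.
        non_cursor = [p for p in diff
                      if not (r0 <= p[0] <= r1 and c0 <= p[1] <= c1)]
        if non_cursor:
            kept.append(frames[i])  # real UI change: keep the frame
        # else: cursor-only change, so the frame is filtered off
    return kept
```

Under this reading, a frame showing only cursor motion is discarded, while a frame showing any change outside the cursor region survives into the analysis step.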
Klementiev teaches that a User Interface (UI) recording system is used to generate a recording of the user’s actions, which inherently uses video to capture the screen recording and therefore inherently has frames (see Klementiev, [0016]). Further, Klementiev [0041]-[0042] teaches that the system uses the UI recording system to record the user’s actions and generate a set of steps based on these actions, which one of ordinary skill in the art would understand as being analogous to creating a plurality of actions based on changes occurring in the video. Further, these actions are analyzed in [0050]-[0051] of Klementiev and filtered based on recorded actions, where the recording uses a UI recording tool, which is a video recording. Finally, Klementiev in [0073] and [0076] teaches that the collected actions, which may be cursor actions as defined by the user, are filtered or removed. One of ordinary skill in the art would understand that, given the capacity of the system of Klementiev to record the user interface using a UI recording tool, which inherently uses video/frame data, and then create, analyze and filter a set of actions from this recording, the methods of Klementiev are functionally equivalent to those taught in amended claim 11, which recites “wherein in the step of analyzing the plurality of actions, some frames in the video whose changes belonging to a cursor are filtered off”.

[Image excerpts omitted: Klementiev, [0016]; [0041]-[0042]; [0050]-[0051]; [0073]; [0076]]

Further, the Applicant argues that Jin fails to disclose a reference pattern which is different from the cursor.
According to figure 13 and [0038] of the applicant’s specification, the reference pattern is a location on the screen which appears relative to the cursor and is used to define an execution position for a click action; for example, the reference pattern is the location that must be clicked in order for something to occur or be executed. Jin teaches in [0091] and [0092] that the system allows for the automation of an event such as when a cursor or mouse clicks on an image, where the reference pattern in this example is the image being clicked on, and the cursor and cursor position are also recorded. Further, Jin in [0111]-[0113] details how the process of a user opening a browser, logging into a site and performing tasks such as executing programs can be automated using the cursor position and an image icon (reference pattern) for the program execution, which is analogous to a cursor position and a touch point as taught in amended claims 1 and 11. Therefore, at this time, the examiner respectfully maintains the rejections made under 35 U.S.C. 103 over Jin in view of Klementiev.

[Image excerpts omitted: figure 13 of the applicant’s specification, emphasis added; Jin, [0091]-[0095]; Jin, [0111]-[0113]]

Claim Interpretation

1. The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

Area defining unit of claim 1 (see [0026] of applicant’s specification).
Recording unit of claim 1 (see [0026] of applicant’s specification).
Analysis unit of claim 1 (see [0026] of applicant’s specification).
Creation unit of claim 1 (see [0026] of applicant’s specification).
Action analyzer of claims 2, 3, 4, 5, 6, and 7 (see [0026] of applicant’s specification).
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

2. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 6-14 and 16-20 are rejected under 35 U.S.C. 101 as being directed to an abstract idea without significantly more. Regarding claims 1 and 11, the claims are directed to abstract ideas, namely mental processes and mere data gathering. Specifically, the claims recite the limitation “to obtain a recording area of a screen”, which constitutes mere data gathering, and the limitation “configured to record a video according to the recording area”, which also constitutes mere data gathering.
The claims additionally recite the limitations “to analyze a plurality of actions according to the video, including: a filter configured to filter off changes belonging to a cursor”, which is a mental process, and “to build a plurality of steps of a script according to the actions”, which is drawn to the mental process of looking at actions on a screen and writing a script from them. Further, the limitations “wherein the analysis unit is configured to identify a window and a reference pattern thereon; analyze a relative location of the cursor on the window relative to the reference pattern; define an execution position of a click action on the window according to the relative location of the cursor on the window relative to the reference pattern; and record the click action with the execution position on the window as one of the actions” are mental processes or steps of mere data gathering drawn to recording a screen area and determining where a mouse is clicking. Under Step 2A, Prong 2, the claim recites the additional element of a filter, which fails to integrate the judicial exception into a practical application. Under Step 2B, the claim does not include elements that amount to significantly more than an abstract idea. See MPEP § 2106. Dependent claims 2-10 and 12-20 do not add limitations that meaningfully integrate the abstract idea into a practical application or add significantly more.
Regarding claims 2 and 12, the claims add the limitations “configured to extract a plurality of frames from the video”, “configured to analyze a plurality of changes in the frames”, “configured to define a plurality of segmentation nodes from the frames, wherein the changes in each of the segmentation nodes are greater than a predetermined degree”, and “configured to obtain the actions between the segmentation nodes, which are adjacent”. The limitations recited by claims 2 and 12 are drawn to either mental processes or mere data gathering without significantly more. Regarding claims 3 and 13, the claims add the limitation “records a newly added text in the changes between the segmentation nodes, which are adjacent, to obtain a text input action”, which is drawn to mere data gathering without significantly more. Regarding claims 4 and 14, the claims add the limitation “record newly added text by using optical character recognition (OCR) technology”, which is drawn to a mental process; the usage of OCR technology is not enough to integrate the claims into a practical application or amount to significantly more. Regarding claims 6 and 16, the claims add the limitation “records a newly added rectangular frame in each of the changes between the segmentation nodes, which are adjacent, to obtain a circle action”, which is drawn to mere data gathering without significantly more. Regarding claims 7 and 17, the claims add the limitation “records a newly added highlighted area in each of the changes between the segmentation nodes, which are adjacent, to obtain a text highlight action”, which is drawn to both mere data gathering and the mental process of examining changes in data without significantly more. Regarding claims 8 and 18, the claims add the limitation “wherein the recording area is a scope of a remote control window”, which is drawn to mere data gathering without significantly more.
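The segmentation-node limitation of claims 2 and 12 (marking frames whose changes exceed a predetermined degree) can be illustrated with a short sketch. The fraction-of-changed-pixels metric and every name below are editor assumptions for illustration only, not anything disclosed in the application.

```python
# Illustrative sketch (editor's assumptions) of the claims 2/12 limitation:
# a frame becomes a "segmentation node" when its change versus the prior
# frame exceeds a predetermined degree (here, a fraction of changed pixels).

def change_degree(prev, curr):
    """Fraction of pixels that differ between two equally sized frames."""
    total = sum(len(row) for row in curr)
    changed = sum(
        1
        for r, row in enumerate(curr)
        for c, px in enumerate(row)
        if prev[r][c] != px
    )
    return changed / total

def segmentation_nodes(frames, threshold):
    """Return indices of frames whose change vs. the prior frame exceeds
    the predetermined degree (threshold)."""
    return [
        i
        for i in range(1, len(frames))
        if change_degree(frames[i - 1], frames[i]) > threshold
    ]
```

The actions recited in the claims would then be obtained between adjacent indices returned by `segmentation_nodes`.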
Regarding claims 9 and 19, the claims add the limitation “wherein the remote control window displays an interface of a semiconductor machine located at a remote end, the recording area is a partial scope of the screen, and the recording area is an entire scope of the interface of the semiconductor machine”, which is drawn to mere data gathering without significantly more. Regarding claims 10 and 20, the claims add the limitation “wherein the actions of the script are configured to be executed on the remote control window”, which is drawn to mere data gathering and the mental process/action of executing a script, without significantly more. These claims add limitations, including decision making, mental processes and data gathering, that fail to tie the abstract ideas into a practical application or add significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

3. Claims 1 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Jin (KR 102307471 B1) in view of Klementiev (US 20050278728 A1).

Regarding claim 1, Jin discloses: An electronic device, comprising: an area defining unit (Jin [0015], a template tool module automates a specific area; applicant defined the units as being “realized by a circuit, a chip, a circuit board, a code, a computer program product or a storage device for storing code”, therefore the template tool module would be analogous to this), configured to obtain a recording area of a screen (Jin [0015], the template tool module is responsible for producing a template to automate a task for the area within the template); a recording unit (Jin [0015], modeling tool module; applicant defined the units as being “realized by a circuit, a chip, a circuit board, a code, a computer program product or a storage device for storing code”, therefore the modeling tool module would be analogous to this), configured to record a video according to the recording area (Jin [0015], records task processes to be automated); an analysis unit (Jin [0025], processing unit; applicant defined the units as being “realized by a circuit, a chip, a circuit board, a code, a computer program product or a storage device for storing code”, therefore the processing unit would be analogous to this), configured to analyze a plurality of actions according to the video (Jin [0025], the processing unit analyzes received data from the interface so a script can be generated), [including: a filter, configured to filter off
changes belonging to a cursor;] and a creation unit (Jin [0027], script writing unit; applicant defined the units as being “realized by a circuit, a chip, a circuit board, a code, a computer program product or a storage device for storing code”, therefore the script writing unit would be analogous to this), configured to build a plurality of steps of a script according to the actions (Jin [0027], the script writing unit writes a custom script of the task a user wishes to have automated); wherein the analysis unit is configured to identify a window and a reference pattern thereon (Jin [0018], the modeling tool module (action analyzer) uses computer vision to analyze at least one image of the computer display screen (window); [0019], the image must contain a region corresponding to a position of a cursor and/or a touch point (cursor position and touch point being a click action); [0111]-[0113], the system may characterize an event by a click position and an icon corresponding to a program being executed (analogous to a reference pattern)); analyze a relative location of the cursor on the window relative to the reference pattern (Jin [0018], the modeling tool module (action analyzer) uses computer vision to analyze at least one image of the computer display screen (window); [0019], the image must contain a region corresponding to a position of a cursor and/or a touch point (cursor position and touch point being a pattern/click action); [0084], the system recognizes inputs via a mouse click; [0111]-[0113], the system may characterize an event by a click position and an icon corresponding to a program being executed (analogous to a reference pattern)); define an execution position of a click action on the window according to the relative location of the cursor on the window relative to the reference pattern, wherein the reference pattern is different from the cursor (Jin [0018], the modeling tool module (action analyzer) uses computer vision to analyze at least one image of the
computer display screen (window); [0019], the image must contain a region corresponding to a position of a cursor and/or a touch point; [0111]-[0113], the system may characterize an event by a click position (cursor location) and an icon corresponding to a program being executed (analogous to a reference pattern)); and record the click action with the execution position on the window as one of the actions (Jin [0079], left or right mouse clicks are recorded along with the X and Y click coordinates; [0084], the system recognizes inputs via a mouse click). Jin does not teach: including: a filter, configured to filter off changes belonging to a cursor. However, in the same field of endeavor of script creation, Klementiev teaches: including: a filter, configured to filter off changes belonging to a cursor (Klementiev [0016], the system uses User Interface Recording software which captures the inputs as a video; [0051], input data is derived from user actions such as mouse movements and clicks; [0073], the collected user input data can be filtered; [0076], the user can filter out certain patterns of the input data, such as clicking or expanding a certain menu. Since the user or the system can filter the collected data, such as mouse movements and clicks, based on user specification or based on the script creation goals, this would be analogous to filtering off the mouse clicks.). The combination of Jin and Klementiev would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Jin teaches a method of generating a script to execute actions being performed by a user and recorded, but it does not adequately teach filtering off mouse clicks using a filter. Klementiev remedies this deficiency; the motivation for adding this feature of Klementiev is the ability to filter out mouse actions that are not relevant to what is being input into the script.
(Klementiev, [0018]-[0019], [0025] and [0081]-[0084])

Regarding claim 11, the combination of Jin and Klementiev teaches: A script creation method for robotic process automation (RPA), comprising: obtaining a recording area of a screen (Jin [0015], the template tool module is responsible for producing a template to automate a task for the area within the template); recording a video according to the recording area (Jin [0015], records task processes to be automated); analyzing a plurality of actions according to the video (Jin [0015], records task processes to be automated), wherein in the step of analyzing the plurality of actions, some frames in the video whose changes belonging to the cursor are filtered off (Klementiev [0016], the system uses User Interface Recording software which captures the inputs as a video; [0051], input data is derived from user actions such as mouse movements and clicks; [0073], the collected user input data can be filtered; [0076], the user can filter out certain patterns of the input data, such as clicking or expanding a certain menu.
Since the user or the system can filter the collected data, such as mouse movements and clicks, based on user specification or based on the script creation goals, this would be analogous to filtering off the mouse clicks.); and building a plurality of steps of a script according to the actions (Jin [0027], the script writing unit writes a custom script of the task a user wishes to have automated); wherein the analysis unit is configured to identify a window and a reference pattern thereon (Jin [0018], the modeling tool module (action analyzer) uses computer vision to analyze at least one image of the computer display screen (window); [0019], the image must contain a region corresponding to a position of a cursor and/or a touch point (cursor position and touch point being a pattern/click action); [0084], the system recognizes inputs via a mouse click, which is analogous to a click pattern on the reference window; [0051], the system capabilities include OCR and pattern recognition); analyze a relative location of the cursor on the window relative to the reference pattern (Jin [0018], the modeling tool module (action analyzer) uses computer vision to analyze at least one image of the computer display screen (window); [0019], the image must contain a region corresponding to a position of a cursor and/or a touch point (cursor position and touch point being a pattern/click action); [0084], the system recognizes inputs via a mouse click; [0111]-[0113], the system may characterize an event by a click position (cursor location) and an icon corresponding to a program being executed (analogous to a reference pattern)); define an execution position of a click action on the window according to the relative location of the cursor on the window relative to the reference pattern, wherein the reference pattern is different from the cursor (Jin [0018], the modeling tool module (action analyzer) uses computer vision to analyze at least one image of the computer display screen (window); [0019],
the image must contain a region corresponding to a position of a cursor and/or a touch point; [0111]-[0113], the system may characterize an event by a click position (cursor location) and an icon corresponding to a program being executed (analogous to a reference pattern)); and record the click action with the execution position on the window as one of the actions (Jin [0079], left or right mouse clicks are recorded along with the X and Y click coordinates; [0084], the system recognizes inputs via a mouse click). The combination of Jin and Klementiev would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Jin teaches a method of generating a script to execute actions being performed by a user and recorded, but it does not adequately teach filtering off mouse clicks using a filter. Klementiev remedies this deficiency; the motivation for adding this feature of Klementiev is the ability to filter out mouse actions that are not relevant to what is being input into the script. (Klementiev, [0018]-[0019], [0025] and [0081]-[0084])

4. Claims 2-4, 8, 10, 12-13, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jin (KR 102307471 B1) in view of Klementiev (US 20050278728 A1) and in further view of Otterness (US 20210224950 A1).

Regarding claim 2, the combination of Jin and Klementiev teaches: The electronic device according to claim 1, wherein the analysis unit comprises: an action analyzer (Jin [0025], the processing unit), configured to obtain the actions between the segmentation nodes, which are adjacent (Jin [0025], the processing unit analyzes action inputs across elements (segmentation nodes)).
Jin does not teach: an extractor, configured to extract a plurality of frames from the video; a comparator, configured to analyze a plurality of changes in the frames; a divider, configured to define a plurality of segmentation nodes from the frames, wherein the changes in each of the segmentation nodes are greater than a predetermined degree. However, in the same field of endeavor, Otterness teaches: an extractor (Otterness [0033], the event recognition module performs a function equivalent to an extractor), configured to extract a plurality of frames from the video (Otterness, the module may receive all frames and analyze a sampling of the frames); a comparator (Otterness [0033], the event recognition module performs a function equivalent to a comparator), configured to analyze a plurality of changes in the frames (Otterness [0033], one or more regions of the frame may be analyzed for additional information; icons can be detected between frames); a divider (Otterness [0033], the event recognition module performs a task functionally equivalent to a divider), configured to define a plurality of segmentation nodes from the frames, wherein the changes in each of the segmentation nodes are greater than a predetermined degree (Otterness [0033], one or more regions of the frame may be analyzed for additional information (regions of the frame are being interpreted as functionally equivalent to a segmentation node since they break the frame into portions, as shown in applicant’s figures 7-10); icons can be detected between frames; [0038], the frames can be processed to detect changes in select regions of the game; for example, the appearance of a certain number of icons can indicate an event). The combination of Jin, Klementiev and Otterness would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention.
The motivation for the combination lies in that the frame analysis of Otterness would allow regions of video frames to be analyzed for the action inputs described in Jin. (Otterness [0033], [0035], and [0038])

Regarding claim 3, the combination of Jin, Klementiev and Otterness teaches: The electronic device according to claim 2, wherein the action analyzer records a newly added text in the changes between the segmentation nodes, which are adjacent, to obtain a text input action (Jin, [0025] the processing unit extracts elements of a sequence of events (segmentation nodes); [0049] the user's work is recorded repeatedly; [0050] the modeling tool module (action analyzer) can apply OCR to images captured from the user's actions; [0051] OCR is used to extract text and perform pattern recognition from the user's actions).

Regarding claim 4, the combination of Jin, Klementiev and Otterness teaches: The electronic device according to claim 3, wherein the action analyzer obtains the newly added text by using an optical character recognition (OCR) technology (Jin, [0049] the user's work is recorded repeatedly; [0050] the modeling tool module (action analyzer) can apply OCR to images captured from the user's actions; [0051] OCR is used to extract text and perform pattern recognition from the user's actions).

Regarding claim 8, the combination of Jin, Klementiev and Otterness teaches: The electronic device according to claim 1, wherein the recording area is a scope of a remote-control window (Jin, figures 2 and 3 show a remote-control window and other programs being used by the user and recorded). [Image: Jin, figure 3, emphasis added]

Regarding claim 10, the combination of Jin, Klementiev and Otterness teaches: The electronic device according to claim 8, wherein the actions of the script are configured to be executed on the remote-control window (Jin, [0001] the actions are designed to be executed on a user terminal).
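The extractor/comparator/divider mapping in the claim 2 rejection (frames sampled, changes between frames analyzed, segmentation nodes defined where change exceeds a predetermined degree) can be sketched as simple thresholded frame differencing. The toy four-pixel frames, the threshold value, and the function names are invented for illustration and are not taken from Jin or Otterness:

```python
def frame_difference(a, b):
    """Comparator: fraction of pixels that changed between two frames."""
    changed = sum(1 for pa, pb in zip(a, b) if pa != pb)
    return changed / len(a)

def segmentation_nodes(frames, threshold=0.3):
    """Divider: frame indices where the change versus the previous frame
    exceeds the predetermined degree (the threshold)."""
    return [i for i in range(1, len(frames))
            if frame_difference(frames[i - 1], frames[i]) > threshold]

# Toy 4-pixel "frames": a large change occurs between frames 1 and 2.
frames = [[0, 0, 0, 0], [0, 0, 0, 1], [9, 9, 9, 9], [9, 9, 9, 9]]
print(segmentation_nodes(frames))  # [2]
```

Only the 100% change into frame 2 clears the 30% threshold; the 25% change into frame 1 does not. The actions between adjacent nodes would then be obtained, for example by applying OCR to recover newly added text, as in the claim 3 and 4 mappings above.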
Regarding claim 12, the combination of Jin, Klementiev and Otterness teaches: The script creation method for robot process automation according to claim 11, wherein the step of analyzing the actions according to the video comprises: extracting a plurality of frames from the video (Otterness, the module may receive all frames and analyze a sampling of the frames); analyzing a plurality of changes in the frames (Jin, [0051] use of OCR extraction to extract information from videos or images; [0127] data scraping methods (extraction) can be used on binary data from images or multimedia data; the examiner is interpreting the use of multiple extraction methods as the system having one or more extractors capable of extracting data/frames from a video); defining a plurality of segmentation nodes from the frames, wherein the changes in each of the segmentation nodes are greater than a predetermined degree (Otterness, [0033] one or more regions of the frame may be analyzed for additional information (regions of the frame are interpreted as functionally equivalent to segmentation nodes, since they break the frame into portions as shown in applicant's figures 7-10); icons can be detected between frames; [0038] the frames can be processed to detect changes in select regions of the game; for example, the appearance of a certain number of icons can indicate an event); and obtaining the actions between the segmentation nodes, which are adjacent (Jin, [0025] the processing unit analyzes action inputs across elements (segmentation nodes)). The combination of Jin, Klementiev and Otterness would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the frame analysis of Otterness would allow regions of video frames to be analyzed for the action inputs described in Jin.
(Otterness [0033], [0035], and [0038])

Regarding claim 13, the combination of Jin, Klementiev and Otterness teaches: The script creation method for robot process automation according to claim 12, wherein a newly added text in the changes between the segmentation nodes, which are adjacent, is recorded to obtain a text input action (Jin, [0025] the processing unit extracts elements of a sequence of events (segmentation nodes); [0049] the user's work is recorded repeatedly; [0050] the modeling tool module (action analyzer) can apply OCR to images captured from the user's actions; [0051] OCR is used to extract text and perform pattern recognition from the user's actions).

Regarding claim 14, the combination of Jin, Klementiev and Otterness teaches: The script creation method for robot process automation according to claim 13, wherein the newly added text is obtained by using an optical character recognition (OCR) technology (Jin, [0049] the user's work is recorded repeatedly; [0050] the modeling tool module (action analyzer) can apply OCR to images captured from the user's actions; [0051] OCR is used to extract text and perform pattern recognition from the user's actions).

Regarding claim 18, the combination of Jin, Klementiev and Otterness teaches: The script creation method for robot process automation according to claim 11, wherein the recording area is a scope of a remote-control window (Jin, figures 2 and 3 show a remote-control window and other programs being used by the user and recorded). [Image: Jin, figure 3, emphasis added]

Regarding claim 20, the combination of Jin, Klementiev and Otterness teaches: The script creation method for robot process automation according to claim 18, wherein the actions of the script are configured to be executed on the remote-control window (Jin, [0001] the actions are designed to be executed on a user terminal).

5. Claims 6-7 and 16-17 are rejected under 35 U.S.C.
103 as being unpatentable over Jin (KR 102307471 B1) in view of Klementiev (US 20050278728 A1), Otterness (US 20210224950 A1) and in further view of Butin (US 2010205529 A1).

Regarding claim 6, the combination of Jin, Klementiev and Otterness fails to teach: The electronic device according to claim 2, wherein the action analyzer records a newly added rectangular frame in each of the changes between the segmentation nodes, which are adjacent, to obtain a circle action. However, in the same field of endeavor, Butin teaches: The electronic device according to claim 2, wherein the action analyzer records a newly added rectangular frame in each of the changes between the segmentation nodes, which are adjacent, to obtain a circle action (Butin, [0156] the system defines a rectangular area around the mouse pointer to obtain the mouse action, which is functionally equivalent to a circle action since it could capture any pattern the mouse clicks make, including but not limited to a circle). The combination of Jin, Klementiev, Otterness and Butin would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the use of a predefined analysis area as described in Butin would allow the mouse action or pattern of mouse clicks to be determined more accurately.
(Butin, [0154]-[0156])

Regarding claim 7, the combination of Jin, Klementiev, Otterness and Butin teaches: The electronic device according to claim 2, wherein the action analyzer records a newly added highlighted area in each of the changes between the segmentation nodes, which are adjacent, to obtain a text highlight action (Butin, [0093] the user is able to input new text queries to be analyzed at different points while using the system, which is functionally equivalent to highlighting because new text is being selected/input to be recognized; [0100] text can be searched for and matched based upon the user's query; [0101] resultant text may be colored to show the user the result (text highlight action)). The combination of Jin, Klementiev, Otterness and Butin would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the method of Butin allows the user to input selected keywords or questions and have them be queued up for the user based upon the selected words. Since the user can select text to be queued up and displayed in the results, one of ordinary skill in the art would understand that this is functionally equivalent to highlighting text and recording the highlighted text, because in both cases text is newly selected and displayed, and the results are recorded by the system.
(Butin, [0093], [0100] and [0101])

Regarding claim 16, the combination of Jin, Klementiev, Otterness and Butin teaches: The script creation method for robot process automation according to claim 12, wherein a newly added rectangular frame in each of the changes between the segmentation nodes, which are adjacent, is recorded to obtain a circle action (Butin, [0156] the system defines a rectangular area around the mouse pointer to obtain the mouse action, which is functionally equivalent to a circle action since it could capture any pattern the mouse clicks make). The combination of Jin, Klementiev, Otterness and Butin would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the use of a predefined analysis area as described in Butin would allow the mouse action or pattern of mouse clicks to be determined more accurately. (Butin, [0154]-[0156])

Regarding claim 17, the combination of Jin, Klementiev, Otterness and Butin teaches: The script creation method for robot process automation according to claim 12, wherein a newly added highlighted area in each of the changes between the segmentation nodes, which are adjacent, is recorded to obtain a text highlight action (Butin, [0093] the user is able to input new text queries to be analyzed at different points while using the system, which is functionally equivalent to highlighting because new text is being selected/input to be recognized; [0100] text can be searched for and matched based upon the user's query; [0101] resultant text may be colored to show the user the result (text highlight action)). The combination of Jin, Klementiev, Otterness and Butin would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention.
The motivation for the combination lies in that the method of Butin allows the user to input selected keywords or questions and have them be queued up for the user based upon the selected words. Since the user can select text to be queued up and displayed in the results, one of ordinary skill in the art would understand that this is functionally equivalent to highlighting text and recording the highlighted text, because in both cases text is newly selected and displayed, and the results are recorded by the system. (Butin, [0093], [0100] and [0101])

6. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Jin (KR 102307471 B1) in view of Klementiev (US 20050278728 A1), Otterness (US 20210224950 A1) and in further view of Sano (US 20090150794 A1).

Regarding claim 9, the combination of Jin, Klementiev and Otterness fails to teach: The electronic device according to claim 8, wherein the remote-control window displays an interface of a semiconductor machine located at a remote end, the recording area is a partial scope of the screen, and the recording area is an entire scope of the interface of the semiconductor machine.
However, in the same field of endeavor, Sano teaches: The electronic device according to claim 8, wherein the remote-control window displays an interface of a semiconductor machine located at a remote end (Sano, Figure 10 shows a block diagram of an interface showing the semiconductor exposure apparatus, which shows information about the semiconductor and its status, as well as parts of the semiconductor; [0004] details the use of remote semiconductor modeling during manufacturing; [0009] the semiconductor exposure apparatus is used to monitor control parameters and visualize the semiconductor components during manufacture (semiconductor interface)), the recording area is a partial scope of the screen (Sano, Figure 5 shows the screen with the semiconductor interface displayed; while the use of screen recording is not explicitly disclosed, it would be obvious to one of ordinary skill in the art that continuous monitoring of a device could also be recorded), and the recording area is an entire scope of the interface of the semiconductor machine (Sano, Figure 10 shows both the semiconductor exposure apparatus and the remote-control terminal). The combination of Jin, Klementiev, Otterness and Sano would have been obvious to one of ordinary skill in the art of robotic automation of semiconductor interface manufacturing and testing prior to the effective filing date of the presently claimed invention. The script generation method of Jin and Otterness would have been improved by the manufacturing and monitoring system of Sano, because one of ordinary skill in the art wanting to automate the process of semiconductor testing would have used the semiconductor monitoring and testing method of Sano and combined it with the recording and script generation methods of Jin and Otterness to create a system with capabilities functionally equivalent to those disclosed in claim 9.
Additionally, while Sano does not explicitly disclose the use of screen recording in monitoring the semiconductor interface during manufacture and testing, it does teach remote monitoring and user control and testing via a terminal; therefore, one of ordinary skill in the art could have reasonably added the step of screen recording given the disclosures of Jin and Otterness. (Sano [0001]-[0009])

Regarding claim 19, the combination of Jin, Klementiev and Otterness fails to teach: The script creation method for robot process automation according to claim 18, wherein the remote-control window displays an interface of a semiconductor machine located at a remote end, the recording area is a partial scope of the screen, and the recording area is an entire scope of the interface of the semiconductor machine. However, in the same field of endeavor, Sano teaches: The script creation method for robot process automation according to claim 18, wherein the remote-control window displays an interface of a semiconductor machine located at a remote end (Sano, Figure 10 shows a block diagram of an interface showing the semiconductor exposure apparatus, which shows information about the semiconductor and its status, as well as parts of the semiconductor; [0004] details the use of remote semiconductor modeling during manufacturing; [0009] the semiconductor exposure apparatus is used to monitor control parameters and visualize the semiconductor components during manufacture (semiconductor interface)), the recording area is a partial scope of the screen (Sano, Figure 5 shows the screen with the semiconductor interface displayed; while the use of screen recording is not explicitly disclosed, it would be obvious to one of ordinary skill in the art that continuous monitoring of a device could also be recorded), and the recording area is an entire scope of the interface of the semiconductor machine (Sano, Figure 10 shows both the semiconductor exposure
apparatus and the remote-control terminal). The combination of Jin, Klementiev, Otterness and Sano would have been obvious to one of ordinary skill in the art of robotic automation of semiconductor interface manufacturing and testing prior to the effective filing date of the presently claimed invention. The script generation method of Jin and Otterness would have been improved by the manufacturing and monitoring system of Sano, because one of ordinary skill in the art wanting to automate the process of semiconductor testing would have used the semiconductor monitoring and testing method of Sano and combined it with the recording and script generation methods of Jin and Otterness to create a system with capabilities functionally equivalent to those disclosed in claim 19. Additionally, while Sano does not explicitly disclose the use of screen recording in monitoring the semiconductor interface during manufacture and testing, it does teach remote monitoring and user control and testing via a terminal; therefore, one of ordinary skill in the art could have reasonably added the step of screen recording given the disclosures of Jin and Otterness. (Sano [0001]-[0009])

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 8,725,845 B2, Moorer, teaches a robotic automation system in which hardware in a user's home is remotely monitored and scripts are generated to be executed on a terminal. (Pertinent to claims 1-5.) US 2010/0149192 A1, Kota, teaches a method of creating and parsing an action script to analyze image data. (Pertinent to claims 1-3.) US 2013/0019170 A1, Mounty, teaches screen recording and filtering of frames.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT, whose telephone number is (703) 756-5463. The examiner can normally be reached M-F 8AM-5PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.M.E./Examiner, Art Unit 2666 /EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Nov 28, 2022
Application Filed
Apr 08, 2025
Non-Final Rejection — §101, §103
Jul 01, 2025
Response Filed
Jul 23, 2025
Final Rejection — §101, §103
Sep 26, 2025
Request for Continued Examination
Oct 01, 2025
Response after Non-Final Action
Oct 17, 2025
Non-Final Rejection — §101, §103
Dec 30, 2025
Response Filed
Feb 13, 2026
Final Rejection — §101, §103
Apr 15, 2026
Response after Non-Final Action
Apr 15, 2026
Request for Continued Examination

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573117
METHOD AND DEVICE FOR DEEP LEARNING-BASED PATCHWISE RECONSTRUCTION FROM CLINICAL CT SCAN DATA
2y 5m to grant Granted Mar 10, 2026
Patent 12475998
SYSTEMS AND METHODS OF ADAPTIVELY GENERATING FACIAL DEVICE SELECTIONS BASED ON VISUALLY DETERMINED ANATOMICAL DIMENSION DATA
2y 5m to grant Granted Nov 18, 2025
Patent 12450918
AUTOMATIC LANE MARKING EXTRACTION AND CLASSIFICATION FROM LIDAR SCANS
2y 5m to grant Granted Oct 21, 2025
Patent 12437415
METHODS AND SYSTEMS FOR NON-DESTRUCTIVE EVALUATION OF STATOR INSULATION CONDITION
2y 5m to grant Granted Oct 07, 2025
Patent 12406358
METHODS AND SYSTEMS FOR AUTOMATED SATURATION BAND PLACEMENT
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
45%
Grant Probability
31%
With Interview (-13.7%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
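The headline numbers in this panel are simple arithmetic on the examiner's career data. A quick check, assuming (hypothetically) that the "with interview" probability is just the career allow rate plus the stated percentage-point lift:

```python
# Career allow rate: 9 grants out of 20 resolved cases.
granted, resolved = 9, 20
career_allow_rate = granted / resolved  # 0.45 -> shown as 45%

# The page reports a -13.7 percentage-point interview lift; applying it
# directly to the career rate reproduces the "with interview" figure.
interview_lift_pp = -13.7
with_interview = career_allow_rate + interview_lift_pp / 100  # 0.313

print(f"{career_allow_rate:.0%}")  # 45%
print(f"{with_interview:.0%}")     # 31%
```

This reproduces the panel's 45% and 31% figures; the exact model behind the lift is not disclosed by the page, so the additive assumption is only a plausibility check.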
