Prosecution Insights
Last updated: April 19, 2026
Application No. 18/276,196

OPERATION LOG GENERATION DEVICE AND OPERATION LOG GENERATION METHOD

Non-Final OA: §101, §103, §112
Filed: Aug 07, 2023
Examiner: AZIMA, SHAGHAYEGH
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Nippon Telegraph and Telephone Corporation
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (286 granted / 350 resolved; +19.7% vs TC avg, above average)
Interview Lift: +11.4% (moderate; among resolved cases with interview)
Typical Timeline: 2y 7m average prosecution
Currently Pending: 36
Career History: 386 total applications (across all art units)
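The headline figures above are simple ratios over the examiner's resolved cases. Assuming the counts shown (286 granted of 350 resolved, and a 93% grant rate when an interview was held), they can be reproduced as follows; the small rounding gap against the displayed +11.4% presumably comes from unrounded inputs.

```python
# Reconstruction of the dashboard's headline metrics from the counts shown above.
granted = 286          # applications granted by this examiner
resolved = 350         # total resolved cases (granted + abandoned)
with_interview = 0.93  # grant rate among resolved cases with an interview (as shown)

allow_rate = granted / resolved          # career allow rate
interview_lift = with_interview - allow_rate

print(f"Career allow rate: {allow_rate:.1%}")      # ~81.7%, displayed as 82%
print(f"Interview lift:    {interview_lift:+.1%}") # ~+11.3%, displayed as +11.4%
```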

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 350 resolved cases.
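Read literally, each "vs TC avg" delta should recover the Tech Center baseline by subtraction. A quick sanity check (values transcribed from the table above) shows all four deltas resolve to the same ~40% figure, which suggests they are measured against a single overall TC average rather than per-statute baselines:

```python
# Sanity check: examiner rate minus the stated delta should recover the
# Tech Center average estimate. Values transcribed from the table above.
stats = {            # statute: (examiner rate %, delta vs TC avg %)
    "101": (15.8, -24.2),
    "103": (42.5, +2.5),
    "102": (13.9, -26.1),
    "112": (14.5, -25.5),
}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
for s, (rate, _) in stats.items():
    print(f"§{s}: examiner {rate}% vs TC avg ~{tc_avg[s]}%")
# every statute resolves to the same ~40.0% baseline
```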

Office Action

Grounds: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

This action is in response to the applicant's communication filed on 08/07/2023. In virtue of this communication, claims 1-5 filed on 08/07/2023 are currently pending in the instant application. Claims 1-5 were amended in a preliminary amendment filed on 08/07/2023.

Information Disclosure Statement

The Information Disclosure Statement (IDS), form PTO-1449, filed on 08/07/2023, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosed therein was considered by the examiner.

Drawings

The drawings received on 08/07/2023 have been reviewed by the Examiner and are acceptable.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1, limitation "GUI", line 7, has not been defined upon its first use in the claim. Any abbreviation needs to be defined at its first mention in the claim limitations. Also, the specification does not define the GUI. Please clarify.

Claim 5, limitation "GUI", line 6, has not been defined upon its first use in the claim.
Any abbreviation needs to be defined at its first mention in the claim limitations. Also, the specification does not define the GUI. Please clarify.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claims 1 and 5 recite detecting an operation event and acquiring an occurrence position of the operation event in a captured image of an operation screen; specifying an image of the occurrence position of the operation event, from among candidate images, for a GUI component extracted from the captured image and recording the image and the operation event in association with each other; classifying a set of recorded images into clusters based on similarities between the images; and generating an operation log by using an image corresponding to the operation event of each classified cluster.

Step 1: The instant claims are directed to an apparatus and a method, both among the statutory categories of invention.
Step 2A — Prong 1: In method claim 5, for example, the limitations "detecting an operation event and acquiring an occurrence position of the operation event in a captured image of an operation screen", "specifying an image of the occurrence position of the operation event, from among candidate images, for a GUI component extracted from the captured image and recording the image and the operation event in association with each other", and "classifying a set of recorded images into clusters based on similarities between the images; and generating an operation log by using an image corresponding to the operation event of each classified cluster" recite a method that, under its broadest reasonable interpretation, covers performance in the mind or through observation: a person could click on a presented image, inspect what happens, group similar reactions together, and memorize the result and the relation.

That is, other than reciting "by a computing device", nothing in the claim steps precludes the limitations from practically being performed in the mind or through observation of a displayed image. The recited computer is simply a generic device. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic components, then it falls within the "mental processes" grouping of abstract ideas, which includes concepts performed in the human mind such as observation, evaluation, judgment, and opinion. Accordingly, the claim recites an abstract idea.

In addition, the additional component recited in independent claim 1, i.e., the computing device, is simply a generic computing component; accordingly, that independent claim likewise recites the above-described abstract idea.
Step 2A — Prong 2: The 2019 PEG defines the phrase "integration into a practical application" to require an additional element, or a combination of additional elements, in the claim to apply, rely on, or use the judicial exception. In the instant case, the additional elements in the claims do not apply, rely on, or use the judicial exception. The judicial exception is not integrated into a practical application because the claims recite only additional elements of one or more computing devices used to perform the recited elements/functions/steps. These computing components are recited at a high level of generality, and there are no other additional limitations in the claims. Accordingly, these additional elements do not integrate the abstract idea into a practical application: they amount to a field-of-use limitation that does not impose any meaningful limits on practicing the abstract idea. Therefore, independent claims 1 and 5 recite an abstract idea.

Step 2B: Because the claims fail under Step 2A, they are further evaluated under Step 2B. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration into a practical application, the additional element of using one or more computing devices to execute programming instructions amounts to no more than mere instructions to apply the exception using a generic apparatus component. Mere instructions to apply an exception using a generic apparatus component cannot provide an inventive concept. The claims are not patent eligible.
Further, with regard to dependent claims 2-4 viewed individually, their additional elements, under their broadest reasonable interpretation, cover performance of the limitations in the mind and do not provide meaningful limitations transforming the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself. Accordingly, claims 1-5 are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Masahiro (JP2019106093A), in view of Kogan et al. (US 2018/0174288), further in view of Chiu et al. (US 2023/0036609).

As per claim 1: An operation log generation device, comprising:

"an acquisition unit, implemented using one or more computing devices, configured to detect an operation event and acquire an occurrence position of the operation event in a captured image of an operation screen;" (Masahiro, ¶[0030] discloses that the operation 123 stores operation information acquired when a specific operation (mouse operation or keyboard operation) is received from the input device 21. When the input device 21 is operated by a mouse, the coordinates (X, Y) on the screen of the display device 20 are stored. If the input device 21 is operated by a keyboard, the accepted character code or character is stored. The target object 124 stores the name of an element to be operated, such as an input field. ¶[0036-0037] discloses that if the mouse operation is a mouse click (specific operation), the log recorder 110 acquires the click coordinates (X, Y) and adds information such as the date and time and the name of the active application as a log to the log table 120. The click coordinates (X, Y) are absolute coordinates on the screen of the display device 20. The log collection unit 110 stores the date and time in time 121 of the log table 120, stores the name of the active application in target application 122, and stores the click position in operation 123.)

"a specifying unit, implemented using one or more computing devices, configured to specify an image of the occurrence position of the operation event, from among candidate images for a GUI component extracted from the captured image and record the image and the operation event in association with each other;" (Masahiro, ¶[0030], quoted above; further, ¶[0038-0039] discloses that the log collecting unit 110 stores the date and time in the time 121 of the log table 120, stores the active application name in the target application 122, stores the click position in the operation 123, and transmits the name of the operation target to the target object 124. Then, the reference position of the application 114 is stored in the application position 125, and the window size of the application 114 is stored in the application size 126. In step S12, the image capture unit 111 captures the image of the active application 114, stores the captured image in the auxiliary storage device 13, and writes the storage destination in the captured image path 127 of the log table 120. The captured image may be an image of the entire screen of the display device 20 or an image of the active window of the application. Further see ¶[0041-0042].)
However, Masahiro does not explicitly disclose the following, which would have been obvious in view of Kogan, from a similar field of endeavor:

"a classifying unit, implemented using one or more computing devices, configured to classify a set of recorded images into clusters based on similarities between the images;" (Kogan, ¶[0056-0057] discloses that, as a result of the above approaches, a collection of screen images of an application under test is obtained, grouped by categories found by the clustering technique. For example, if the screen image includes menu UI elements 616, one of the clusters (first cluster 622) may contain the images of all menu UI elements 616 used in the application. Such clustering is possible without needing to identify that the specific UI elements happen to be menus in particular. Rather, the clustering can detect that those UI elements (which happen to be menu UI elements 616) have the same look and feel, and put them in the same group/cluster.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Kogan's user-interface-element clustering technique with Masahiro's technique to provide the known and expected uses and benefits of Kogan's technique over Masahiro's technique of storing operation information as a log. The proposed combination would have constituted a mere arrangement of old elements, each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Kogan into Masahiro in order to accurately detect errors or defects in an application under test. (Refer to Kogan, paragraph [0001].)
However, Masahiro as modified by Kogan does not explicitly disclose the following, which would have been obvious in view of Chiu, from a similar field of endeavor:

"and a generation unit, implemented using one or more computing devices, configured to generate an operation log by using an image corresponding to the operation event of each classified cluster." (Chiu, ¶[0008] discloses a log classification device configured to adaptively cluster a plurality of activities records collected from a target network system. The plurality of activities records are respectively generated by a plurality of device activity reporting programs stored in a plurality of computing devices in the target network system, according to command lines received by the plurality of computing devices. The communication circuit is configured to receive the plurality of activities records through a network. The storage circuit can store a data analysis program. The control circuit couples the communication circuit and the storage circuit, and is configured to execute the data analysis program to generate a discrete space metric tree according to the plurality of activities records and perform a clustering operation on the discrete space metric tree to generate one or more event clusters associated with one or more suspicious event categories. The output device is configured to output the one or more event clusters and allow an information security incident diagnosis system to calculate similar feature information and differential feature information of a plurality of activities records in the one or more event clusters as auxiliary information for diagnosing whether there are intrusions or abnormalities in the target network system. Further see ¶[0041].)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Chiu's event-visualization technique with Masahiro as modified by Kogan to provide the known and expected uses and benefits of Chiu's technique over the operation-log technique of Masahiro as modified by Kogan. The proposed combination would have constituted a mere arrangement of old elements, each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Chiu into Masahiro as modified by Kogan in order to provide high-efficiency data analysis and structured data presentation. (Refer to Chiu, paragraph [0002].)

As per claim 2, in view of claim 1, wherein the acquisition unit is configured to, "based on the captured image before and after the occurrence of the detected operation event being changed, acquire the occurrence position of the operation event." (Masahiro, ¶[0077] discloses that it is possible to select from the log table 120 the row selected by the check box of reproduction 211, and the log reproduction unit 113 acquires a capture image C0 and an enlarged capture image C1 acquired immediately before reproducing the operation Op, compares these capture images with the capture image (P0) and enlarged capture image (P1) acquired by the log collection unit 110, and further compares the character string Wd acquired by the log collection unit 110 with the character strings W0 and W1 acquired by the log reproduction unit 113, thereby improving the accuracy of verification.)

As per claim 3, in view of claim 1, "wherein the classifying unit is configured to classify the images based on at least one of the similarities between the images or similarities of a display position of each image in the captured image." (Kogan, ¶[0056-0058] disclose that, as a result of the above approaches, a collection of screen images of an application under test is obtained, grouped by categories found by the clustering technique. For example, if the screen image includes menu UI elements 616, one of the clusters (first cluster 622) may contain the images of all menu UI elements 616 used in the application. Such clustering is possible without needing to identify that the specific UI elements happen to be menus in particular. Rather, the clustering can detect that those UI elements (which happen to be menu UI elements 616) have the same look and feel, and put them in the same group/cluster. In the example of FIG. 6, UI elements in one application were identified as belonging to the same category for the first cluster 622, and also for the second cluster 624. It is evident that the look and feel of UI elements in a given cluster are similar and belong to the same category, but that UI elements of the first cluster 622 have a different look and feel from the UI elements of the second cluster 624.)

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Masahiro (JP2019106093A), in view of Kogan et al. (US 2018/0174288), in view of Chiu et al. (US 2023/0036609), further in view of Moran et al., "Machine learning-based prototyping of graphical user interfaces for mobile apps," IEEE Transactions on Software Engineering (2018).
As per claim 4, in view of claim 1: Masahiro as modified by Kogan and Chiu does not explicitly disclose the following, which would have been obvious in view of Moran, from a similar field of endeavor:

"wherein the specifying unit is configured to extract a candidate for the GUI component by cropping a first image of a predetermined format and a second image around the first image from the captured image." (Moran, page 8, col. 2, section 3.1.3, second paragraph discloses that the GUI-component detection process produces a set of bounding box coordinates situated within the original input screenshot and a collection of images cropped from the original screenshot according to the derived bounding boxes that depict atomic GUI-components.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Moran's technique of developing graphical user interfaces with Masahiro as modified by Kogan and Chiu to provide the known and expected uses and benefits of Moran's technique over the operation-log technique of Masahiro as modified by Kogan and Chiu. The proposed combination would have constituted a mere arrangement of old elements, each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Moran into Masahiro as modified by Kogan and Chiu in order to provide user-facing software applications that are GUI-centric, to attract customers, facilitate the effective completion of computing tasks, and engage users. (Moran, col. 1, introduction.)

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAGHAYEGH AZIMA, whose telephone number is (571) 272-1459. The examiner can normally be reached Monday-Friday, 9:30-6:30.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAGHAYEGH AZIMA/
Examiner, Art Unit 2671
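For orientation, the pipeline recited in the independent claims (detect an operation event, crop the GUI-component image at the event position, cluster the recorded crops by similarity, emit one log entry per cluster) can be sketched roughly as below. This is an illustrative reconstruction from the claim language only, not the applicant's implementation: the toy "images" (tiny grayscale tuples), the pixel-difference similarity measure, the greedy clustering, and the threshold are all invented for the example.

```python
# Illustrative sketch of the claimed operation-log pipeline (reconstructed from
# claim language only; not the applicant's implementation). "Images" here are
# tiny grayscale tuples standing in for cropped GUI-component screenshots.

def similarity(a, b):
    """Mean absolute pixel difference, inverted to a [0, 1] similarity score."""
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 - diff / 255.0

def cluster_by_similarity(records, threshold=0.9):
    """Greedy clustering: each (image, event) record joins the first cluster
    whose representative image is sufficiently similar, else starts a new one."""
    clusters = []  # list of (representative_image, [member records])
    for img, event in records:
        for rep, members in clusters:
            if similarity(img, rep) >= threshold:
                members.append((img, event))
                break
        else:
            clusters.append((img, [(img, event)]))
    return clusters

def generate_log(clusters):
    """One operation-log line per cluster of GUI-component images."""
    return [f"component#{i}: {len(members)} event(s)"
            for i, (rep, members) in enumerate(clusters)]

# Two clicks on a near-identical component, one click on a different component.
records = [
    ((10, 10, 10, 10), "click@(5,5)"),
    ((12, 11, 10, 9),  "click@(6,5)"),
    ((200, 200, 200, 200), "click@(80,40)"),
]
clusters = cluster_by_similarity(records)
print(generate_log(clusters))  # ['component#0: 2 event(s)', 'component#1: 1 event(s)']
```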

Prosecution Timeline

Aug 07, 2023
Application Filed
Sep 17, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586350
DETERMINING AUDIO AND VIDEO REPRESENTATIONS USING SELF-SUPERVISED LEARNING
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573209
ROBUST INTERSECTION RIGHT-OF-WAY DETECTION USING ADDITIONAL FRAMES OF REFERENCE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561989
VEHICLE LOCALIZATION BASED ON LANE TEMPLATES
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12530867
Action Recognition System
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12525049
PERSON RE-IDENTIFICATION METHOD, COMPUTER-READABLE STORAGE MEDIUM, AND TERMINAL DEVICE
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+11.4%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 350 resolved cases by this examiner. Grant probability derived from career allow rate.
