Prosecution Insights
Last updated: April 19, 2026
Application No. 18/704,335

SYSTEM AND METHOD FOR AUTOMATICALLY GENERATING GUIDING AR LANDMARKS FOR PERFORMING MAINTENANCE OPERATIONS

Status: Non-Final OA (§103)
Filed: Apr 24, 2024
Examiner: CHIO, TAT CHI
Art Unit: 2486
Tech Center: 2400 (Computer Networks)
Assignee: B. G. Negev Technologies and Applications Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 73% (610 granted / 836 resolved), +15.0% vs Tech Center average (above average)
Interview Lift: +16.6% higher allow rate among resolved cases with interview
Typical Timeline: 3y 2m average prosecution
Currently Pending: 49 applications
Career History: 885 total applications across all art units

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)

Based on career data from 836 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9, 11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US 2021/0271886 A1) in view of Ramani et al. (US 2021/0134065 A1).

Consider claim 1: Zheng teaches a method for automatically generating ([0007]), comprising: a) training a Machine Learning (ML) model ("The AI module 15 may use stored or learned object data to identify and detect objects seen in the videos and/or may identify the objects through comparison with keywords in the text data or physical motions of the expert seen in the video data." [0083]) to identify and classify: a.1) predetermined parts of said device ([0083]); a.2) tools for performing said maintenance operations by manipulating the state of said parts ([0083]); a.3) hand gestures and manual operations of a professional worker, while using said tools, during manipulation according to a workflow being a sequence of phases for carrying out said maintenance operations, each phase being a predetermined plurality of corresponding operations performed by said professional worker in any order, to be completed before moving to the next phase ([0085]); b) acquiring, by one or more video cameras, video segments of said sequence of maintenance phases ([0009]); c) processing said video segments by a processor and operating software and automatically generating, by said processor and operating software, using the trained model, an interaction file ([0056]) which is adapted to: c.1) encode, using a playable format, said workflow in the form of a collection of landmarks, manual operations, and the relation between them (Fig. 9); c.2) associate each landmark with a corresponding phase (Fig. 9); c.3) determine starting and ending landmarks for each phase (Fig. 9); c.4) determine transitions between completed phases and their corresponding consecutive phases ([0064]); d) playing said interaction file by a player, which is adapted to: d.1) generate graphical guiding visual signs and animations representing each of said phases and transitions ([0063]); d.2) generate audio guiding instructions to be played with corresponding visual signs and animations ([0071]). However, Zheng does not explicitly teach AR and e) adding said graphical guiding visual signs and animations and the audio guiding instructions to an AR user interface, to be worn by said novice user.

Ramani teaches AR ([0010]) and e) adding said graphical guiding visual signs and animations and the audio guiding instructions to an AR user interface, to be worn by said novice user ([0010]-[0011], [0036], [0040], [0055], [0057], [0060]-[0069] and Fig. 20). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of implementing the method taught by Zheng using AR, because such incorporation would improve the tutorial creation process of spatio-temporal tasks by leveraging the advantages of combining an object recognition system with virtual object rendering ([0026]).

Consider claim 2: Zheng teaches the camera is a body camera attached to the forehead of the professional worker (Abstract, [0009], [0012], [0055]).

Consider claim 3: Zheng teaches the processing of the video segments comprises: a) detecting and recognizing the operations carried out by the professional worker ([0056]-[0062]); b) recognizing and tracking the manipulated parts of the device ([0056]-[0062]); c) detecting the order of the operations carried out by the professional worker and grouping together several operations into a sequence ([0056]-[0062]); d) detecting portions of the work scenes that are performed by the professional worker during each phase ([0056]-[0062]).

Consider claim 4: the combination of Zheng and Ramani teaches integrating voice indications to the generated and/or to the played interaction file, for guiding the user via the AR interface ([0056] of Zheng; [0010]-[0011], [0036], [0040], [0055], [0057], [0060]-[0069] and Fig. 20 of Ramani). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of implementing the method taught by Zheng using AR, because such incorporation would improve the tutorial creation process of spatio-temporal tasks by leveraging the advantages of combining an object recognition system with virtual object rendering ([0026]).

Consider claim 5: Zheng teaches the operations within each phase are performed by the professional worker or by the novice user, according to any order ([0012]-[0015], [0020], [0055]-[0062]).

Consider claim 6: Ramani teaches the AR user interface is a smart helmet or smart glasses with AR capability ([0010]-[0012], [0026]-[0028], [0057]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of implementing the method taught by Zheng using AR, because such incorporation would improve the tutorial creation process of spatio-temporal tasks by leveraging the advantages of combining an object recognition system with virtual object rendering ([0026]).

Consider claim 7: the combination of Zheng and Ramani teaches the interaction file is played ([0050]-[0059] of Ramani) according to the following steps: a) arranging the workflow according to progress in the workflow steps and termination points ([0077]-[0082] of Zheng); b) detecting the work scene and the target parts at each step ([0036]-[0039] of Ramani); c) drawing markers on the scene, to guide the novice user while carrying out the workflow operations ([0044]-[0049] of Ramani); and d) performing a validation process by the operating software of the AR interface device, to ensure that any operation was completed successfully according to the workflow, wherein progress in playing said interaction file is made according to said workflow and the transition from a phase to the next phase is done upon detecting that all the mandatory operations in a current phase have been completed by the user ([0050]-[0059] of Ramani). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of implementing the method taught by Zheng using AR, because such incorporation would improve the tutorial creation process of spatio-temporal tasks by leveraging the advantages of combining an object recognition system with virtual object rendering ([0026]).

Consider claim 9: the combination of Zheng and Ramani teaches integrating voice indications into the played AR, for guiding the user ([0056] of Zheng; [0010]-[0011], [0036], [0040], [0055], [0057], [0060]-[0069] and Fig. 20 of Ramani). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of implementing the method taught by Zheng using AR, because such incorporation would improve the tutorial creation process of spatio-temporal tasks by leveraging the advantages of combining an object recognition system with virtual object rendering ([0026]).

Consider claim 11: Zheng teaches the operations within each phase are performed by the professional worker or by the novice user according to any order ([0012]-[0015], [0020], [0055]-[0062]).

Consider claim 15: claim 15 recites the system that performs the method recited in claim 1. Thus, it is rejected for the same reasons.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Zheng et al. (US 2021/0271886 A1) in view of Ramani et al. (US 2021/0134065 A1) and Arshad et al. (US 2019/0354761 A1).

Consider claim 8: the combination of Zheng and Ramani teaches all the limitations in claim 1 but does not explicitly teach the Interaction File is a JavaScript Object Notation (JSON) file or an Extensible Markup Language (XML) file. Arshad teaches the Interaction File is a JavaScript Object Notation (JSON) file or an Extensible Markup Language (XML) file ([0072]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of generating the interaction file in JavaScript Object Notation or the Extensible Markup Language format, because such incorporation would allow presenting at least a subset of the information contained in the procedural workflow data to at least one user through a browser application ([0072]).
Allowable Subject Matter

Claims 12-14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAT CHI CHIO, whose telephone number is (571) 272-9563. The examiner can normally be reached Monday-Thursday, 10am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JAMIE J ATALA, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAT C CHIO/
Primary Examiner, Art Unit 2486
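The rejected claims center on an "interaction file" that encodes the workflow as landmarks, phases, and transitions (claim 1, elements c.1-c.4) and may be serialized as JSON or XML (claim 8). Neither the office action nor this summary reproduces the application's actual schema, so the sketch below is a purely hypothetical illustration of such a structure; every field name and value is invented for this example.

```python
import json

# Hypothetical interaction-file layout. All keys and values are invented
# for illustration; the application's real schema is not in the record.
interaction_file = {
    "workflow": "example-maintenance-task",
    "phases": [
        {
            "id": "phase-1",
            "start_landmark": "part-A",   # c.3: starting landmark
            "end_landmark": "part-B",     # c.3: ending landmark
            # c.1: operations within a phase may complete in any order
            "operations": [
                {"action": "unscrew", "part": "part-A", "tool": "wrench"},
                {"action": "remove", "part": "part-B", "tool": None},
            ],
            # c.4: transition fires once all mandatory operations finish
            "transition": {"to": "phase-2", "when": "all_operations_complete"},
        },
    ],
}

# Claim 8 alternative A: the same structure serialized as a JSON file.
serialized = json.dumps(interaction_file, indent=2)
```

An XML serialization, the claim's other alternative, would carry the same landmark/phase/transition structure in element form.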

Prosecution Timeline

Apr 24, 2024
Application Filed
Oct 30, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587653: Spatial Layer Rate Allocation (granted Mar 24, 2026; 2y 5m to grant)
Patent 12549764: THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE (granted Feb 10, 2026; 2y 5m to grant)
Patent 12549845: CAMERA SETTING ADJUSTMENT BASED ON EVENT MAPPING (granted Feb 10, 2026; 2y 5m to grant)
Patent 12546657: METHODS AND SYSTEMS FOR REMOTE MONITORING OF ELECTRICAL EQUIPMENT (granted Feb 10, 2026; 2y 5m to grant)
Patent 12549710: MULTIPLE HYPOTHESIS PREDICTION WITH TEMPLATE MATCHING IN VIDEO CODING (granted Feb 10, 2026; 2y 5m to grant)
List reflects this examiner's 5 most recent grants; studying what changed in each prosecution can show how to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73% (90% with interview, a +16.6% lift)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 836 resolved cases by this examiner. Grant probability derived from career allow rate.
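The note above says the grant probability derives from the examiner's career allow rate. Assuming the straightforward arithmetic the displayed figures imply (an inference on my part, not a documented methodology), the headline numbers reproduce directly from the raw counts:

```python
# Reproducing the dashboard's headline projections from the counts shown
# above. The formulas are inferred from the displayed figures; the tool's
# actual methodology is not documented here.
granted, resolved = 610, 836

allow_rate = granted / resolved              # career allow rate
with_interview = allow_rate + 0.166          # apply the +16.6% interview lift

print(f"Grant probability: {allow_rate:.0%}")       # 73%
print(f"With interview:    {with_interview:.0%}")   # 90%
```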
