Prosecution Insights
Last updated: April 19, 2026
Application No. 18/551,267

Identifying a Place of Interest on a Physical Object Through its 3D Model in Augmented Reality View

Status: Non-Final OA (§103)
Filed: Sep 19, 2023
Examiner: GOOD JOHNSON, MOTILEWA
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Siemens Aktiengesellschaft
OA Round: 3 (Non-Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 73% (608 granted / 831 resolved; +11.2% vs TC avg — above average)
Interview Lift: +14.1% (moderate), among resolved cases with interview
Avg Prosecution: 3y 5m (typical timeline)
Currently Pending: 35
Total Applications: 866 across all art units
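The headline figures above follow from simple arithmetic on the examiner's career record. A quick sketch reproduces them, assuming (as the projections section later suggests) that the dashboard derives the "with interview" figure by adding the interview lift, in percentage points, to the base allow rate:

```python
# Career allow rate from the raw counts shown above.
granted, resolved = 608, 831
allow_rate = 100 * granted / resolved  # ~73.2%

# Hypothetical reconstruction: "with interview" = base rate + lift
# (in percentage points) -- an assumption about how the tool computes it.
interview_lift = 14.1
with_interview = allow_rate + interview_lift  # ~87.3%

print(round(allow_rate), round(with_interview))  # → 73 87
```

Both rounded values match the dashboard's 73% and 87% headline numbers.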

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 24.4% (-15.6% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center average shown as estimate for comparison • Based on career data from 831 resolved cases
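The "vs TC avg" deltas can be cross-checked against the per-statute rates: since each delta is the examiner's rate minus the Tech Center average, subtracting the delta recovers the implied baseline. A sketch using the figures as printed above (note every statute implies the same 40.0% baseline, which presumably reflects how the tool normalizes these estimates):

```python
# (examiner_rate, delta_vs_tc) pairs as printed above, in percent.
stats = {
    "S101": (8.9, -31.1),
    "S103": (48.8, +8.8),
    "S102": (24.4, -15.6),
    "S112": (11.0, -29.0),
}

# delta = examiner_rate - tc_avg, so the implied TC baseline is:
implied_tc = {k: round(rate - delta, 1) for k, (rate, delta) in stats.items()}
print(implied_tc)  # every statute yields a 40.0% implied baseline
```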

Office Action

§103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/14/2026 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cote et al., U.S. Patent Number 10,395,427 B1, in view of Newman et al., U.S. Patent Number 11,386,611 B2.

Regarding claim 1, Cote discloses a method for identifying at least one place of interest on a physical object within a physical environment using augmented reality and a 3D model of the physical object, the method comprising: aligning, in an augmented reality view, the 3D model with the physical object within the physical environment (col. 6, lines 12-13, the augmented reality application aligns the 3-D model with the scene; figure 3); determining motion data by tracking at least one gesture of a user related to the physical object (col. 5, lines 32-34, a sensor detecting hand gestures; configured to receive selections in the augmented reality view); identifying the at least one place of interest by determining at least one intersection point of said at least one gesture and the physical object (col. 6, lines 42-52, user selects the object of interest in the displayed augmented scene using an input device; selection may be made by moving a cursor over the object, in this example over the heat exchanger; using the input device to make a selection); wherein said at least one intersection point is determined by using at least one virtual proximity sensor comprised by the 3D model in relation to the motion data (col. 6, lines 47-48, selection may be made by moving a cursor 460; col. 7, lines 60-65, path may be arrow overlaid upon the scene). However, it is noted that Cote fails to disclose making physical contact with the physical object.
Newman discloses aligning, in an augmented reality view, the 3D model with the physical object within the physical environment (col. 10, lines 59-60, 3D model virtual graphics co-aligned with the physical object); determining motion data by tracking at least one gesture of a user making physical contact with the physical object (col. 9, lines 52-54, user is prompted to touch the respective point, where he sees the equivalent anchor point on the physical object; col. 14, lines 37-39, tracker pointer can be used to indicate 3D position in space, for example touch an anchor point and generate a signal once touching a surface); and identifying the at least one place of interest by determining at least one intersection point of said at least one gesture and the physical object (col. 10, lines 9-11, each recorded "touch" defines a line from the recorded camera's absolute position toward the anchor point in the physical world); wherein said at least one intersection point is determined by using at least one virtual proximity sensor comprised by the 3D model in relation to the motion data (col. 10, lines 20-30, the two closest points on the skew lines are calculated and the point that is at the center of the line connecting these two closest points is defined as the approximation of the anchor point actual position in the physical space; col. 14, lines 54-50, place a virtual camera in the same relative position to the virtual object as the projector lens position is relative to the physical object position; align the two as the virtual object and the physical object).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the user selection of a physical object as disclosed by Cote, determining a user touching a surface or physical object as disclosed by Newman, to track and position the virtual object in the same position relative to the physical object, enabling proper alignment in the same world coordinates.

Regarding claim 2, Cote discloses further comprising: adapting a transparency of the 3D model; and/or acquiring image documentation related to said at least one place of interest (col. 6, lines 53-65, determine the line or symbol of the functional drawing that corresponds to the 3-D element; displays on the display of the electronic device a version of the functional drawing in which the corresponding line or symbol is highlighted; the highlighting may take a number of different forms, including a change to color, texture, intensity or other visual quality on or about the line or symbol that distinguishes it from the rest of the functional drawing).

Regarding claim 3, Cote discloses further comprising storing the at least one place of interest with the 3D model of the physical object (col. 9, lines 64-67, augmented reality application saves the updated version of the 3-D model including the added 3-D element(s) to memory).

Regarding claim 4, Cote discloses further comprising: deleting the at least one place of interest related to the 3D model; and/or modifying the at least one place of interest related to the 3D model; wherein deleting and the modifying are performed by including second motion data determined by tracking at least one second gesture of a user related to the physical object (col. 8, lines 14-30, augments the scene based on the aligned 3-D model, highlighting the object in the physical environment that is represented by the user-selected symbol or line in the functional drawing; the pressure valve is shown highlighted as long as the user is within the threshold distance).

Regarding claim 5, Cote discloses wherein determining of the motion data includes using a 3D-depth sensor in the AR-device (col. 7, lines 36-41, a distance is calculated between the approximate location of the electronic device and the location of the object; if the distance exceeds a threshold, execution proceeds to steps 540-560, where the augmented reality application generates a path in an augmented scene).

Regarding claim 6, Cote discloses further comprising: determining a position of the physical object within the physical environment using a spatial awareness technology, before aligning the 3D model with the physical object within the physical environment; and after identifying the at least one place of interest, storing coordinates of the position of said at least one place of interest of the physical object with a Simultaneous Localization and Mapping (SLAM) map (col. 5, lines 11-19, determination system (e.g., Wi-Fi positioning system client, global positioning system (GPS) client, a tag-based positioning system, etc.) comprising hardware and/or software configured to determine an approximate location of the electronic device, and a precise position determination system comprising hardware and/or software configured to determine a pose of the electronic device).

Regarding claim 7, Cote discloses further comprising categorizing the at least one place of interest into a predefined category by including third motion data determined by tracking at least one third gesture of a user related to the physical object (figure 8A; col. 8, lines 63-67, user makes a modification to the process flow of the functional drawing involving the one or more user-selected lines or symbols using an input device; example functional drawing including added symbols; col. 9, lines 15-18, user may select properties from a catalog or menu; augments the scene adding graphics; col. 9, lines 34-69, user enters input to move (e.g., drags) the added 3-D element(s) in the displayed augmented scene to a new location and orientation).

Regarding claim 8, Cote discloses further comprising labelling the at least one place of interest using an identifier (col. 6, lines 54-57, the modeling application defines correspondence between each element of the 3-D model and a line or symbol of the functional drawing (e.g., via use of a common identifier, pointer, etc.)).

Regarding claim 9, Cote discloses wherein the identifier comprises information regarding: a mistake having been made, an error, a faulty behaviour, an expectancy, a need for maintenance, a need for inspection; and/or a need for quality assurance, related to the physical object and/or said at least one place of interest (col. 1, line 61, worker generally still needs to inspect; col. 6, lines 33-41, the augmented scene shows the physical environment, including an object of interest, in this example, the heat exchanger; also shows overlaid graphics, in this example, graphics based on an alarm).

Regarding claim 10, Cote discloses further comprising adding instructional data to said at least one place of interest, the instructional data comprising part of the 3D model (col. 9, lines 15-19, user may select properties from a catalog or menu; augmented reality application augments the scene adding graphics for the one or more 3-D elements; displaying the added 3-D element(s) on the display).
Regarding claim 11, Cote discloses wherein the instructional data include at least one datum selected from the group consisting of: a work instruction, a working step, a step to be performed, a guidance information, a visual instruction, training information, an instruction for performing an inspection, an instruction for performing quality assurance, an instruction for performing technical service, and/or an instruction for performing a test (col. 1, lines 43-53, to perform maintenance and design tasks; to repair a piece of malfunctioning equipment).

Regarding claim 12, Cote discloses wherein the physical environment includes at least one environment selected from the group consisting of: an industrial environment, an industrial plant, a production plant, an energy plant, a building environment, a hospital environment, and/or a technical environment (col. 1, lines 35-37, may be closely located in the physical plant).

Regarding claim 13, Cote discloses wherein the 3D model is arranged as: a 3D outline, comprising lines and points, or a 3D volume, comprising lines, points, and surfaces (figures 7-8C).

Regarding claim 14, it is rejected based upon a similar rationale as claim 1 above. Cote further discloses a non-transitory memory storing a set of instructions which, when the program is executed by the computational device, cause the computational device to carry out a method (col. 10, lines 9-20).

Response to Arguments

Applicant's arguments, see RCE, filed 01/14/2026, with respect to the rejection(s) of claim(s) 1-14 under 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of 103, Cote in view of Newman. Applicant argues the prior art cited, Cote, fails to disclose the claimed invention.
Applicant argues Cote fails to disclose a virtual proximity sensor and "determining motion data by tracking at least one gesture of a user making physical contact with the physical object". Examiner responds that Newman discloses (col. 10, lines 20-30) that the two closest points on the skew lines are calculated and the point that is at the center of the line connecting these two closest points is defined as the approximation of the anchor point actual position in the physical space; and (col. 14, lines 54-50) place a virtual camera in the same relative position to the virtual object as the projector lens position is relative to the physical object position; align the two as the virtual object and the physical object. Examiner therefore responds that the selection and gesture as detected are determined using a virtual proximity sensor of the 3D model in relation to motion, in that the gesture moves the virtual object that is positioned relative to the physical object by detecting the approximate position of the gesture or hand tracked in 3D space.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: 10,139,918 B2, Bedikian et al. Bedikian discloses (figure 6; col. 24, lines 50-53) that a "virtual control construct" as used herein with reference to an embodiment denotes a geometric locus defined (programmatically) in space and used in conjunction with a control object; and (col. 30, lines 27-40) that engagement targets defined by one or more virtual point constructs or virtual line (i.e., linear or curvilinear) constructs can be mapped onto engagement targets defined as virtual surface constructs, in the sense that the different mathematical descriptions are functionally equivalent.
For example, a virtual point construct may correspond to the point of a virtual surface construct that is pierced by the control object (and a virtual line construct may correspond to a line in the virtual surface construct going through the virtual point construct). If the virtual point construct is defined on a line projecting the control object tip onto the screen, control object motions perpendicular to that line move the virtual point construct in a plane parallel; see also figure 6.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Motilewa Good-Johnson whose telephone number is (571) 272-7658. The examiner can normally be reached Monday-Friday, 6am-2:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOTILEWA GOOD-JOHNSON/
Primary Examiner, Art Unit 2619
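The anchor-point triangulation the rejection attributes to Newman (col. 10, lines 20-30) is a standard geometric construction: two recorded "touch" rays should intersect at the anchor point but are skew in practice, so the closest points on the two lines are found and their midpoint taken as the estimate. A minimal sketch of that construction, for illustration only (not code from either patent):

```python
from typing import Tuple

Vec = Tuple[float, float, float]

def sub(a: Vec, b: Vec) -> Vec:
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a: Vec, b: Vec) -> Vec:
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a: Vec, b: Vec) -> float:
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def anchor_estimate(p1: Vec, d1: Vec, p2: Vec, d2: Vec) -> Vec:
    """Midpoint of the shortest segment between skew lines p1+t*d1 and p2+s*d2."""
    n = cross(d1, d2)
    n2 = dot(n, n)                 # zero only if the rays are parallel
    w = sub(p2, p1)
    t = dot(cross(w, d2), n) / n2  # parameter of closest point on line 1
    s = dot(cross(w, d1), n) / n2  # parameter of closest point on line 2
    c1 = (p1[0] + t*d1[0], p1[1] + t*d1[1], p1[2] + t*d1[2])
    c2 = (p2[0] + s*d2[0], p2[1] + s*d2[1], p2[2] + s*d2[2])
    return ((c1[0]+c2[0])/2, (c1[1]+c2[1])/2, (c1[2]+c2[2])/2)

# Two rays that nearly intersect: the x-axis, and a y-direction ray offset to z=1.
print(anchor_estimate((0, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0)))  # → (0.0, 0.0, 0.5)
```

The returned point splits the 1-unit gap between the two rays, which is the behavior the cited passage describes for approximating the anchor's physical position.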

Prosecution Timeline

Sep 19, 2023: Application Filed
May 05, 2025: Non-Final Rejection — §103
Jul 14, 2025: Response Filed
Sep 15, 2025: Final Rejection — §103
Oct 21, 2025: Response after Non-Final Action
Jan 14, 2026: Request for Continued Examination
Jan 28, 2026: Response after Non-Final Action
Apr 01, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602107: SYSTEM AND METHOD FOR DETERMINING USER INTERACTIONS WITH VISUAL CONTENT PRESENTED IN A MIXED REALITY ENVIRONMENT
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12602884: DISPLAY SYSTEM AND DISPLAY METHOD FOR AUGMENTED REALITY
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12597218: EXTENDED REALITY (XR) MODELING OF NETWORK USER DEVICES VIA PEER DEVICES
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12592047: Method and Apparatus for Interaction in Three-Dimensional Space, Storage Medium, and Electronic Apparatus
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12573100: USER-DEFINED CONTEXTUAL SPACES
Granted Mar 10, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 87% (+14.1%)
Median Time to Grant: 3y 5m
PTA Risk: High

Based on 831 resolved cases by this examiner. Grant probability derived from career allow rate.
