Prosecution Insights
Last updated: April 19, 2026
Application No. 19/006,172

Controlling Multiple Views of an Image Data Set and User Interfaces for AR Headsets

Status: Non-Final OA (§103)
Filed: Dec 30, 2024
Examiner: BELOUSOV, ANDREY
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Novarad Corporation
OA Round: 3 (Non-Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 69%, above average (411 granted / 594 resolved; +14.2% vs TC avg)
Interview Lift: +26.6% for resolved cases with an interview (strong)
Typical Timeline: 3y 5m average prosecution; 33 applications currently pending
Career History: 627 total applications across all art units
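The headline numbers above reduce to simple arithmetic on the examiner's resolved cases. Below is a minimal sketch in Python; the assumption that the with-interview figure is the career allow rate plus the interview lift is the editor's (it reproduces the displayed 96% but is not documented by the dashboard).

```python
# Sketch of the arithmetic behind the examiner statistics shown above.
# ASSUMPTION: "with interview" = career allow rate + interview lift;
# this matches the displayed 96% but is inferred, not documented.

granted = 411           # applications granted by this examiner
resolved = 594          # total resolved cases (grants + abandonments)
interview_lift = 0.266  # lift observed for resolved cases with an interview

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")                   # 69.2%, shown as 69%
print(f"With interview:    {allow_rate + interview_lift:.1%}")  # 95.8%, shown as 96%
```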

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 53.9% (+13.9% vs TC avg)
§102: 31.4% (-8.6% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)
Deltas shown vs. Tech Center average estimate • Based on career data from 594 resolved cases
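Because each statute reports both the examiner's rate and a delta, the Tech Center baseline can be recovered by subtraction. A minimal sketch, assuming the delta is a plain percentage-point difference (examiner rate minus TC average):

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and the displayed delta. ASSUMPTION: the delta is a
# simple percentage-point difference.

rates = {  # statute: (examiner rate %, delta vs TC avg %)
    "§101": (2.8, -37.2),
    "§103": (53.9, +13.9),
    "§102": (31.4, -8.6),
    "§112": (8.7, -31.3),
}

for statute, (examiner, delta) in rates.items():
    tc_avg = examiner - delta
    print(f"{statute}: examiner {examiner:.1f}%, implied TC avg {tc_avg:.1f}%")
# Every row implies the same 40.0% baseline, suggesting the dashboard uses a
# single TC-wide estimate rather than per-statute averages.
```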

Office Action

§103
DETAILED ACTION

This action is responsive to the filing of 3/12/26. Claims 1-35 are pending and have been considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-35 are rejected under 35 U.S.C. 103 as being unpatentable over Gibby (2019/0365498) in view of Hashimoto (2022/0284620) and in further view of Berliner (2022/0255995).

Claims 1, 14, 24: Gibby discloses a method for managing a first user view and a second user view of an image data set aligned with a body of a person using AR headsets, comprising: determining the first user view of the image data set (Fig. 4: 418, 420, 422; par. 28, 31) based in part on a first position of a first user in a 3D coordinate space defined using a first AR headset (par. 21; Fig. 1A: 112; par. 23, when a doctor moves view point positions, the acquired medical image 114 can remain fixed in the correct spot with respect to the patient's anatomy and does not move around in the doctor's vision; par. 47, The AR headset can anchor or “pin” virtual images or objects into place with respect to the real environment or room. Once a virtual object or virtual image is locked in place for the viewable environment or real environment, then a user can move around the virtual object or virtual image to view the virtual object from different angles without the object or overlay image moving).

However, Gibby does not explicitly disclose: identifying a second user with a second AR headset at a second position in the 3D coordinate space; sending a first position of a first user in the 3D coordinate space to the second AR headset at the second position; and setting the second user view to the first user view of the image data set through the second AR headset to enable the second user view of the image data set to match the first user view.

Hashimoto discloses a similar method for sharing images in an augmented reality, including: identifying a second user (Fig. 5: User B) with a second AR headset (Fig. 5: 1B HMD) at a second position in the 3D coordinate space (Fig. 5: F_A; par. 69, vector F_A is a vector representing a position of the other terminal 1B in the world coordinate system W_A of the own terminal (the terminal 1A)); and sending a first position of a first user in the 3D coordinate space (par. 68, The respective terminals 1 use information on coordinate values respecting positions of the terminals 1 in the world coordinate systems. In the example of FIG. 5, the coordinate value d_A of the world coordinate system W_A) to the second AR headset at the second position (par. 70, At the time of the coordinate system pairing, the terminal 1A measures the specific directional vector N_A, the inter-terminal vector P_BA, and the coordinate value d_A as amount data 501 of the own terminal side, and transmits these amount data 501 to the terminal 1B). Furthermore, Hashimoto discloses displaying images so that they match in the views of users A and B (Fig. 7: AR display example 2; par. 84, The image 22 at the position 21 is captured in a state where it faces the terminal 1 of the user on both the display surface 5A of the terminal 1A and the display surface 5B of the terminal 1B). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gibby with Hashimoto based on a suggestion for sharing content in Gibby (par. 54, the live video may be recorded or otherwise streamed to another location).

Berliner discloses a similar method for sharing images in an augmented reality, including: setting the second user view to the first user view of the image data set through the second AR headset to enable the second user view of the image data set to match the first user view (par. 444, an additional user interaction changing an orientation of the three-dimensional virtual object for viewing from a particular perspective, and causing the at least one second wearable extended reality appliance to display the three-dimensional object from the particular perspective; par. 617, both wearable extended reality appliances 5811 and 5813 may present content 5815 at the same location and/or orientation). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gibby with Berliner based on a suggestion for sharing content in Gibby (par. 54, the live video may be recorded or otherwise streamed to another location).

Claims 2, 25: Gibby, Hashimoto, and Berliner disclose the method as in claim 1, further comprising displaying at least a portion of the image data set using the second AR headset at the second position from a perspective that matches the first user view of the image data set (Gibby par. 23, when a doctor moves view point positions, the acquired medical image 114 can remain fixed in the correct spot with respect to the patient's anatomy and does not move around in the doctor's vision).

Claim 3: Gibby, Hashimoto, and Berliner disclose the method as in claim 1, further comprising allowing a first user to control the second user view of the image data set or navigation (Berliner par. 444, an additional user interaction changing an orientation of the three-dimensional virtual object for viewing from a particular perspective, and causing the at least one second wearable extended reality appliance to display the three-dimensional object from the particular perspective; par. 617, both wearable extended reality appliances 5811 and 5813 may present content 5815 at the same location and/or orientation).

Claim 4: Gibby, Hashimoto, and Berliner disclose the method as in claim 1, further comprising allowing a second user to control the first user view of the image data set (Berliner par. 445, receiving from the at least one second wearable extended reality appliance third data in response to a detection of a second user interaction with the virtual representation of the object … may detect the second user interaction with the virtual representation of the object (e.g., in a similar manner as the detection of the at least one user interaction associated with the object as described above, e.g., rotation (par. 444))).

Claims 5, 16, 27: Gibby, Hashimoto, and Berliner disclose the method as in claim 1, wherein data representing the first user view is sent to the second AR headset of the second user including at least one of: at least one projection slice viewed from a perspective of the first user, a location of the first user in the 3D coordinate space, a depth of at least one projection slice, a medical device location, object locations; or a pointer location (Berliner par. 444, The additional user interaction may include any action that may change the orientation of the three-dimensional virtual object, such as a rotate gesture, a drag gesture, a tap gesture (e.g., to activate a function to change an orientation), a spread gesture, a click (e.g., via a computer mouse or a touchpad), or any other action for changing an orientation. The particular perspective may include any desired angle from which the three-dimensional virtual object may be viewed).

Claims 6, 17, 28: Gibby, Hashimoto, and Berliner disclose the method as in claim 1, wherein the first user and the second user share user interface functions from their own perspective that are at least one of: altering a position of the image data set in the 3D coordinate system, moving to a different slice in the image data set, rotating a projection slice, and using graphical user interface (GUI) controls from their own perspective (Berliner par. 445, receiving from the at least one second wearable extended reality appliance third data in response to a detection of a second user interaction with the virtual representation of the object … may detect the second user interaction with the virtual representation of the object (e.g., in a similar manner as the detection of the at least one user interaction associated with the object as described above, e.g., rotation (par. 444 – performed by the first user))).

Claims 7, 18, 29: Gibby, Hashimoto, and Berliner disclose the method as in claim 1, wherein the second user performs functions from a perspective of the first user that are at least one of: altering a position of the image data set, dragging to a different slice in the image data set, rotating a projection slice, and using graphical user interface (GUI) controls from the perspective of the second user (Berliner par. 445, as cited for claims 6, 17, 28 above).

Claims 8, 19, 30: Gibby, Hashimoto, and Berliner disclose the method as in claim 1, wherein the first AR headset and the second AR headset are using a common 3D coordinate system in a location (Berliner par. 14, enable the plurality of wearable extended reality appliances to share content in a common coordinate system).

Claims 9, 20, 31: Gibby, Hashimoto, and Berliner disclose the method as in claim 1, further comprising switching to a panel having at least one alternative perspective view of the image data set as defined by the second user view and second user's position (Berliner par. 445, as cited for claims 6, 17, 28 above).

Claims 10, 21, 32: Gibby, Hashimoto, and Berliner disclose the method as in claim 1, wherein at least one navigational view is presented to the second user (Berliner par. 444, as cited for claims 5, 16, 27 above).

Claims 11, 22, 33: Gibby, Hashimoto, and Berliner disclose the method as in claim 10, wherein the at least one navigation view may be at least one of: a view of the image data set that is orthogonal to the second user view, a custom perspective set by the second user (Berliner par. 445, as cited for claim 4 above), a defined view of a medical guide, or a defined view that is locked to a body of a person.

Claims 12, 23, 34: Gibby, Hashimoto, and Berliner disclose the method as in claim 10, wherein the at least one navigation view includes thumbnail views on a user interface bar viewable through the first AR headset or the second AR headset (Berliner Fig. 41, thumbnails (calendar, weather, email, etc.) around the edge of the screen).

Claims 13, 35: Gibby, Hashimoto, and Berliner disclose the method as in claim 1, wherein the at least a portion of the image data set is a slice from the image data set (Gibby par. 33, a medical professional may annotate a 2D slice or layer of an acquired medical image to create an augmentation tag 172).

Claims 15, 26: Gibby, Hashimoto, and Berliner disclose the system as in claim 14, further comprising switching to a control interface of a first user to enable viewing and control of the image data set presentation using the control interface of the first user (Berliner par. 157, In one example, one or more windows of a graphical user interface operating system may be presented on a virtual display. In another example, content presented on a virtual display may be interactive, that is, it may change in reaction to actions of users. In yet another example, a presentation of a virtual display may include a presentation of a screen frame, or may include no presentation of a screen frame; par. 174, input determination module 312 may regulate the operation of input interface 330 in order to receive pointer input 331, textual input 332, audio input 333, and XR-related input 334. Consistent with the present disclosure, input determination module 312 may concurrently receive different types of input data. Thereafter, input determination module 312 may further apply different rules based on the detected type of input. For example, a pointer input may have precedence over voice input).

Response to Arguments

Applicant's arguments filed 3/12/2026 have been fully considered but they are not persuasive. Applicant argues that the combination of Gibby, Hashimoto, and Berliner does not disclose sending a first position of a first user in the 3D coordinate space to the second AR headset at the second position. The Examiner respectfully disagrees. As the Final Rejection of 9/18/25 cited, and as further expanded on in this Office action, Hashimoto explicitly discloses, during position-recognition sharing, sending a first position of a first user in the 3D coordinate space (par. 68, The respective terminals 1 use information on coordinate values respecting positions of the terminals 1 in the world coordinate systems. In the example of FIG. 5, the coordinate value d_A of the world coordinate system W_A) to the second AR headset at the second position (par. 70, At the time of the coordinate system pairing, the terminal 1A measures the specific directional vector N_A, the inter-terminal vector P_BA, and the coordinate value d_A as amount data 501 of the own terminal side, and transmits these amount data 501 to the terminal 1B).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Gibby (2019/0348169) (par. 46: video of the view from the AR headset 108 may be captured by the AR headset 108 and then sent to a remote location, such as to … a remote AR headset or Virtual Reality (VR) headset for viewing by another user).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREY BELOUSOV, whose telephone number is (571) 270-1695 and whose email is Andrew.belousov@uspto.gov. The examiner can normally be reached Monday-Friday EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at telephone number 571-2740. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/Andrey Belousov/
Primary Examiner, Art Unit 2145
3/19/26
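For context on the technology at issue: the rejection turns on one headset transmitting its position so that a second headset can reproduce the same view in a shared 3D coordinate space. The sketch below is a generic 2D rigid-transform illustration of that idea, not Hashimoto's disclosed math or the applicant's claimed method; the function and variable names are the editor's assumptions for illustration only.

```python
# Illustrative sketch of coordinate-system pairing between two AR headsets,
# in the spirit of Hashimoto's terminals 1A/1B. This is NOT the reference's
# or the applicant's actual algorithm: it is a generic 2D rigid-transform
# example, and every name below is hypothetical.
import numpy as np

def make_a_to_b(theta, origin_b_in_a):
    """Return a function mapping points from headset A's world frame to
    headset B's world frame. theta is the angle of B's axes relative to A's;
    origin_b_in_a is B's origin expressed in A's coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    basis_b_in_a = np.array([[c, -s], [s, c]])  # B's basis vectors, in A's frame
    return lambda p_in_a: basis_b_in_a.T @ (np.asarray(p_in_a) - origin_b_in_a)

# Headset A transmits its own position d_A (cf. Hashimoto's coordinate value
# d_A) along with enough pairing data for B to build the transform.
a_to_b = make_a_to_b(theta=np.pi / 2, origin_b_in_a=np.array([2.0, 1.0]))
d_A = np.array([0.5, 0.5])   # user A's position, in A's world frame
print(a_to_b(d_A))           # user A's position, in B's world frame: [-0.5 1.5]
```

Once headset B knows user A's position (and view direction) in its own frame, it can render the shared image data set from A's perspective, which is the "setting the second user view to the first user view" limitation the Examiner maps to Hashimoto and Berliner.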

Prosecution Timeline

Dec 30, 2024: Application Filed
Mar 31, 2025: Non-Final Rejection — §103
Sep 03, 2025: Response Filed
Sep 16, 2025: Final Rejection — §103
Mar 12, 2026: Request for Continued Examination
Mar 18, 2026: Response after Non-Final Action
Mar 19, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602533
CONTENT GENERATION WITH INTEGRATED AUTOFORMATTING IN WORD PROCESSORS THAT DEPLOY LARGE LANGUAGE MODELS
2y 5m to grant • Granted Apr 14, 2026
Patent 12585372
GRAPHICAL USER INTERFACE SYSTEM GUIDE MODULE
2y 5m to grant • Granted Mar 24, 2026
Patent 12586829
SYSTEMS AND METHODS FOR GENERATING ROLL MAP AND MANUFACTURING BATTERY USING ROLL MAP
2y 5m to grant • Granted Mar 24, 2026
Patent 12564733
METHODS FOR OPTIMIZING TREATMENT TIME AND PLAN QUALITY FOR RADIOTHERAPY
2y 5m to grant • Granted Mar 03, 2026
Patent 12536210
AUTOMATED CONTENT CREATION AND CONTENT SERVICES FOR COLLABORATION PLATFORMS
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 96% (+26.6%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 594 resolved cases by this examiner. Grant probability derived from career allow rate.
