Prosecution Insights
Last updated: April 19, 2026
Application No. 18/763,083

VIRTUAL REALITY CLINICAL IMMERSION

Non-Final OA: §103, §112

Filed: Jul 03, 2024
Examiner: SAJOUS, WESNER
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: VIRGINIA TECH INTELLECTUAL PROPERTIES, INC.
OA Round: 1 (Non-Final)

Grant Probability: 92% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Grants 92% — above average

Career Allow Rate: 92% (1099 granted / 1196 resolved), +29.9% vs TC avg
Interview Lift: +7.6% among resolved cases with interview (moderate)
Avg Prosecution (typical timeline): 2y 5m
Currently Pending: 29
Total Applications (career history): 1225, across all art units
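These headline figures are simple ratios over the examiner's resolved cases. A minimal sketch of the apparent arithmetic (the additive interview lift and the percentage rounding are assumptions about the dashboard's model, not documented formulas):

```python
# Reproduce the dashboard's headline examiner statistics.
# Assumes "with interview" is a simple additive lift on the career
# allow rate -- an illustrative model, not a documented formula.
granted, resolved = 1099, 1196

career_allow_rate = granted / resolved            # 0.9189... -> "92%"
interview_lift = 0.076                            # +7.6% lift among interviewed cases
with_interview = career_allow_rate + interview_lift

print(f"Career allow rate: {career_allow_rate:.0%}")  # 92%
print(f"With interview:    {with_interview:.0%}")     # 99%
```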

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 33.5% (-6.5% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 1196 resolved cases.
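Each delta above implies the same Tech Center baseline, which is a quick consistency check on the figures (illustrative arithmetic only):

```python
# Back out the Tech Center average implied by each statute's rate
# and its displayed delta (rate = tc_avg + delta).
stats = {
    "101": (0.170, -0.230),
    "103": (0.335, -0.065),
    "102": (0.191, -0.209),
    "112": (0.196, -0.204),
}
for statute, (rate, delta) in stats.items():
    print(f"Section {statute}: implied TC avg = {rate - delta:.1%}")
# All four print 40.0%, i.e. one shared baseline estimate.
```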

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. It is responsive to the submission dated 07/03/2024. Claims 1-20 are presented for examination. Claims 1, 12, and 17 are independent claims.

Information Disclosure Statement

2. The information disclosure statements (IDSs) submitted on 10/03/2024 are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner.

Claim Rejections - 35 USC § 112

3. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

4. Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. In claims 1, 12, and 17, the limitation reciting "placing a graphical representation of a second class over each object visible in the respective 360-degree image that has particular educational content assigned to the object" (see step (b) part (iii) in the claims) renders the claim indefinite for lacking proper antecedent basis for "each object visible". The limitation appears to imply that a plurality of objects was made visible in the captured 360-degree images; however, no such detail can be read from the claimed limitations. As such, the limitation fails to limit the claims. The claims not specifically cited in this rejection are rejected as being dependent upon their rejected base claims. For examination purposes, "each object visible" is interpreted as metadata or content extracted from the one or more 360-degree images.

Claim Rejections - 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over the NPL document to Fukuta et al., entitled "Use of 360° Video for a Virtual Operating Theatre Orientation for Medical Students", in view of Hur et al. (US 20200107008).
Considering claim 1, Fukuta discloses a method comprising: (a) capturing, by one or more cameras, one or more 360-degree images of a medical environment (e.g., "We utilized a 360° camera to generate a 'virtual' 360° video orientation" for a virtual operating theatre orientation for medical students; see page 391, "Methods" and "Discussion" sections of Fukuta), wherein each of the one or more 360-degree images is captured with one of the one or more cameras placed at a respective location (for example, Fukuta discloses that the video was filmed in first-person perspective to improve engagement and to make it more experiential; see page 391, "Methods" section of Fukuta); (b) creating, by one or more processors, a virtual reality environment of the medical environment based on the one or more 360-degree images of the medical environment (e.g., the 360° cameras enable a full 360° view to be filmed in real time. The benefits allow both the depiction of an environment by placing the viewer "within" that environment and, with the use of smartphones, allow the viewer to choose where to focus their attention by moving their own screen. This ability to see an environment as if the viewer is placed within it lends itself well to the concept of an orientation, as it exposes the viewer to that environment in a virtual way. See page 391, "Introduction" section of Fukuta, wherein the one or more processors are intrinsic to the cameras performing the full 360° view. See also the "Video Production" section at page 392), wherein creating the virtual reality environment comprises, for each of the one or more 360-degree images: (i) placing a user viewpoint at the respective location of the respective camera of the one or more cameras that captured the respective 360-degree image (e.g., Fukuta, at page 392, the "Video Production" section, discloses: "To do this we altered a pre-existing head mount to hold our specific camera onto the head of our student actor. The other actors playing the roles of theatre staff were instructed to talk directly to the camera to recreate the feeling of the theatre orientation 'physically' being carried out by the viewer"); and (c) outputting, by the one or more processors and for interactive display at a virtual reality device, the virtual reality environment (for example, Fukuta discloses: 360° video captures a real environment more like one the students are likely to encounter and may have a place in medical education, as it can impart knowledge of specific environments through an interactive experience. See "Discussion" section, page 393).

In addition, by describing utilizing a 360° camera to generate a "virtual" 360° video orientation in a real environment (see "Methods" section) and utilizing a virtual environment of computer-generated images and avatars (see "Discussion" section), Fukuta discloses using one or more processors to control the 360° camera, to create the virtual reality environment, and to output the virtual reality environment to a virtual reality device for interactive display, as claimed.

Fukuta does not explicitly teach (ii) placing a graphical representation of a first class at each other respective location that is visible in the respective 360-degree image and (iii) placing a graphical representation of a second class over each object visible in the respective 360-degree image that has particular educational content assigned to the object.
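To keep the element mapping above readable, here is a minimal sketch of claim 1's recited structure (steps (a) through (c), with sub-steps (b)(i)-(iii)); every name, type, and the rendering stub is hypothetical, drawn from neither the application nor the cited art:

```python
# Hypothetical sketch of claim 1's recited steps. All identifiers are
# illustrative; nothing here is taken from the application or prior art.
from dataclasses import dataclass, field

@dataclass
class Capture360:
    location: tuple                                          # camera placement, step (a)
    image: bytes                                             # the captured 360-degree image
    visible_locations: list = field(default_factory=list)   # other capture points in view
    annotated_objects: dict = field(default_factory=dict)   # object -> educational content

def create_vr_environment(captures):                         # step (b)
    scenes = []
    for cap in captures:
        scene = {
            "viewpoint": cap.location,                       # (b)(i) viewpoint at camera location
            "nav_markers": [                                 # (b)(ii) first-class representations
                {"class": "navigation", "at": loc}
                for loc in cap.visible_locations],
            "content_markers": [                             # (b)(iii) second-class representations
                {"class": "educational", "over": obj, "content": content}
                for obj, content in cap.annotated_objects.items()],
        }
        scenes.append(scene)
    return scenes

def output_environment(scenes):                              # step (c)
    for scene in scenes:
        print("render from viewpoint", scene["viewpoint"])   # stand-in for a VR renderer

cap = Capture360(location=(0, 0, 0), image=b"", visible_locations=[(3, 0, 0)],
                 annotated_objects={"scrub sink": "3-minute wash technique"})
output_environment(create_vr_environment([cap]))
```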
Nevertheless, as to limitations (ii) and (iii), Fukuta, at page 392 and the "Limitations" section, discloses: "A possible future solution would be to embed captions or annotations into different parts of the video to expand on specific points". This implies that certain improvements, based on design choice, can be made to the virtual operating theatre orientation technique of Fukuta, to cause the system's administrator to place annotations, as a first class type, at respective theatre locations where the captured 360-degree images are made visible, to assist the students in seeing the clinical environments; and to embed captions, as a second class type, over each respective 360-degree image as educational content for the medical students, during the process of creating the virtual reality environment associated with the 360-degree images for the students/viewers. The benefit of modifying Fukuta's teachings as such would be to create a 360-degree viewing environment of an operating theatre for training medical students and to make it easy for the students to see the physical space and meet team members, including improving the experience of the students/viewers of the clinical environments and theatre orientation in a way which would feel realistic to the students. See the "Limitations" section of Fukuta.

In the alternative, Hur, in a similar art, discloses an overlay method for a VR media service and a signaling method therefor, wherein an editor authoring 360 video can place overlays on the 360 video. See paras. 288-289. According to Hur, the method is configured to extract metadata and overlay locations from the authored 360 video, according to an author's input, through the decapsulation processing unit and metadata parser, which then encode, decode, and extract video/image, text, and media data with respect to the overlay location; this is later transmitted to the overlay rendering to facilitate the composition process of rendering and placing the overlay of video/image, text, and media data at the respective locations over the 360 video, to thereby be output on the screen. See paras. 290-292. Additionally, Hur further describes that the composited overlay can be provided to a VR media service in the form of: an overlay media track configuration about where and how overlay media and related data information is stored; overlay media packing information on how the overlay media is packed; overlay media projection information about whether projection is applied to the overlay media; overlay media projection and packing information signaling; a method of linking overlay media tracks with VR media tracks; overlay rendering location/size information about when and where the overlay is to be located and how large the overlay should appear when VR media is played; overlay rendering attribute information about whether the overlay should be made to look transparent and how to blend the overlay; overlay miscellaneous information about what other rendering functions of the overlay can be provided; overlay interaction information about whether interaction with the overlay is possible and, if possible, in what range the interaction is possible; and dynamic overlay metadata signaling, including a method of linking an overlay metadata track with an overlay media track and a method of signaling overlay metadata on the overlay media track. See paras. 293-294.
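Hur's signaling, as summarized above, amounts to per-overlay placement and rendering attributes carried alongside the media. A condensed, hypothetical sketch of that metadata shape and the composition step (field names are invented for this illustration and are not Hur's actual syntax):

```python
# Illustrative shape of per-overlay metadata along the lines the
# rejection attributes to Hur (paras. 288-294); all field names are
# invented for this sketch, not taken from Hur.
overlay = {
    "media": {"type": "text", "payload": "Scrub sink: 3-minute wash"},
    "location": {"azimuth_deg": 40.0, "elevation_deg": -5.0},  # where on the 360 sphere
    "size": {"width_deg": 20.0, "height_deg": 8.0},            # how large it should appear
    "render": {"opacity": 0.8, "blend": "alpha"},              # transparency / blending
    "interaction": {"selectable": True},                       # whether it can be selected
}

def compose(frame, overlays):
    """Composition step: place each decoded overlay at its signaled
    location over the 360 frame before output to the screen."""
    frame.setdefault("layers", [])
    for ov in overlays:
        frame["layers"].append((ov["location"], ov["media"]))
    return frame

frame = compose({"image": "360-frame"}, [overlay])
print(len(frame["layers"]))  # 1 overlay placed
```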
Moreover, Hur teaches that the VR media service can transmit the overlay as a plurality of overlays to be placed on one image or a plurality of images, based on location information of each overlay in the image, and that the overlays can be rendered and projected according to media types and sizes and/or resolutions according to the importance of specified locations on the 360 media images. See paras. 303-309 and 314-323. Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Fukuta to include placing a graphical representation of a first class at each other respective location that is visible in the respective 360-degree image and placing a graphical representation of a second class over each object visible in the respective 360-degree image that has particular educational content assigned to the object, when creating the virtual reality environment associated with the 360-degree images, in the same conventional manner as taught by Hur. The motivation to combine would be to enable image processing of selected locations of the 360 images in which users are determined to be interested through gaze analysis, an ROI, and an area that is reproduced first when a user views the 360 video through the VR display. See para. 217 of Hur.

As per claim 2, Fukuta, as modified by Hur, discloses that the one or more 360-degree images comprise a plurality of 360-degree images. See fig. 1 of Fukuta in view of paras. 288-292 of Hur and the rationale above with respect to the rejection of claim 1 for reasons of obviousness.

As per claim 3, Fukuta fails to teach, but Hur teaches, while outputting the virtual reality environment, placing, by the one or more processors, a current viewpoint at the user viewpoint of a first 360-degree image of the plurality of 360-degree images (for example, Hur discloses that 360 video data projected on a 2D image, or having undergone a region-wise packing process, may be partitioned into one or more tiles, wherein region-wise packing may be processing each region of the 360 video data projected on the 2D image in order to improve coding efficiency or to adjust resolution. Tiling may be dividing, by the data encoder, the projected frame or the packed frame into tiles and independently encoding the tiles. When the 360 video data are provided, the user does not simultaneously enjoy all parts of the 360 video data. Tiling may enable the reception side to enjoy or receive only tiles corresponding to an important part or a predetermined part, such as the viewport being viewed by the user, within a limited bandwidth. See para. 208). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Fukuta to include, while outputting the virtual reality environment, placing, by the one or more processors, a current viewpoint at the user viewpoint of a first 360-degree image of the plurality of 360-degree images, in the same conventional manner as taught by Hur. The motivation to combine would have been to enable the 360-content provider to produce a 360 video in consideration of the area of the 360 video in which users are expected to be interested. See para. 210 of Hur.
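The tiling point drawn from Hur's para. 208 is that only the tiles intersecting the user's current viewport need to be delivered. A toy sketch of that selection (the 8x4 grid, the field-of-view handling, and all names are assumptions for illustration, not Hur's implementation):

```python
# Toy viewport-to-tile selection over a tiled 360 frame, in the
# spirit of the tiling summarized from Hur para. 208. The 8x4 grid
# and the simple angular-overlap test are assumptions.
TILE_COLS, TILE_ROWS = 8, 4          # tiles over 360 deg yaw x 180 deg pitch

def tiles_for_viewport(yaw_deg, pitch_deg, fov_deg=90.0):
    """Return (col, row) tiles overlapping the viewport, so the
    reception side can fetch only those within a limited bandwidth."""
    tile_w, tile_h = 360 / TILE_COLS, 180 / TILE_ROWS
    half = fov_deg / 2
    needed = set()
    for col in range(TILE_COLS):
        for row in range(TILE_ROWS):
            center_yaw = col * tile_w + tile_w / 2 - 180
            center_pitch = row * tile_h + tile_h / 2 - 90
            yaw_gap = abs((center_yaw - yaw_deg + 180) % 360 - 180)  # wrap-around distance
            if (yaw_gap <= half + tile_w / 2
                    and abs(center_pitch - pitch_deg) <= half + tile_h / 2):
                needed.add((col, row))
    return needed

print(len(tiles_for_viewport(0.0, 0.0)))  # 16 of 32 tiles for a 90-degree viewport
```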
As per claim 4, Fukuta fails to teach, but Hur teaches, (a) receiving, by the one or more processors, an indication of user input selecting a graphical representation visible from the current viewpoint within the virtual reality environment, wherein the graphical representation belongs to the first class of graphical representation; and (b) in response to receiving the indication of user input: (i) updating, by the one or more processors, the current viewpoint to be the user viewpoint for a second 360-degree image of the plurality of 360-degree images; and (ii) outputting, by the one or more processors and for interactive display at the virtual reality device, the virtual reality environment as depicted from the user viewpoint for the second 360-degree image. See paras. 215-221, 228-230, 290-292, 303-309 and 314-323. Particularly, Hur discloses that the transmission-side feedback-processing unit may deliver the viewport information to the metadata-processing unit to deliver metadata for the viewport area to the internal elements of the 360 video transmission apparatus, where processing is performed on the viewport area based on a region of interest selected by a user when the user first views the 360 video through the VR display. See paras. 215-217 of Hur as basis for covering the teaching in part (a) of the claim. Regarding part (b) of the claim, Hur discloses: "The transmission-processing unit may perform transmission processing on the viewport area of the 360-degree-video related metadata, and the 360-degree video reception apparatus may provide the user with a predetermined area of the 360-degree video as an initial viewport using these three fields and the FOV information. [0229] In some embodiments, the initial viewport indicated by the initial-view related metadata may be changed for each scene. That is, the scenes of the 360-degree video may be changed over time of 360 content. An initial viewport or an initial viewport at which the user views the video first may be changed for every scene of the 360-degree video. In this case, the initial-view related metadata may indicate the initial viewport for each scene. To this end, the initial-view related metadata may further include a scene identifier identifying the scene to which the initial viewport is applied. In addition, the FOV may be changed for each scene. The initial-view related metadata may further include scene-wise FOV information indicating the FOV corresponding to the scene." See paras. 228-230. According to Hur, the method is configured to extract metadata and overlay locations from the authored 360 video, according to an author's input, through the decapsulation processing unit and metadata parser, which then encode, decode, and extract video/image, text, and media data with respect to the overlay location; this is later transmitted to the overlay rendering to facilitate the composition process of rendering and placing the overlay of video/image, text, and media data at the respective locations over the 360 video, to thereby be output on the screen. See paras. 290-292. Moreover, Hur teaches that the VR media service can transmit the overlay as a plurality of overlays to be placed on one image or a plurality of images, based on location information of each overlay in the image, and that the overlays can be rendered and projected according to media types and sizes and/or resolutions according to the importance of specified locations on the 360 media images. See paras. 303-309 and 314-323.

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Fukuta with Hur, in order to enable image processing of selected locations of the 360 images in which users are determined to be interested through gaze analysis, an ROI, and an area that is reproduced first when a user views the 360 video through the VR display. See para. 217 of Hur. The combination of Fukuta and Hur would provide the additional benefit of causing the 360-content provider to produce a 360 video in consideration of the area of the 360 video in which users are expected to be interested. See para. 210 of Hur.
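Claim 4's interaction loop (selecting a first-class marker jumps the current viewpoint to another capture's viewpoint, then re-outputs the environment) can be pictured with a small hypothetical sketch that builds on the claim-1 sketch earlier; none of this code comes from the references:

```python
# Hypothetical sketch of claim 4's interaction: selecting a
# first-class (navigation) marker moves the current viewpoint to a
# second 360-degree image and re-outputs the environment.
class VrSession:
    def __init__(self, scenes):
        self.scenes = scenes     # e.g., output of create_vr_environment() above
        self.current = 0         # index of the active 360-degree image

    def on_select(self, marker):
        if marker["class"] == "navigation":                    # claim 4(a): first-class selection
            self.current = self._scene_index_at(marker["at"])  # claim 4(b)(i): update viewpoint
            viewpoint = self.scenes[self.current]["viewpoint"]
            print("render from viewpoint", viewpoint)          # claim 4(b)(ii): re-output

    def _scene_index_at(self, location):
        return next(i for i, scene in enumerate(self.scenes)
                    if scene["viewpoint"] == location)
```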
As per claim 5, Fukuta, at page 392 and the "Limitations" section, discloses: "A possible future solution would be to embed captions or annotations into different parts of the video to expand on specific points so that the 360° videos may have a place in medical education as it can impart knowledge of specific environments through an interactive experience." This implies that certain improvements, based on design choice, can be made to the virtual operating theatre orientation technique of Fukuta, to cause the system's administrator to place annotations, as either a first class type or a second class type, and to embed captions over each respective 360-degree image as educational content for the medical students, during the process of creating the virtual reality environment associated with the 360-degree images for the students/viewers. Fukuta fails to teach, but Hur teaches, (a) receiving an indication of user input selecting a graphical representation visible from the current viewpoint within the virtual reality environment that belongs to the second class of graphical representation; and (b) in response to receiving the indication of user input: (i) updating, by the one or more processors, the virtual reality environment to include at least a portion of the particular educational content assigned to the selected graphical representation; and (ii) outputting, by the one or more processors, the virtual reality environment with at least the portion of the particular educational content assigned to the selected graphical representation. See paras. 215-221, 228-230, 290-292, 303-309 and 314-323. Particularly, Hur discloses that the transmission-side feedback-processing unit may deliver the viewport information to the metadata-processing unit to deliver metadata for the viewport area to the internal elements of the 360 video transmission apparatus, where processing is performed on the viewport area based on a region of interest selected by a user when the user first views the 360 video through the VR display. See paras. 215-217 of Hur in view of Fukuta as basis for covering the teaching in part (a) of the claim. Regarding part (b) of the claim, Hur discloses: "The transmission-processing unit may perform transmission processing on the viewport area of the 360-degree-video related metadata, and the 360-degree video reception apparatus may provide the user with a predetermined area of the 360-degree video as an initial viewport using these three fields and the FOV information. [0229] In some embodiments, the initial viewport indicated by the initial-view related metadata may be changed for each scene. That is, the scenes of the 360-degree video may be changed over time of 360 content. An initial viewport or an initial viewport at which the user views the video first may be changed for every scene of the 360-degree video. In this case, the initial-view related metadata may indicate the initial viewport for each scene. To this end, the initial-view related metadata may further include a scene identifier identifying the scene to which the initial viewport is applied. In addition, the FOV may be changed for each scene. The initial-view related metadata may further include scene-wise FOV information indicating the FOV corresponding to the scene." See paras. 228-230. According to Hur, the method is configured to extract metadata and overlay locations from the authored 360 video, according to an author's input, through the decapsulation processing unit and metadata parser, which then encode, decode, and extract video/image, text, and media data with respect to the overlay location; this is later transmitted to the overlay rendering to facilitate the composition process of rendering and placing the overlay of video/image, text, and media data at the respective locations over the 360 video, to thereby be output on the screen. See paras. 290-292. Moreover, Hur teaches that the VR media service can transmit the overlay as a plurality of overlays to be placed on one image or a plurality of images, based on location information of each overlay in the image, and that the overlays can be rendered and projected according to media types and sizes and/or resolutions according to the importance of specified locations on the 360 media images. See paras. 303-309 and 314-323. Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Fukuta with Hur, in order to enable image processing of selected locations of the 360 images in which users are determined to be interested through gaze analysis, an ROI, and an area that is reproduced first when a user views the 360 video through the VR display. See para. 217 of Hur. The combination of Fukuta and Hur would provide the additional benefit of causing the 360-content provider to produce a 360 video in consideration of the area of the 360 video in which users are expected to be interested. See para. 210 of Hur.

As per claim 6, Fukuta, as modified by Hur, discloses that the educational content comprises one or more of: (i) a video; (ii) an image; (iii) textual content; and (iv) audio. See fig. 1 and the "Limitations" and "Discussion" sections of Fukuta in view of paras. 288-292 of Hur, and the rationale above with respect to the rejection of claim 1 for reasons of obviousness.

As per claims 7 and 8, the additional feature of claim 7 could have been easily derived by a person skilled in the art from the features of Fukuta, considering that Fukuta provides a solution allowing a user to embed captions or annotations into different parts of the video to expand on specific points, so that the 360° videos may have a place in medical education as they can impart knowledge of specific environments through an interactive experience (see the "Limitations" section of Fukuta). Thus, depicting a first class of graphical representations as graphical discs, and another class as graphical starbursts, are considered normal design actions that the skilled person in the art would take, without an inventive step, when presented with the teachings of Fukuta.
The benefit would be to provide an interactive display facilitating a good viewing experience for the user.

As per claim 9, Fukuta discloses that the medical environment is a room or space in a hospital, medical facility, or treatment facility. See fig. 1 and the "Limitations" and "Discussion" sections of Fukuta, and the rationale above with respect to the rejection of claim 1 for reasons of obviousness.

As per claim 10, Fukuta discloses that the medical environment comprises one or more of: (i) an operating room; (ii) an emergency room; (iii) a clinical room; (iv) a scrubbing room; (v) a treatment room; (vi) a nursing station; (vii) a trauma unit; and (viii) an endoscopy unit. See fig. 1 and the "Limitations" and "Discussion" sections of Fukuta, and the rationale above with respect to the rejection of claim 1 for reasons of obviousness.

As per claim 11, Fukuta discloses that each of the one or more 360-degree images comprises one or more of: (i) static images; and (ii) videos. See fig. 1 and the "Limitations" and "Discussion" sections of Fukuta, and the rationale above with respect to the rejection of claim 1 for reasons of obviousness.

The invention of claim 12 recites features that correspond in scope with the limitations recited in claim 1. As the limitations of claim 1 were found obvious over the combined teachings of Fukuta and Hur, it is readily apparent that the applied prior art performs the underlying elements. As such, the limitations of claim 12 are subject to rejection under the same rationale as claim 1. In addition, by describing utilizing a 360° camera to generate a "virtual" 360° video orientation in a real environment (see "Methods" section) and utilizing a virtual environment of computer-generated images and avatars (see "Discussion" section), Fukuta discloses a computing device comprising one or more processors to control the 360° camera, to create the virtual reality environment, and to output the virtual reality environment to a virtual reality device for interactive display, as claimed.

Claim 13 is rejected under the same rationale as claim 2. Claim 14 is rejected under the same rationale as claim 3. Claim 15 is rejected under the same rationale as claim 4. Claim 16 is rejected under the same rationale as claim 5.

The subject matter of independent claim 17 corresponds, in terms of a non-transitory computer-readable medium, to that of independent method claim 1, and the rationale raised above to reject the latter also applies, mutatis mutandis, to the former. In addition, by describing utilizing a 360° camera to generate a "virtual" 360° video orientation in a real environment (see "Methods" section) and utilizing a virtual environment of computer-generated images and avatars, including the use of software for future iterations (see "Discussion" section), Fukuta discloses using a computing device with one or more processors and a storage device storing instructions to be executed by the one or more processors to control the 360° camera, to create the virtual reality environment, and to output the virtual reality environment to a virtual reality device for interactive display, as claimed.

Claim 18 is rejected under the same rationale as claim 2. Claim 19 is rejected under the same rationale as claim 3. Claim 20 is rejected under the same rationale as claim 5.

Conclusion

7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dryer et al. (US 20240103678) discloses a technique for creating user interfaces for electronic devices, including user interfaces for navigating between and/or interacting with extended reality user interfaces.

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESNER SAJOUS, whose telephone number is (571) 272-7791. The examiner can normally be reached M-F, 10:00 to 7:30 (ET). Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice or email the Examiner directly at wesner.sajous@uspto.gov. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/WESNER SAJOUS/
Primary Examiner, Art Unit 2612
WS 03/06/2026

Prosecution Timeline

Jul 03, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597177
Changing Display Rendering Modes based on Multiple Regions
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597185
METHOD, APPARATUS, AND DEVICE FOR PROCESSING IMAGE, AND STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597203
SIMULATED CONSISTENCY CHECK FOR POINTS OF INTEREST ON THREE-DIMENSIONAL MAPS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12589303
Computer-Implemented Methods for Generating Level of Detail Assets for Dynamic Rendering During a Videogame Session
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12592038
EDITABLE SEMANTIC MAP WITH VIRTUAL CAMERA FOR MOBILE ROBOT LEARNING
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 92%
With Interview: 99% (+7.6%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 1196 resolved cases by this examiner. Grant probability derived from career allow rate.
