Prosecution Insights
Last updated: April 17, 2026
Application No. 18/371,390

Enabling Multiple Virtual Reality Participants to See Each Other

Final Rejection: §103, Double Patenting
Filed: Sep 21, 2023
Examiner: WU, MING HAN
Art Unit: 2618
Tech Center: 2600 (Communications)
Assignee: unknown
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 282 granted / 370 resolved; +14.2% vs TC avg)
Interview Lift: +23.3% (strong; allowance rate with vs. without an interview, among resolved cases with interview)
Avg Prosecution: 2y 8m (typical timeline; 35 currently pending)
Total Applications: 405 (career history, across all art units)
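The headline allow rate above is simply the grant ratio from the card data; a quick sanity check (a hypothetical one-liner, not part of the source data):

```python
# Recompute the career allow rate from the figures shown above:
# 282 granted out of 370 resolved applications.
granted, resolved = 282, 370
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # prints "76.2%", displayed as 76% in the card
```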

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 68.3% (+28.3% vs TC avg)
§102: 2.1% (-37.9% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Comparisons use a Tech Center average estimate as the baseline • Based on career data from 370 resolved cases

Office Action

Rejections: §103, Double Patenting
DETAILED ACTION

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

35 U.S.C. § 112(f) Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

Regarding claim 6, the claim limitation has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses the generic placeholder "inertial motion unit" coupled with the functional language "to estimate" without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier. If applicant wishes to provide further explanation or dispute the examiner's interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.
If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim(s) so that it/they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim(s) recite sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011). The dependent claims are also interpreted under 35 U.S.C. 112(f) due to their dependency from claim 6, for the same reason given above.

DOUBLE PATENTING

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1 - 8 of the current application are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 - 11 of US Patent 12,120,287 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of the current application claims are essentially covered by the limitations of the patent claims.

Claims 9 - 12 of the current application are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 13 of US Patent 12,120,287 B2.
Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of the current application claims are essentially covered by the limitations of the patent claims.

Claim 13 of the current application is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 13 of US Patent 12,120,287 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the limitations of the current application claims are essentially covered by the limitations of the patent claims.

Also, shown below is a mapping between the limitations of independent claim 1 of current application U.S. Patent Application 17/313176 and independent claim 1 of US Patent 12,120,287 B2.

Claim 1 (current application): A system for viewing in a structure having a first participant and at least a second participant comprising: a first VR headset to be worn by the first participant, the first VR headset having an inertial motion unit, and at least a first camera; a first computer; a first hard-wired connection between the first computer and the first VR headset; a second VR headset to be worn by the second participant, the second VR headset having an inertial motion unit, and at least a second camera, each participant sees every other participant in the structure as every other participant physically appears in the structure in real time in a simulated world simultaneously displayed about them by the respective VR headset each participant is wearing, each participant sees the simulated world from their own correct perspective in the structure; a network interface; a network connection between the first computer and the network interface; a marker attached to the structure for the first and second VR headsets to determine locations of the first and second participants wearing the first and second VR headsets, respectively, in the structure and their own correct perspective in the structure; and coloring on at least a portion of the structure so the portion of the structure with coloring does not appear in the simulated world.

Claim 1 (US Patent 12,120,287 B2): An apparatus for viewing in a structure having a first participant and at least a second participant comprising: a first inside-out tracked VR headset to be worn by the first participant, the first VR headset having an inertial motion unit, a first computer, and at least a first camera; a second inside-out tracked VR headset to be worn by the second participant, the second VR headset having an inertial motion unit, a second computer, and at least a second camera [struck-through in the mapping as filed: each participant sees every other participant in the structure as every other participant physically appears in the structure in real time in a simulated world displayed about them by the respective inside-out tracked VR headset each participant is wearing, each participant sees the simulated world from their own correct perspective in the structure]; a marker attached to the structure for the first and second VR headsets to determine locations of the first and second participants wearing the first and second VR headsets, respectively, in the structure and their own correct perspective in the structure; and coloring on at least a portion of the structure so the portion of the structure with coloring does not appear in the simulated world, each participant sees every other participant in the structure as every other participant physically appears in the structure in real time in a simulated world displayed about them by the respective inside-out tracked VR headset each participant is wearing, each participant sees the simulated world from their own correct perspective in the structure using only the marker, the coloring and their own VR headset they are wearing.
Claim 9 (current application): A method for viewing in a structure having a first participant and at least a second participant comprising the steps of: sending from a first VR headset on a first participant via a first wired connection to a first computer, associated with the first participant, position and orientation of the first VR headset; sending from a second VR headset on a second participant via a second wired connection to a second computer, associated with the second participant, position and orientation of the second VR headset; sending left/right image pairs from a first stereo color camera of the first VR headset via the first wired connection to the first computer; sending left/right image pairs from a second stereo color camera of the second VR headset via the second wired connection to the first computer; compositing by the first computer the left/right image pairs from the first stereo color camera over a rendered virtual reality scene wherever pixels of the left/right image pairs from the first stereo color camera are a predesignated color to create first resulting composite images; compositing by the second computer the left/right image pairs from the second stereo color camera over the rendered virtual reality scene wherever pixels of the left/right image pairs from the second stereo color camera are the predesignated color to create second resulting composite images; sending from the first computer to the first VR headset the first resulting composite images via the first wired connection to be displayed in the first VR headset; and sending from the second computer to the second VR headset the second resulting composite images via the second wired connection to be displayed in the second VR headset.

Claim 13 (US Patent 12,120,287 B2): A method for a first participant and at least a second participant viewing in a structure comprising the steps of: the first participant and the second participant walking around together in a virtual world shown to the first participant by a first inside-out tracked VR headset worn by the first participant and shown to the second participant by a second inside-out tracked VR headset worn by the second participant, the virtual world is an entire world around them that is simulated and displayed in each inside-out tracked VR headset, the first participant and the second participant are in physical sight of each other in the structure and see each other in the structure in the virtual world while viewing the virtual world; and the first participant and the second participant seeing the virtual world from their own correct perspective in the structure, the first VR headset having an inertial motion unit, a first computer, and at least a first camera, the second VR headset having an inertial motion unit, a second computer, and at least a second camera, there is a marker attached to the structure for the first and second VR headsets to determine locations of the first and second participants wearing the first and second VR headsets, respectively, in the structure and their own correct perspective in the structure, there is coloring on at least a portion of the structure so the portion of the structure with coloring does not appear in the simulated world, each participant sees every other participant in the structure as every other participant physically appears in the structure in real time in a simulated world displayed about them by the respective inside-out tracked VR headset each participant is wearing, each participant sees the simulated world from their own correct perspective in the structure using only the marker, the coloring and their own VR headset they are wearing.

[Page header: Amdt. dated May 22, 2024; Reply to Office action of February 22, 2024]
Claim 13 (current application): A method for viewing in a structure having a first participant and at least a second participant comprising the steps of: streaming view-independent scene data to each computer of a plurality of computers of the first and second participants; determining by each VR headset of a plurality of headsets each VR headset's own position and orientation via inside-out tracking; sending position and orientation of each VR headset via a wired data connection to each participant's computer; each computer using the position and orientation and view-independent scene data to render left and right eye views of a virtual scene; sending via the wired connection to the computer of each participant, left/right image pairs from a stereo color camera of each VR headset of each participant; each computer compositing the left/right image pairs over a rendered scene wherever camera pixels are green; and sending resulting composite images from each computer to each associated VR headset via the wired data connection to be displayed in the associated VR headset.

Claim 13 (US Patent 12,120,287 B2): A method for a first participant and at least a second participant viewing in a structure comprising the steps of: the first participant and the second participant walking around together in a virtual world shown to the first participant by a first inside-out tracked VR headset worn by the first participant and shown to the second participant by a second inside-out tracked VR headset worn by the second participant, the virtual world is an entire world around them that is simulated and displayed in each inside-out tracked VR headset, the first participant and the second participant are in physical sight of each other in the structure and see each other in the structure in the virtual world while viewing the virtual world; and the first participant and the second participant seeing the virtual world from their own correct perspective in the structure, the first VR headset having an inertial motion unit, a first computer, and at least a first camera, the second VR headset having an inertial motion unit, a second computer, and at least a second camera, there is a marker attached to the structure for the first and second VR headsets to determine locations of the first and second participants wearing the first and second VR headsets, respectively, in the structure and their own correct perspective in the structure, there is coloring on at least a portion of the structure so the portion of the structure with coloring does not appear in the simulated world, each participant sees every other participant in the structure as every other participant physically appears in the structure in real time in a simulated world displayed about them by the respective inside-out tracked VR headset each participant is wearing, each participant sees the simulated world from their own correct perspective in the structure using only the marker, the coloring and their own VR headset they are wearing.

US Patent 12,120,287 B2 does not explicitly disclose the limitations of claims 1 and 9: "computer which is adapted to be plugged into a wall outlet for power". However, the features are well known in the art, as evidenced by Schmidt. In the same field of endeavor, Schmidt discloses: computer which is adapted to be plugged into a wall outlet for power (column 9 line 40 - special-purpose computer devices may be used such as desktop computer system. It is known that a desktop computer gets its power through a plug in the wall.) Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 with a computer which is adapted to be plugged into a wall outlet for power as taught by Schmidt. The motivation for doing so is to easily generate imagery.
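The compositing recited in claims 9 and 13 is a standard chroma-key operation: wherever a camera pixel matches the predesignated color (green in claim 13), the rendered VR scene shows through, and everywhere else the live camera pixel, such as another participant, is kept. A minimal NumPy sketch of that operation (hypothetical; the function and parameter names are illustrative, not from the record):

```python
import numpy as np

def composite_over_scene(camera_img, rendered_scene, key=(0, 255, 0), tol=60):
    """Chroma-key composite: where a camera pixel is close to the
    predesignated key color (green by default), show the rendered
    scene; elsewhere keep the camera pixel so co-located participants
    remain visible inside the simulated world."""
    diff = camera_img.astype(np.int32) - np.array(key, dtype=np.int32)
    keyed = np.abs(diff).sum(axis=-1) < tol  # True where pixel ~ key color
    out = camera_img.copy()
    out[keyed] = rendered_scene[keyed]
    return out
```

In the claimed method this would run once per eye on each participant's computer, producing the left/right composite images that are sent back over the wired connection for display in that participant's headset.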
US Patent 12,120,287 B2 does not explicitly disclose the limitations of claim 13: "each computer is adapted to be plugged into a wall outlet for power". However, the features are well known in the art, as evidenced by Schmidt. In the same field of endeavor, Schmidt discloses: each computer is adapted to be plugged into a wall outlet for power (column 9 line 40 - special-purpose computer devices may be used such as desktop computer system. It is known that a desktop computer gets its power through a plug in the wall.) Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 with each computer adapted to be plugged into a wall outlet for power as taught by Schmidt. The motivation for doing so is to easily generate imagery.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 - 13 are rejected under 35 U.S.C. 103 as being unpatentable over Perlin1 et al. (Publication: US 2020/0241296 A1) in view of Perlin2 et al. (Publication: US 2008/0024598 A1) and Schmidt et al. (Patent: US 11,158,073 B2).

Regarding claim 1, Perlin1 discloses a system for viewing in a structure having a first participant and at least a second participant comprising ([0022] - FIG. 6. The system 10 may include a computer processor associated with each headset 14 which performs a sensor 22 fusion computation from the infrared tracking data captured by the infrared camera 28 and an IMU 8. The computer processor may use the infrared tracking data captured by the infrared camera 28 and the IMU 8 to generate the display image 20 on the display screen 18 of each headset 14 as well as correct spatial audio so that the co-located participants 111 both see and hear the display image 20 from the co-located participants' position and orientation in the auditorium 12. The computer microprocessor 27 may produce a silhouette in the display image 20 of a co-located participant. [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, "first and second participant"): a first VR headset to be worn by the first participant, the first VR headset having an inertial motion unit, and at least a first camera ([0022] - The system 10 may include a computer processor associated with each headset 14 which performs a sensor 22 fusion computation from the infrared tracking data captured by the infrared camera 28 and an IMU 8 "inertial motion unit". [0056] The computer processor can be physically contained within the headset 14, but does not need to be.
In one embodiment the processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8, "multiple systems worn by first or second participants." [0088] - headset 14 contains two cameras, one for each eye. [0065] - mixed reality users who can be supported using this tracking technique wear headsets to create the visual illusion for every user that all users are inhabiting the same virtual space which is mapped onto their shared physical space in a way that is consistent across all users, "multiple systems worn by first or second participants, first and second cameras and computers…".); a first computer ([0056] The computer processor can be physically contained within the headset 14, but does not need to be. In one embodiment the processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8.); a first hard-wired connection between the first computer and the first VR headset ([0065] - mixed reality users who can be supported using this tracking technique wear headsets to create the visual illusion for every user that all users are inhabiting the same virtual space which is mapped onto their shared physical space in a way that is consistent across all users, "multiple systems worn by first or second participants, first and second cameras and computers…". [0056] The computer processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8. The computer processor is attached to the headset 14 via a wire that can provide both power and data connection with the headset 14.
It is known that, at the time of the effective filing date of the claimed invention, there has to be a network interface between a computer processor and another device to which it is physically attached.); a second VR headset to be worn by the second participant, the second VR headset having an inertial motion unit, and at least a second camera, each participant sees every other participant in the structure as every other participant physically appears in the structure in real time in a simulated world simultaneously displayed about them by the respective VR headset each participant is wearing, each participant sees the simulated world from their own correct perspective in the structure ([0022] - The system 10 may include a computer processor associated with each headset 14 which performs a sensor 22 fusion computation from the infrared tracking data captured by the infrared camera 28 and an IMU 8 "inertial motion unit". [0056] The computer processor can be physically contained within the headset 14, but does not need to be. In one embodiment the processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8, "multiple systems worn by first or second participants." [0088] - headset 14 contains two cameras, one for each eye. [0065] - mixed reality users who can be supported using this tracking technique wear headsets to create the visual illusion for every user that all users are inhabiting the same virtual space which is mapped onto their shared physical space in a way that is consistent across all users. [0056] The computer processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8. The computer processor is attached to the headset 14 via a wire that can provide both power and data connection with the headset 14.
It is known that, at the time of the effective filing date of the claimed invention, there has to be a network interface between a computer processor and another device to which it is physically attached. [0064], [0007] - an arbitrarily large number of headsets (each with its own IMU 8) are being tracked simultaneously, wherein the participants in the auditorium all experience a same shared immersive computer animated and spatial audio cinematic content "in real time in a simulated world simultaneously displayed".); a network interface ([0056] The computer processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8. The computer processor is attached to the headset 14 via a wire that can provide both power and data connection with the headset 14. It is known that, at the time of the effective filing date of the claimed invention, there has to be a network interface between a computer processor and another device to which it is physically attached.); a network connection between the first computer and the network interface ([0056] The computer processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8. The computer processor is attached to the headset 14 via a wire that can provide both power and data connection with the headset 14. It is known that, at the time of the effective filing date of the claimed invention, there has to be a network interface between a computer processor and another device to which it is physically attached.); a marker attached to the structure for the first and second VR headsets to determine locations of the first and second participants wearing the first and second VR headsets, respectively, in the structure and their own correct perspective in the structure ([0006], [0022] - The headset comprises an infrared camera mounted on the housing.
The headset comprises a microprocessor having an internal IMU mounted to the housing which processes a location image captured by the camera to determine a position and orientation of the participant headset which is used to generate the display image on the display. Infrared "marker" [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, "first and second VR headset".). Perlin1 does not disclose, but Perlin2 discloses, coloring on at least a portion of the structure for the portion of the structure with coloring ([0011] - the apparatus comprises a light blocking shutter disposed in front of the display screen forming a stripe pattern which lets through only 1/3 of each stripe of the image on the display screen during each of the at least three distinct phases. The apparatus comprises a computer connected to the display screen and the light blocking shutter which changes the phases so in each phase the stripe pattern is shifted laterally, which renders 2 3D scenes corresponding to the eyes of the observer, which produces a proper left/right orientation pattern for each of the three phases and which interleaves the left/right orientations into three successive time phases as red, green and blue. [0030] - There is the step of displaying on a display screen 12 stripes of the image in at least three distinct phases. There is the step of forming a stripe pattern which lets through only 1/3 of each stripe of the image on the display screen 12 during each of the at least three distinct phases with a light blocking shutter 14 disposed in front of the screen.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 with coloring on at least a portion of the structure for the portion of the structure with coloring as taught by Perlin2. The motivation for doing so is to give a true stereoscopic view of simulated objects.
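Perlin2's shutter, as quoted above, passes only 1/3 of each stripe per phase and shifts the pattern laterally each phase, so the three phases together cover the full screen. A toy sketch of that mask logic, in one spatial dimension only (the names are hypothetical, not from either reference):

```python
def shutter_mask(width, phase, n_phases=3):
    """Light-blocking shutter mask for one phase: column x is open
    (lets light through) only when x falls in this phase's third of
    the stripe pattern; incrementing `phase` shifts the open slits
    laterally by one column."""
    return [x % n_phases == phase for x in range(width)]

# The three phases tile the screen: every column is open in exactly one phase.
masks = [shutter_mask(6, p) for p in range(3)]
```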
Perlin1 in view of Perlin2 do not disclose, but Schmidt discloses, a computer which is adapted to be plugged into a wall outlet for power (column 9 line 40 - special-purpose computer devices may be used such as desktop computer system. It is known that a desktop computer gets its power through a plug in the wall.); coloring on at least a portion so the portion with coloring does not appear in the scene (column 6 line 45 - possibly an object such as a green screen 610 that is designed to be captured in a live scene recording in such a way that it is overlaid with computer-generated imagery, thus the green color does not appear in the scene.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 with a computer which is adapted to be plugged into a wall outlet for power, and coloring on at least a portion so the portion with coloring does not appear in the scene, as taught by Schmidt. The motivation for doing so is to easily generate imagery.

Regarding claim 2, Perlin1 in view of Perlin2 and Schmidt disclose all the limitations of claim 1. Perlin1 discloses including a second computer and a second hard wired connection between the second computer and the second VR headset ([0056] The computer processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8. [0065] - mixed reality users who can be supported using this tracking technique wear headsets to create the visual illusion for every user that all users are inhabiting the same virtual space which is mapped onto their shared physical space in a way that is consistent across all users, "multiple systems worn by first or second participants, first and second cameras and computers…".).

Regarding claim 3, Perlin1 in view of Perlin2 and Schmidt disclose all the limitations of claim 2.
Perlin1 discloses wherein the network connection includes a third hard wired connection between the first computer and the network interface and a fourth hard wired connection between the second computer and the network interface ([0056] The computer processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8. The computer processor is attached to the headset 14 via a wire that can provide both power and data connection with the headset 14. It is known that, at the time of the effective filing date of the claimed invention, there has to be a network interface between a computer processor and another device to which it is physically attached. [0065] - mixed reality users who can be supported using this tracking technique wear headsets to create the visual illusion for every user that all users are inhabiting the same virtual space which is mapped onto their shared physical space in a way that is consistent across all users, "multiple systems worn by first or second participants, first and second cameras and computers…").

Regarding claim 4, Perlin1 in view of Perlin2 and Schmidt disclose all the limitations of claim 2. Perlin1 discloses content, in a form of time varying view-independent scene data ([0068] - a cluster of such LEDs 44, can be co-located with the pattern-emitting IR laser 38. During video capture frames for which such a burst is present, the target will appear to each headset's camera 28 as a bright rectangle. Various temporal patterns of such bursts can be used to trigger different software events. For example, as shown in FIG. 5, three bursts in a row at successive intervals of 500 msec (1) can signal all headsets to simultaneously begin showing the same previously recorded cinematic content.
A sequence of periodic bursts at regular time intervals, where the duration of each interval can be three seconds in one embodiment (2), can then subsequently be used to maintain synchronization of this content, despite any variation in the rates of the computer clocks in each of the headsets, “different configurations of bursts have their own view-independent scene data”.). Perlin2 discloses three-dimensional scene data ([0008] - simulated three dimensional scenes.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Schmidt with three-dimensional scene data as taught by Perlin2. The motivation for doing so is to give a true stereoscopic view of simulated objects.

Regarding claim 5, Perlin1 in view of Perlin2 and Schmidt disclose all the limitations of claim 4. Perlin1 discloses the content is either pre-stored on each of the first and second computers or, alternatively, simultaneously streamed to each of the first and second computers from a server via the third and fourth wired connections or, alternatively, simultaneously broadcast from the server to the first and second computers via a wireless network ([0051] - Users sit down in their seats in the auditorium 12, and each user puts on a headset 14. All users then see and hear pre-recorded content.).

Regarding claim 6, Perlin1 in view of Perlin2 and Schmidt disclose all the limitations of claim 4, including an inertial motion unit and the first and second VR headsets. Perlin1 discloses the inertial motion unit in the first and second VR headsets is used to perform ([0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14 to perform viewing, “first and second VR headsets”.) display of the VR image ([0006], [0022] - The headset comprises an infrared camera mounted on the housing.
The headset comprises a microprocessor having an internal IMU mounted to the housing which processes a location image captured by the camera to determine a position and orientation of the participant headset, which is used to generate the display image on the display.). Perlin2 discloses to estimate a rotation of the first and second participant's head, respectively, in both yaw and pitch from a moment in time when a stereo image pair is captured by the first and second cameras, respectively, to a later moment in time when a final composited scene is displayed on the devices ([0008] - present a single observer with an artifact-free autostereoscopic view of simulated or remotely transmitted three dimensional scenes. The observer should be able to move or rotate their head freely in three dimensions, while always perceiving proper stereo separation. [0074] - produce left/right images, to interleave them to create a red/green/blue composite, and to put the result into an on-screen frame buffer for display. [0082] - By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots.);
respectively, the rotation is used to perform a two dimensional image shift - specifically, a horizontal shift based on a change in head yaw and a vertical shift based on a change in head pitch - on both left and right camera images of the first and second cameras before the left and right camera images of the first and second cameras are composited with the simulated world, so that they appear in a correct direction with respect to the observing participant in a final composited and displayed stereo image in the simulated world of the observing participant ([0008] - present a single observer with an artifact-free autostereoscopic view of simulated or remotely transmitted three dimensional scenes. The observer should be able to move or rotate their head freely in three dimensions, while always perceiving proper stereo separation. [0082] - A system based on this principle sends a small infrared light from the direction of a camera during only the even video fields. The difference image between the even and odd video fields will show only two glowing spots, locating the observer's left and right eyes, respectively. By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots. The lateral shift between the respective eye spots in these two images is measured, to calculate the distance of each eye. [0074] - produce left/right images, to interleave them to create a red/green/blue composite, and to put the result into an on-screen frame buffer for display.)
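For illustration only, a minimal sketch of the kind of two-dimensional image shift described above (horizontal shift from a change in yaw, vertical shift from a change in pitch, applied to a camera image before compositing). The small-angle pinhole approximation, the function name, and the focal-length parameter are the present writer's assumptions, not drawn from Perlin1, Perlin2, or Schmidt:

```python
import numpy as np

def reproject_for_latency(image, d_yaw_rad, d_pitch_rad, focal_px):
    """Shift a camera image to compensate for head rotation that
    occurred between capture and display (illustrative sketch only).

    Horizontal shift follows the change in yaw, vertical shift the
    change in pitch, using the small-angle pinhole approximation
    shift_px ~ focal_px * angle_rad.
    """
    dx = int(round(focal_px * d_yaw_rad))    # horizontal shift from yaw
    dy = int(round(focal_px * d_pitch_rad))  # vertical shift from pitch
    shifted = np.zeros_like(image)
    h, w = image.shape[:2]
    # Copy only the overlapping region; exposed borders stay black.
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_x = slice(max(0, dx), min(w, w + dx))
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_y = slice(max(0, dy), min(h, h + dy))
    shifted[dst_y, dst_x] = image[src_y, src_x]
    return shifted
```

A change in yaw of 0.02 rad with a 100-pixel focal length would, under these assumptions, shift the image two pixels horizontally before it is composited with the simulated world.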
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Schmidt with estimating a rotation of the first and second participant's head, respectively, in both yaw and pitch from a moment in time when a stereo image pair is captured by the first and second cameras, respectively, to a later moment in time when a final composited scene is displayed on the devices; respectively, the rotation is used to perform a two dimensional image shift - specifically, a horizontal shift based on a change in head yaw and a vertical shift based on a change in head pitch - on both left and right camera images of the first and second cameras before the left and right camera images of the first and second cameras are composited with the simulated world, so that they appear in a correct direction with respect to the observing participant in a final composited and displayed stereo image in the simulated world of the observing participant, as taught by Perlin2. The motivation for doing so is to give a true stereoscopic view of simulated objects. Schmidt discloses other participants and non-green objects in the structure (column 6 line 40 - in a specific live action capture system, cameras 606(1) and 606(2) capture the scene; there might be other sensor(s) 608 that capture information from the live scene (e.g., infrared cameras, infrared sensors, motion capture (“mo-cap”) detectors, etc.). On the stage 604, there might be human actors, animal actors, inanimate objects, background objects, and possibly an object such as a green screen 610 that is designed to be captured in a live scene recording in such a way that it is easily overlaid with computer-generated imagery for display.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 with other participants and non-green objects in the structure, as taught by Schmidt. The motivation for doing so is to easily generate imagery.

Regarding claim 7, Perlin1 in view of Perlin2 and Schmidt disclose all the limitations of claim 6. Perlin1 discloses including rows of chairs and the first and second participants are each positioned to sit in one of the chairs so the first and second participants see each other and share a consistent VR experience (As shown in Fig. 7, rows of chairs are provided and the first and second participants are each positioned to sit in one of the chairs. [0020], [0069] - The present invention pertains to a system 10 for a synchronized shared mixed reality for co-located participants 111 of an audience 56 wearing VR headsets, in a location such as an auditorium 12, as shown in FIGS. 1-4, 7 and 8.).

Regarding claim 8, Perlin1 in view of Perlin2 and Schmidt disclose all the limitations of claim 6. Perlin1 discloses including at least a first [[table and a first chair and a second chair]] positioned about the first table and the first and second participants sit at the first and second chairs, respectively, about the first [[table]] and share a consistent VR experience (As shown in Fig. 7, rows of chairs are provided and the first and second participants are each positioned to sit in one of the chairs. [0020], [0069] - The present invention pertains to a system 10 for a synchronized shared mixed reality for co-located participants 111 of an audience 56 wearing VR headsets, in a location such as an auditorium 12, as shown in FIGS. 1-4, 7 and 8.).
Perlin1 in view of Perlin2 disclose the following: a first table and a first chair and a second chair positioned about the first table, and the first and second participants sit at the first and second chairs, respectively, about the first table and share a consistent VR experience (Perlin2 discloses [0091] - This display platform can be used for teleconferencing. With a truly non-invasive stereoscopic display, two people having a video conversation can perceive the other as though looking across a table. Each person's image is transmitted to the other via a video camera that also captures depth, “share”. Perlin1 discloses Fig. 7, Fig. 8, [0051] - Users sit down in their seats in the auditorium 12, and each user puts on a headset 14. All users then see and hear pre-recorded content with the sensory illusion that said content is being performed live by actors who are physically present in the room, such as would be the case for a live performance by actors on a stage.).

Regarding claim 9, Perlin1 discloses a method for viewing in a structure having a first participant and at least a second participant comprising the steps of ([0022] - FIG. 6. The system 10 may include a computer processor associated with each headset 14 which performs a sensor 22 fusion computation from the infrared tracking data captured by the infrared camera 28 and an IMU 8. The computer processor may use the infrared tracking data captured by the infrared camera 28 and the IMU 8 to generate the display image 20 on the display screen 18 of each headset 14, as well as correct spatial audio, so that the co-located participants 111 both see and hear the display image 20 from the co-located participants' position and orientation in the auditorium 12. The computer microprocessor 27 may produce a silhouette in the display image 20 of a co-located participant.
[0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first and second participant”): sending from a first VR headset on a first participant via a first wired connection to a first computer, associated with the first participant, position and orientation of the first VR headset ([0006], [0022] - The headset comprises an infrared camera mounted on the housing. The headset comprises a microprocessor having an internal IMU mounted to the housing which processes a location image captured by the camera to determine a position and orientation of the participant headset which is used to generate the display image on the display. [0056] The computer processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8. [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first and second VR headset”.); sending from a second VR headset on a second participant via a second wired connection to a second computer, associated with the second participant, position and orientation of the second VR headset ([0006], [0022] - The headset comprises an infrared camera mounted on the housing. The headset comprises a microprocessor having an internal IMU mounted to the housing which processes a location image captured by the camera to determine a position and orientation of the participant headset which is used to generate the display image on the display. [0056] The computer processor is contained within a SmartPhone attachable to the headset 14 which also contains the display screen 18 and can contain the IMU 8. 
[0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first and second VR headset”.); Perlin1 in view of Perlin2 disclose the following: sending left/right image pairs from a first stereo color camera of the first VR headset via the first wired connection to the first computer (Perlin2 discloses [0082] - By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots. Perlin1 discloses [0056] - The computer processor is contained within a SmartPhone attachable to the headset 14, which also contains the display screen 18 and can contain the IMU 8. The computer processor is attached to the headset 14 via a wire that can provide both power and data connection with the headset 14. It was known at the time of the effective filing date of the claimed invention that a network interface must exist between a computer processor and another device to which it is physically attached. [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first and second VR headset”); sending left/right image pairs from a second stereo color camera of the second VR headset via the second wired connection to the first computer (Perlin2 discloses [0082] - By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots. Perlin1 discloses [0056] - The computer processor is contained within a SmartPhone attachable to the headset 14, which also contains the display screen 18 and can contain the IMU 8.
The computer processor is attached to the headset 14 via a wire that can provide both power and data connection with the headset 14. It was known at the time of the effective filing date of the claimed invention that a network interface must exist between a computer processor and another device to which it is physically attached. [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first and second VR headset”. [0049] - Each of the left/right displayed images on the display screen.); compositing by the first computer the left/right image pairs from the first stereo color camera over a rendered scene wherever pixels of the left/right image pairs from the first stereo color camera are a color to create first resulting composite images (Perlin2 discloses [0074] - to produce left/right images, to interleave them to create a red/green/blue composite, “a color”. [0082] - By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots. Perlin1 discloses [0056] - The computer processor is contained within a SmartPhone attachable to the headset 14, which also contains the display screen 18 and can contain the IMU 8. The computer processor is attached to the headset 14 via a wire that can provide both power and data connection with the headset 14. It was known at the time of the effective filing date of the claimed invention that a network interface must exist between a computer processor and another device to which it is physically attached. [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14.
[0049] - Each of the left/right displayed images on the display screen.); compositing by the second computer the left/right image pairs from the second stereo color camera over the rendered scene wherever pixels of the left/right image pairs from the second stereo color camera are the color to create second resulting composite images (Perlin2 discloses [0074] - to produce left/right images, to interleave them to create a red/green/blue composite, “a color”. [0082] - By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots. Perlin1 discloses [0056] - The computer processor is contained within a SmartPhone attachable to the headset 14, which also contains the display screen 18 and can contain the IMU 8. The computer processor is attached to the headset 14 via a wire that can provide both power and data connection with the headset 14. It was known at the time of the effective filing date of the claimed invention that a network interface must exist between a computer processor and another device to which it is physically attached. [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14. [0049] - Each of the left/right displayed images on the display screen.); sending from the first computer to the first VR headset the first resulting composite images via the first wired connection to be displayed in the first VR headset (Perlin2 discloses [0074] - to produce left/right images, to interleave them to create a red/green/blue composite.
[0082] - By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots, “resulting composite images”. Perlin1 discloses [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first, second VR headset”. [0049] - Each of the left/right displayed images on the display screen.); and sending from the second computer to the second VR headset the second resulting composite images via the second wired connection to be displayed in the second VR headset (Perlin2 discloses [0074] - to produce left/right images, to interleave them to create a red/green/blue composite. [0082] - By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots, “resulting composite images”. Perlin1 discloses [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first, second VR headset”. [0049] - Each of the left/right displayed images on the display screen.).
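The compositing step recited in the claims (overlaying camera imagery on a rendered scene wherever pixels match a predesignated color) is conventional chroma keying. A minimal sketch for context only; the function name, key color, and tolerance value are illustrative assumptions, not taken from the application or the cited references:

```python
import numpy as np

def composite_over_rendered(camera_rgb, rendered_rgb,
                            key=(0, 255, 0), tol=60):
    """Chroma-key composite (illustrative sketch only).

    Wherever a camera pixel is close to the predesignated key color
    (green by default), show the rendered virtual scene instead;
    elsewhere keep the live camera pixel, so other participants and
    non-green objects remain visible. Arrays are HxWx3 uint8.
    """
    cam = camera_rgb.astype(np.int16)           # avoid uint8 wraparound
    key_arr = np.array(key, dtype=np.int16)
    # Per-pixel Manhattan distance to the key color decides the mask.
    dist = np.abs(cam - key_arr).sum(axis=-1)
    is_key = dist < tol
    out = camera_rgb.copy()
    out[is_key] = rendered_rgb[is_key]
    return out
```

Under these assumptions the same routine would run once per eye on each participant's computer, producing the left/right "resulting composite images" sent back to that participant's headset.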
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Schmidt with sending left/right image pairs from a first stereo color camera of the first VR headset via the first wired connection to the first computer; sending left/right image pairs from a second stereo color camera of the second VR headset via the second wired connection to the first computer; compositing by the first computer the left/right image pairs from the first stereo color camera over a rendered scene wherever pixels of the left/right image pairs from the first stereo color camera are a color to create first resulting composite images; compositing by the second computer the left/right image pairs from the second stereo color camera over the rendered scene wherever pixels of the left/right image pairs from the second stereo color camera are the color to create second resulting composite images; sending from the first computer to the first VR headset the first resulting composite images via the first wired connection to be displayed in the first VR headset; and sending from the second computer to the second VR headset the second resulting composite images via the second wired connection to be displayed in the second VR headset, as taught by Perlin2. The motivation for doing so is to give a true stereoscopic view of simulated objects. Schmidt discloses a computer which is adapted to be plugged into a wall outlet for power (column 9 line 40 - special-purpose computer devices may be used, such as a desktop computer system; it is known that a desktop computer gets its power through a plug in the wall.); a predesignated color (column 6 line 45 - possibly an object such as a green screen 610 that is designed to be captured in a live scene recording in such a way that it is overlaid with computer-generated imagery; thus the green color does not appear in the scene.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 with a computer which is adapted to be plugged into a wall outlet for power and coloring on at least a portion so the portion with coloring does not appear in the scene, as taught by Schmidt. The motivation for doing so is to easily generate imagery.

Regarding claim 10, Perlin1 in view of Perlin2 and Schmidt disclose all the limitations of claim 9. Perlin1 discloses using the first VR headset position and orientation and view-independent scene data to render left and right eye views of the virtual scene for the first VR headset, and the second computer using the second VR headset position and orientation and view-independent scene data to render left and right eye views of the virtual scene for the second VR headset ([0088] - the headset 14 contains two cameras, one for each eye. Each camera 28 is positioned and oriented 55 to point toward the plate 9 so that its reflection is optically superimposed upon the corresponding eye 4 of the participant 111. Infrared light from the outside world will enter through each reflecting lens 29, then will be reflected upward from the corresponding plate toward the camera 28. The effect will be that the camera 28 will be optically superimposed upon the participant's eye window, and therefore that each camera 28 will see exactly what is seen by the corresponding eye of the participant, left and right eyes, “render left and right eye views of the virtual scene”.
[0055] - The computer processor associated with this headset 14 does a sensor fusion computation from the IR pattern 39 captured by the camera 28 and the IMU 8 to compute the position and orientation of the participant, and this information is used to generate the proper image 20 on the display, as well as correct spatial audio, so that the participant both sees and hears the virtual scene from his/her correct position and orientation in the physical room. [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first and second VR headset”.).

Regarding claim 11, Perlin1 in view of Perlin2 and Schmidt disclose all the limitations of claim 10. Perlin1 discloses view-independent scene data supplied to the first computer and the second computer ([0088] - that the camera 28 will be optically superimposed upon the participant's eye window, and therefore that each camera 28 will see exactly what is seen by the corresponding eye of the participant. [0051], [0069] - Users have their own VR headset 14, “independent”). Schmidt discloses the step of streaming the view (SUMMARY, column 11 line 15 - digital data streams representing various types of information for display.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 and Schmidt with the step of streaming the view as taught by Schmidt. The motivation for doing so is to easily generate imagery.

Regarding claim 12, Perlin1 in view of Perlin2 and Schmidt disclose all the limitations of claim 11.
Perlin1 discloses including the step of the first VR headset determining the first VR headset's own position and orientation via inside-out tracking, and the second VR headset determining the second VR headset's own position and orientation via inside-out tracking ([0054] - To perform inside-out tracking, an IMU 8 (which can be the IMU 8 of the processor of a SmartPhone) and a forward-facing IR camera 28 are both contained within the headset 14. [0055] - The computer processor associated with this headset 14 does a sensor fusion computation from the IR pattern 39 captured by the camera 28 and the IMU 8 to compute the position and orientation of the participant, and this information is used to generate the proper image 20 on the display, as well as correct spatial audio, so that the participant both sees and hears the virtual scene from his/her correct position and orientation in the physical room. [0051], [0069] - Users have their own VR headset 14, “first, second VR headset”).

Regarding claim 13, Perlin1 discloses a method for viewing in a structure having a first participant and at least a second participant comprising the steps of ([0022] - FIG. 6. The system 10 may include a computer processor associated with each headset 14 which performs a sensor 22 fusion computation from the infrared tracking data captured by the infrared camera 28 and an IMU 8. The computer processor may use the infrared tracking data captured by the infrared camera 28 and the IMU 8 to generate the display image 20 on the display screen 18 of each headset 14, as well as correct spatial audio, so that the co-located participants 111 both see and hear the display image 20 from the co-located participants' position and orientation in the auditorium 12. The computer microprocessor 27 may produce a silhouette in the display image 20 of a co-located participant.
[0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first and second participant”): providing view-independent scene data to each computer of a plurality of computers of the first and second participants ([0051], [0069] - Users have their own VR headset 14, “independent”. [0055] - The computer processor associated with this headset 14 does a sensor fusion computation from the IR pattern 39 captured by the camera 28 and the IMU 8 to compute the position and orientation of the participant, and this information is used to generate the proper image 20 on the display, as well as correct spatial audio, so that the participant both sees and hears the virtual scene from his/her correct position and orientation in the physical room. [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first and second participant”); determining by each VR headset of a plurality of headsets each VR headset's own position and orientation via inside-out tracking ([0054] - To perform inside-out tracking, an IMU 8 (which can be the IMU 8 of the processor of a SmartPhone) and a forward-facing IR camera 28 are both contained within the headset 14. [0055] - The computer processor associated with this headset 14 does a sensor fusion computation from the IR pattern 39 captured by the camera 28 and the IMU 8 to compute the position and orientation of the participant, and this information is used to generate the proper image 20 on the display, as well as correct spatial audio, so that the participant both sees and hears the virtual scene from his/her correct position and orientation in the physical room. [0051], [0069] - Users have their own VR headset 14, “first, second VR headset”); sending position and orientation of each VR headset via a wired data connection to each participant's computer ([0006], [0022] - The headset comprises an infrared camera mounted on the housing.
The headset comprises a microprocessor having an internal IMU mounted to the housing which processes a location image captured by the camera to determine a position and orientation of the participant headset, which is used to generate the display image on the display. [0056] - The computer processor is contained within a SmartPhone attachable to the headset 14, which also contains the display screen 18 and can contain the IMU 8. [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first and second VR headset”.); each computer using the position and orientation and view-independent scene data to render left and right eye views of a virtual scene ([0088] - the headset 14 contains two cameras, one for each eye. Each camera 28 is positioned and oriented 55 to point toward the plate 9 so that its reflection is optically superimposed upon the corresponding eye 4 of the participant 111. Infrared light from the outside world will enter through each reflecting lens 29, then will be reflected upward from the corresponding plate toward the camera 28. The effect will be that the camera 28 will be optically superimposed upon the participant's eye window, and therefore that each camera 28 will see exactly what is seen by the corresponding eye of the participant, left and right eyes, “render left and right eye views of the virtual scene”.); Perlin1 in view of Perlin2 disclose sending via the wired connection to the computer of each participant, left/right image pairs from a stereo color camera of each VR headset of each participant (Perlin2 discloses [0082] - By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots.
Perlin1 discloses [0056] - The computer processor is contained within a SmartPhone attachable to the headset 14, which also contains the display screen 18 and can contain the IMU 8. The computer processor is attached to the headset 14 via a wire that can provide both power and data connection with the headset 14. It was known at the time of the effective filing date of the claimed invention that a network interface must exist between a computer processor and another device to which it is physically attached. [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first and second VR headset, each participant”); each computer compositing the left/right image pairs over a rendered scene wherever camera pixels are green (Perlin2 discloses [0074] - to produce left/right images, to interleave them to create a red/green/blue composite. [0082] - By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots.); and sending resulting composite images from each computer to each associated VR headset via the wired data connection to be displayed in the associated VR headset (Perlin2 discloses [0074] - to produce left/right images, to interleave them to create a red/green/blue composite. [0082] - By placing two such light/camera mechanisms side-by-side, and switching them on during opposite fields (left light on during the even fields, and right light on during the odd fields), the system is able to simultaneously capture two parallax displaced images of the glowing eye spots, “resulting composite images”. Perlin1 discloses [0051], [0069] - Users sit down in their seats in the auditorium 12, and each user puts on a VR headset 14, “first, second VR headset”.
[0049] - Each of the left/right displayed images on the display screen). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 with sending via the wired connection to the computer of each participant, left/right image pairs from a stereo color camera of each VR headset of each participant; each computer compositing the left/right image pairs over a rendered scene wherever camera pixels are green; and sending resulting composite images from each computer to each associated VR headset via the wired data connection to be displayed in the associated VR headset, as taught by Perlin2. The motivation for doing so is to give a true stereoscopic view of simulated objects. Perlin1 in view of Perlin2 do not disclose the following; however, Schmidt discloses each computer is adapted to be plugged into a wall outlet for power (column 9 line 40 - special-purpose computer devices may be used, such as a desktop computer system; it is known that a desktop computer gets its power through a plug in the wall.); and the step of streaming the view (SUMMARY, column 11 line 15 - digital data streams representing various types of information for display.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 with each computer is adapted to be plugged into a wall outlet for power and the step of streaming the view, as taught by Schmidt. The motivation for doing so is to easily generate imagery.

Response to Arguments

Examiner suggests amending a specific element in the claim so that, when reading the claim in light of the invention, it is directed to a unique technology. The examiner can be reached at 571-270-0724 for further discussion. In response to the argument under 35 U.S.C.
101 Applicant asserts “The Examiner has raised a double patenting rejection in regard to claims 1-12 and U.S. patent application 12/120,287. In view of the amendments to the claims, applicant respectfully traverses this rejection. US patent application 12,120,287 does not teach, or suggest or, or have anything in the claims which covers the limitation °each computer is adapted to be plugged into a wall outlet for power" as now found in the claims of the above-identified patent application. Applicant requests the Examiner rescinded the double patenting rejection” Examiner disagrees. The amended claims are not patentably distinct form the applicant. US Patent 12,120,287 B2 does not explicitly disclose the limitations of claims 1 and 9: “computer which is adapted to be plugged into a wall outlet for power”. However, the features are well known in the art, as evidenced by Schmidt. In the same field of endeavor, Schmidt discloses: computer which is adapted to be plugged into a wall outlet for power (column 9 line 40 - special-purpose computer devices may be used such as desktop computer system. It is known that a desktop computer gets its power through a plug in the wall.) Therefore, it would have been obvious to one of ordinary skill Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 with computer which is adapted to be plugged into a wall outlet for power as taught by Schmidt. The motivation for doing is to easily generated imagery. US Patent 12,120,287 B2 does not explicitly disclose the limitations of claim 13: “each computer is adapted to be plugged into a wall outlet for power;”. However, the features are well known in the art, as evidenced by Schmidt. 
In the same field of endeavor, Schmidt discloses: each computer is adapted to be plugged into a wall outlet for power (column 9 line 40 - special-purpose computer devices may be used such as desktop computer system. It is known that a desktop computer gets its power through a plug in the wall.) Therefore, it would have been obvious to one of ordinary skill Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 with each computer is adapted to be plugged into a wall outlet for power as taught by Schmidt. The motivation for doing is to easily generated imagery. Claim Rejection Under 35 U.S.C. 103 Applicant asserts “Claim 1 now has the limitation of “each computer is adapted to be plugged into a wall outlet for power." Perlin 1 teaches the headset 14 comprises a microprocessor 27 having an internal IMU & mounted to the housing 30. See paragraph 19 and figure 6. There is no teaching or suggestion of any computer which is adapted to be plugged into a wall outlet for power” Examiner disagrees. Schmidt discloses each computer is adapted to be plugged into a wall outlet for power (column 9 line 40 - special-purpose computer devices may be used such as desktop computer system. It is known that a desktop computer gets its power through a plug in the wall.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Perlin1 in view of Perlin2 with each computer is adapted to be plugged into a wall outlet for power; computer which is adapted to be plugged into a wall outlet for power; step of streaming view as taught by Schmidt. The motivation for doing is to easily generated imagery. Regarding claims 2 – 8, and 10 – 12, the Applicant asserts that they are not obvious over based on their dependency from independent claims 1, and 9 respectively. 
The examiner cannot concur with the Applicant respectfully from same reason noted in the examiner’s response to argument asserted from claims 1, and 9 respectively. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ming Wu whose telephone number is (571) 270-0724. The examiner can normally be reached on Monday - Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Ming Wu/ Primary Examiner, Art Unit 2616
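The compositing limitation at the center of the §103 mapping, replacing green camera pixels with a rendered scene before display, is in substance chroma-key compositing. A minimal sketch follows; this is illustrative only, not from the record, and the function name, the simple green-dominance keying rule, and the threshold value are all assumptions:

```python
import numpy as np

def composite_over_rendered(camera_rgb: np.ndarray,
                            rendered_rgb: np.ndarray,
                            green_thresh: float = 0.4) -> np.ndarray:
    """Replace 'green' camera pixels with the rendered scene.

    camera_rgb, rendered_rgb: float arrays of shape (H, W, 3) in [0, 1].
    A pixel counts as green when its green channel exceeds the larger of
    the red and blue channels by more than green_thresh (an assumed,
    deliberately simple keying rule).
    """
    r = camera_rgb[..., 0]
    g = camera_rgb[..., 1]
    b = camera_rgb[..., 2]
    is_green = (g - np.maximum(r, b)) > green_thresh  # boolean key mask
    out = camera_rgb.copy()
    out[is_green] = rendered_rgb[is_green]  # composite scene into keyed pixels
    return out

# Tiny example: a 1x2 camera frame, left pixel pure green, right pixel red.
camera = np.array([[[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]])
scene = np.array([[[0.2, 0.2, 0.8], [0.2, 0.2, 0.8]]])
result = composite_over_rendered(camera, scene)
```

In a setup like the one claimed, this step would run once per eye on each participant's computer (left and right camera images keyed against the same rendered scene) before the composite pair is sent back to that participant's headset.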

Prosecution Timeline

Sep 21, 2023
Application Filed
Oct 29, 2025
Non-Final Rejection — §103, §DP
Jan 30, 2026
Response Filed
Feb 13, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597109
SYSTEMS AND METHODS FOR GENERATING THREE-DIMENSIONAL MODELS USING CAPTURED VIDEO
2y 5m to grant Granted Apr 07, 2026
Patent 12579702
METHOD AND SYSTEM FOR ADAPTING A DIFFUSION MODEL
2y 5m to grant Granted Mar 17, 2026
Patent 12579623
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12567185
Method and system of creating and displaying a visually distinct rendering of an ultrasound image
2y 5m to grant Granted Mar 03, 2026
Patent 12548202
TEXTURE COORDINATE COMPRESSION USING CHART PARTITION
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+23.3%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 370 resolved cases by this examiner. Grant probability derived from career allow rate.
