DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 14 March 2025, 16 June 2025, and 5 August 2025 are being considered by the examiner.
Claim Objections
Claims 23-24 are objected to because of the following informalities:
Claim 23 recites “said camera.” There is insufficient antecedent basis for this limitation in the claim. Since claim 23 depends from claim 20 (which depends from claim 1), “camera” should be changed to “disturbance sensor.”
Claim 24 recites “the camera.” There is insufficient antecedent basis for this limitation in the claim. Since claim 24 depends from claim 20 (which depends from claim 1), “camera” should be changed to “disturbance sensor.”
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-6 and 11-24 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-16 of U.S. Patent No. 12,366,746. Although the claims at issue are not identical, they are not patentably distinct from each other because the present claims are merely broader versions of the patented claims.
Below is a comparison between present claim 1 and patented claim 1:
Present claim 1: A display system for facilitating provisioning of a virtual experience of a user, the display system interacting with a display,
Patented claim 1: A display device for facilitating a virtual experience for a user having a face, the display device comprising:

Present claim 1: wherein the display is configured for displaying an image on said display based on at least one display data, wherein said user has a certain perception of said image based a spatial relationship between said display and said user,
Patented claim 1: a display for displaying an image in a position on said display based on at least one display data, wherein at least a portion of said face of said user and said display have a first spatial relationship, and wherein said at least a portion of said face and said image on said display have a second spatial relationship;

Present claim 1: and at least one disturbance sensor configured for sensing a disturbance in said spatial relationship between said display and said user comprising:
Patented claim 1: at least one disturbance sensor for measuring a change in said first spatial relationship;

Present claim 1: a processor communicatively coupled with said display device, wherein the processor is configured for:
Patented claim 1: a processor communicatively coupled with said display device and configured for:

Present claim 1: receiving a signal from said disturbance sensor corresponding to a change in said spatial relationship;
Patented claim 1: receiving a signal from said disturbance sensor corresponding to said change;

Present claim 1: based on said signal, modifying said at least one display data to modify said image on said display such that said certain perception is maintained.
Patented claim 1: based on said signal, modifying said at least one display data to modify said position of said image on said display such that said second spatial relationship remains essentially the same.
As shown above, aside from wording, the main difference between the claims is that patented claim 1 recites “measuring a change in said first spatial relationship” and “modifying said at least one display data to modify said position of said image on said display such that said second spatial relationship remains essentially the same” whereas present claim 1 recites “sensing a disturbance in said spatial relationship” and “modifying said at least one display data to modify said image on said display such that said certain perception is maintained.” Thus, the recitations in present claim 1 are merely broader than the recitations in patented claim 1. Therefore, present claim 1 is anticipated by patented claim 1.
Claims 2-6 and 11-24 are similarly rejected over claims 1-16 of U.S. Patent No. 12,366,746.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 6, 11 and 20-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Morifugi et al. (US 2015/0198808).
Regarding claim 1, Morifugi et al. disclose a display system for facilitating provisioning of a virtual experience of a user (Figure 1), the display system interacting with a display (Figure 1, display 113L and 113R), wherein the display is configured for displaying an image on said display based on at least one display data (Figure 1, left-eye and right-eye image data, which is input into 111.), wherein said user has a certain perception of said image based a spatial relationship between said display and said user (See Figures 2 and 3), and at least one disturbance sensor configured for sensing a disturbance in said spatial relationship between said display and said user (Figure 1, 103R and 103L are at least one disturbance sensor.) comprising:
a processor communicatively coupled with said display device (Figure 1, 111/116 constitute a processor), wherein the processor is configured for:
receiving a signal from said disturbance sensor corresponding to a change in said spatial relationship (Figure 1, signals from 103R and 103L to 114/115. See paragraph [0063].);
based on said signal, modifying said at least one display data to modify said image on said display such that said certain perception is maintained (Figure 1, image correcting unit 111 and Figure 2-4 and paragraphs [0064]-[0067], based on the distance of the display to the user, i.e. the spatial relationship, the display data is modified so that the certain perception is maintained.).
Regarding claim 6, this claim is rejected under the same rationale as claim 1.
Regarding claim 11, Morifugi et al. disclose the device of claim 1, wherein said display is attached to a wearable (Figure 1 and paragraph [0042], the HMD is a wearable.).
Regarding claim 20, Morifugi et al. disclose the device of claim 1, wherein said change is based on a spatial parameter of said at least a portion of said face (Morifugi et al.: Paragraph [0045], “based on” eyes of the user.).
Regarding claim 21, Morifugi et al. disclose the device of claim 20, wherein said at least one portion of said face comprises eyes of said user (Morifugi et al.: Paragraph [0045], “based on” eyes of the user.).
Regarding claim 22, Morifugi et al. disclose the device of claim 21, wherein said spatial parameter is independent of a gaze of said user (Morifugi et al.: Paragraph [0045], “based on” the eyes of the user, specifically the eyeball position rather than the gaze, and thus “independent of a gaze.”).
Regarding claim 23, Morifugi et al. disclose the device of claim 20, wherein said change is based on a displacement of said spatial parameter relative to said camera (Morifugi et al.: Paragraphs [0045] and [0048]-[0049]).
Regarding claim 24, Morifugi et al. disclose the device of claim 20, wherein said change is based on a rotation about at least one axis of said spatial parameter relative to the camera (Figures 2-3, see θ relative to the horizontal direction, which is a rotation.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-4 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Morifugi et al. (US 2015/0198808) in view of Moore et al. (US 2012/0287040).
Regarding claim 2, Morifugi et al. disclose the wearable device of claim 1.
Morifugi et al. fail to teach wherein the at least one display data comprises a rendering of at least a portion of a virtual airspace to which is mapped a position of the first vehicle and at least one other vehicle.
Moore et al. disclose a virtual airspace to which is mapped a position of a first vehicle and at least one other vehicle (Paragraph [0013]: “The image may include various combinations of information (also known as, ‘symbology’) related to weapon targeting and/or vehicle operation. Examples of weapon targeting information include friendly and hostile target tracking”; Moore’s friendly or hostile target vehicle is considered the claimed “at least one other vehicle.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the aircraft teachings of Moore et al. in the wearable device taught by Morifugi et al. such that the wearable display device of Morifugi et al. is used for a pilot. The motivation to combine would have been in order to improve and expand the functionality of the HMD (see paragraph [0001] of Moore et al.).
Regarding claim 3, Morifugi et al. and Moore et al. disclose the wearable device of claim 2, wherein the position of the first vehicle with respect to the position of the at least one other vehicle in the real world differs from that of the position of the first vehicle with respect to the position of the at least one other vehicle in the virtual airspace (In the combination, see Figure 3 and paragraph [0013] of Moore et al.: the relative positions of the real-world objects necessarily differ from those of the displayed virtual objects, since the display screen of the wearable device is small while the distances to real-world objects are large when in an airplane. As the vehicle teachings of Moore et al. are already added in claim 2, the motivation for claim 3 is the same as that for claim 2.).
Regarding claim 4, Morifugi et al. and Moore et al. disclose the wearable device of claim 2, wherein the at least one other vehicle comprises a simulated vehicle (Moore et al.: Paragraph [0013]: “The image may include various combinations of information (also known as, ‘symbology’) related to weapon targeting and/or vehicle operation. Examples of weapon targeting information include friendly and hostile target tracking”; Moore’s friendly or hostile target vehicle is considered the claimed “at least one other vehicle,” which would be simulated. As the vehicle teachings of Moore et al. are already added in claim 2, the motivation for claim 4 is the same as that for claim 2.).
Regarding claim 12, Morifugi et al. disclose the device of claim 11.
Morifugi et al. fail to teach wherein said wearable is a helmet.
Moore et al. disclose wherein a wearable is a helmet (Figure 1).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the aircraft environment and helmet teachings of Moore et al. in the wearable device taught by Morifugi et al. such that the wearable display device of Morifugi et al. is a helmet and used for a pilot. The motivation to combine would have been in order to improve and expand the functionality of the HMD (see paragraph [0001] of Moore et al.).
Regarding claim 13, Morifugi et al. and Moore et al. disclose the device of claim 12, wherein said display is a see-through display (Moore et al.: Figure 1. As the helmet teachings of Moore et al. are already added in claim 12, the motivation for claim 13 is the same as that for claim 12.).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Morifugi et al. (US 2015/0198808) in view of Lee et al. (US 2016/0196018).
Regarding claim 5, Morifugi et al. disclose the wearable device of claim 1.
Morifugi et al. fail to teach wherein receiving the at least one display data further comprises receiving authentication data.
Lee et al. disclose wherein receiving at least one display data comprises receiving authentication data (Paragraphs [0130] and [0170]: the cursor 320 may be located in any one of the plurality of items in the item list 310. When the cursor 320 is located in any one of the plurality of items, for example, an item 315, the item 315 may be highlighted. The selected item 315 being highlighted corresponds to the claimed authentication data.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the highlight teachings of Lee et al. in the wearable device taught by Morifugi et al. The motivation to combine would have been in order to allow the user to visually confirm the selection or intended functionality, thus preventing any erroneous or unintended actions.
Claims 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Morifugi et al. (US 2015/0198808) in view of Lewis et al. (US 2013/0050070).
Regarding claim 14, Morifugi et al. disclose the device of claim 1.
Morifugi et al. fail to explicitly teach wherein said at least one disturbance sensor comprises a camera.
Lewis et al. disclose wherein at least one disturbance sensor comprises a camera (Paragraph [0057]: at least one sensor 134 is an IR camera).
Thus, Morifugi et al. and Lewis et al. each disclose IR sensors for detecting a user’s eyes. A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the IR cameras of Lewis et al. could have been substituted for the IR sensors of Morifugi et al. because both are IR-based sensors that perform the same function of detecting the user’s eyes. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of providing detection of the user’s eyes.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the IR cameras of Lewis et al. for the IR sensors of Morifugi et al. according to known methods to yield the predictable result of providing detection of the user’s eyes.
Regarding claim 15, Morifugi et al. and Lewis et al. disclose the device of claim 14, wherein said camera is connected to said display (Figure 1 of Morifugi et al. [in the combination]).
Regarding claim 16, Morifugi et al. and Lewis et al. disclose the device of claim 14, wherein said camera is configured to capture multiple facial images of said at least a portion of a face of said user to determine said change (Figure 1 of Morifugi et al. and Figure 1E of Lewis et al. in combination teach that there is an IR camera for each eye and thus multiple facial images [of the eyes] are used.).
Regarding claim 17, Morifugi et al. and Lewis et al. disclose the device of claim 16, wherein said change is determined by applying at least one image transform to said facial images (Lewis et al.: Figure 11 and paragraph [0139]: “…the gaze detection coordinate system is treated as an auxiliary coordinate system for which a rotation matrix Ri can transform points between the auxiliary coordinate systems for each plane and a single world coordinate system such as the third coordinate system which relates the position of the detection area 139 to the illuminators 153…”).
Regarding claim 18, Morifugi et al. Lewis et al. disclose the device of claim 16, further comprising a calibration input configured to cause a reference facial image to be captured from which said change is determined (Lewis et al.: Paragraph [0103]: “In many embodiments, the optical axis is determined and a small correction determined through user calibration is applied to obtain the visual axis which is selected as the gaze vector. For each user, a small virtual object may be displayed by the display device at each of a number of predetermined positions at different horizontal and vertical positions. An optical axis may be computed for during display of the object at each position, and a ray modeled as extending from the position into the user eye. An offset angle with horizontal and vertical components may be determined based on how the optical axis must be moved to align with the modeled ray. From the different positions, an average offset angle with horizontal or vertical components can be selected as the small correction to be applied to each computed optical axis. In some embodiments, only a horizontal component is used for the offset angle correction.” This, in combination with Figure 2 of Morifugi et al., which shows the inclination of 0, allows for calibration input configured to cause a reference facial image to be captured from which said change is determined.).
Regarding claim 19, Morifugi et al. and Lewis et al. disclose the device of claim 16, wherein said camera captures a facial image upon detection of a disturbance (In the combination, the cameras capture facial images [of the eyes] that are used to correct for a “disturbance,” and thus the images are captured “upon detection of a disturbance.” See, for example, Figures 2 and 4(a) versus Figures 3 and 4(b)-(c) of Morifugi et al., where Figure 2 shows no disturbance, Figure 3 shows a disturbance during which the images are captured, and Figures 4(b)-(c) show the corrections applied.).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN G SHERMAN whose telephone number is (571)272-2941. The examiner can normally be reached Monday-Friday, 8:00 am - 4:00 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AMR AWAD can be reached at (571)272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEPHEN G SHERMAN/Primary Examiner, Art Unit 2621
29 January 2026