DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 5 December 2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1-10, 12-15 and 18-24 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
With respect to the Double Patenting rejection, on page 8 of the response filed 5 December 2025, the Applicant requests that the rejection be held in abeyance at least until the pending claims are in condition for allowance. Because the rejection has not been overcome, it is maintained.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-10, 12-15 and 18-24 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-13 of U.S. Patent No. 12,181,670. Although the claims at issue are not identical, they are not patentably distinct from each other because the present claims are merely broader versions of the patented claims and therefore are anticipated by the patented claims.
Below is a comparison between present claim 1 and patented claim 5:
Present claim 1: An electronic device comprising:
Patented claim 5: An image generation device comprising:

Present claim 1: circuitry configured to construct a virtual space to be displayed on a head-mounted display;
Patented claim 5: circuitry configured to construct a virtual world to be displayed on a head-mounted display;

Present claim 1: generate, based on a position and a posture of a head of a player wearing the head-mounted display, a display image representing the virtual space in a field of view corresponding to a point of view of the player, and cause the head-mounted display to display the generated display image;
Patented claim 5: generate, based on a position and a posture of a head of a player wearing the head-mounted display, a display image representing the virtual world in a field of view corresponding to a point of view of the player, and cause the head-mounted display to display the generated display image;

Present claim 1: acquire information regarding movement of a real-world object other than the player in a real-world space where the player is located; and
Patented claim 5: acquire information regarding presence of an obstacle which is not related to the player in a space where the player is located; and

Present claim 1: cause the head-mounted display to display an object in the virtual world corresponding to the movement of the real-world object.
Patented claim 5: cause the head-mounted display to display a first object in the virtual world corresponding to the obstacle while the obstacle is present within the space.

Patented claim 5 (no counterpart in present claim 1): wherein the circuitry is configured to: predict a direction of travel of the obstacle; cause the head-mounted display to display a second object moving in addition to the first object in accordance with a movement of the obstacle as the object indicating the presence of the obstacle; and label the second object with the direction of travel.
As shown above, the main difference between present claim 1 and patented claim 5 is that present claim 1 recites a “real-world object other than the player” whereas patented claim 5 recites an “obstacle,” and “real-world object other than the player” is merely a broader recitation of “obstacle.” Thus, present claim 1 is merely a broader version of patented claim 5 and is therefore anticipated by patented claim 5.
Claims 2-10, 12-15 and 18-24 are similarly rejected over claims 1-13 of U.S. Patent No. 12,181,670.
Claims 1-10, 12-15 and 18-24 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12 of U.S. Patent No. 11,714,281. Although the claims at issue are not identical, they are not patentably distinct from each other because the present claims are merely broader versions of the patented claims and therefore are anticipated by the patented claims.
Below is a comparison between present claim 1 and patented claim 5:
Present claim 1: An electronic device comprising:
Patented claim 5: An image generation device comprising:

Present claim 1: circuitry configured to construct a virtual space to be displayed on a head-mounted display;
Patented claim 5: a space construction section that constructs a virtual world to be displayed on a head-mounted display;

Present claim 1: generate, based on a position and a posture of a head of a player wearing the head-mounted display, a display image representing the virtual space in a field of view corresponding to a point of view of the player, and cause the head-mounted display to display the generated display image;
Patented claim 5: an image generation section that generates, based on a position and a posture of a head of a player wearing the head-mounted display, a display image representing the virtual world in a field of view corresponding to a point of view of the player, and causes the head-mounted display to display the generated display image; and

Present claim 1: acquire information regarding movement of a real-world object other than the player in a real-world space where the player is located; and
Patented claim 5: an obstacle information acquisition section that acquires information regarding presence of an obstacle which is not related to the player in a space where the player is located, wherein, only while the obstacle is present within the space, the space construction section displays a first object in the virtual world corresponding to the obstacle, and

Present claim 1: cause the head-mounted display to display an object in the virtual world corresponding to the movement of the real-world object.
Patented claim 5: wherein the object is displayed at a position in the virtual world corresponding to a location of the obstacle in the space.

Patented claim 5 (no counterpart in present claim 1): wherein the obstacle information acquisition section predicts a direction of travel of the obstacle, and the space construction section displays a second object moving in addition to the first object in accordance with a movement of the obstacle as the object indicating the presence of the obstacle, and labels the second object with the direction of travel.
As shown above, the main difference between present claim 1 and patented claim 5 is that present claim 1 recites a “real-world object other than the player” whereas patented claim 5 recites an “obstacle,” and “real-world object other than the player” is merely a broader recitation of “obstacle.” Thus, present claim 1 is merely a broader version of patented claim 5 and is therefore anticipated by patented claim 5.
Claims 2-10, 12-15 and 18-24 are similarly rejected over claims 1-12 of U.S. Patent No. 11,714,281.
Claims 1-10, 12-15 and 18-24 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-10 of U.S. Patent No. 11,460,697. Although the claims at issue are not identical, they are not patentably distinct from each other because the present claims are merely broader versions of the patented claims and therefore are anticipated by the patented claims.
Below is a comparison between present claim 1 and patented claim 3:
Present claim 1: An electronic device comprising:
Patented claim 3: An image generation device comprising:

Present claim 1: circuitry configured to construct a virtual space to be displayed on a head-mounted display;
Patented claim 3: a space construction section that constructs a virtual world to be displayed on a head-mounted display;

Present claim 1: generate, based on a position and a posture of a head of a player wearing the head-mounted display, a display image representing the virtual space in a field of view corresponding to a point of view of the player, and cause the head-mounted display to display the generated display image;
Patented claim 3: an image generation section that generates, based on a position and a posture of a head of a player wearing the head-mounted display, a display image representing the virtual world in a field of view corresponding to a point of view of the player, and causes the head-mounted display to display the generated display image; and

Present claim 1: acquire information regarding movement of a real-world object other than the player in a real-world space where the player is located; and
Patented claim 3: a visitor information acquisition section that acquires information regarding presence of a visitor without a head-mounted display in a space where the player and the visitor are able to move, wherein, only while the visitor is present within the space, the space construction section displays a first object in the virtual world corresponding to the visitor,

Present claim 1: cause the head-mounted display to display an object in the virtual world corresponding to the movement of the real-world object.
Patented claim 3: wherein the object is displayed at a position in the virtual world corresponding to a location of the visitor in the space.

Patented claim 3 (no counterpart in present claim 1): wherein the visitor information acquisition section predicts a direction of travel of the visitor, and the space construction section displays a second object moving in addition to the first object in accordance with a movement of the visitor as the object indicating the presence of the visitor, and labels the second object with the direction of travel.
As shown above, the main difference between present claim 1 and patented claim 3 is that present claim 1 recites a “real-world object other than the player” whereas patented claim 3 recites a “visitor,” and “real-world object other than the player” is merely a broader recitation of “visitor.” Thus, present claim 1 is merely a broader version of patented claim 3 and is therefore anticipated by patented claim 3.
Claims 2-10, 12-15 and 18-24 are similarly rejected over claims 1-10 of U.S. Patent No. 11,460,697.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-10, 12-15 and 18-24 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Holz et al. (US 2020/0294311).
Regarding claim 1, Holz et al. disclose an electronic device (Figure 6) comprising:
circuitry (Figures 6-8) configured to
construct a virtual space to be displayed on a head-mounted display (Figure 8 and paragraph [0065]. See also Figure 4.);
generate, based on a position and a posture of a head of a player wearing the head-mounted display, a display image representing the virtual space in a field of view corresponding to a point of view of the player, and cause the head-mounted display to display the generated display image (Figure 8, HMD position and orientation is input into 820, “Virtual World mapping and update” section generates a display image representing the virtual space in a field of view corresponding to a point of view of the player, and causes the head-mounted display to display the generated display image. See paragraphs [0068]-[0069]. See also Figure 4.);
acquire information regarding movement of a real-world object other than the player in a real-world space where the player is located (Figure 8, 825/850 and paragraphs [0066] and [0069]. Specifically, see Figure 4, which illustrates an outdoor environment in which the “obstacles” are real-world objects that have movement [people walking, for example]); and
cause the head-mounted display to display an object in the virtual world corresponding to the movement of the real-world object (Figure 8, 855 and paragraph [0069]. Specifically, see Figure 4, which illustrates an outdoor environment in which the “obstacles” are real-world objects that have movement [people walking, for example] and that these people are generated in the virtual environment, see paragraphs [0037]-[0039].).
Regarding claim 2, Holz et al. disclose the electronic device of claim 1, further comprising:
a sensor configured to detect the movement of the real-world object in the real-world space where the player is located (Figure 7, VR roam tracking device 710; see paragraph [0066].).
Regarding claim 3, Holz et al. disclose the electronic device of claim 1, wherein the circuitry is configured to cause the head-mounted display to display the object at a position in the virtual world corresponding to a location of the real-world object in the real-world space (Figure 4).
Regarding claim 4, Holz et al. disclose the electronic device of claim 3, wherein the circuitry is configured to cause the head-mounted display to change the position of the object displayed in the virtual world as the real-world object moves in the real-world space (Figure 4 and paragraph [0039].).
Regarding claim 5, Holz et al. disclose the electronic device of claim 1, wherein the object is a graphic representation corresponding to the real-world object (Figure 4).
Regarding claim 6, Holz et al. disclose the electronic device of claim 2, wherein the circuitry is configured to determine whether the real-world object is within a predetermined distance of the head-mounted display (See the end of paragraph [0045]: “However, it is contemplated that a variety of sensors, including outside-in sensors, can be employed to facilitate the determination of, among other things, distances (e.g., relative to the HMD 610) or other characteristics of physical objects within corresponding tracking area(s) of the environmental sensor(s) 620b.” Thus, for objects to be within the “tracking area(s),” they must be within “a predetermined distance,” i.e., the distance of the tracking area(s).).
Regarding claim 7, Holz et al. disclose the electronic device of claim 6, wherein the circuitry is configured to determine that the real-world object is within the real-world space where the player is located in a case that the real-world object is less than the predetermined distance from the head-mounted display (Figures 4-5, for example, along with paragraph [0045], if the objects are able to be detected by the sensors and displayed, then the objects are less than the predetermined distance from the head-mounted display [If they were more than the predetermined distance they would be outside the tracking area(s) and wouldn’t be detected.].).
Regarding claim 8, Holz et al. disclose the electronic device of claim 6, wherein the circuitry is configured to determine that the real-world object is outside the real-world space where the player is located in a case that the real-world object is more than the predetermined distance from the head-mounted display (See the explanation of claim 7 above: Figures 4-5, for example, along with paragraph [0045], if the objects are able to be detected by the sensors and displayed, then the objects are less than the predetermined distance from the head-mounted display. If they were more than the predetermined distance they would be outside the tracking area(s) and wouldn’t be detected.).
Regarding claim 9, this claim is rejected under the same rationale as claim 6.
Regarding claim 10, this claim is rejected under the same rationale as claim 7.
Regarding claim 12, please refer to the rejections of claims 1-2 and 6 [the “predetermined condition” in claim 12 is the “predetermined distance” of claim 6]. Furthermore, Holz et al. disclose that the head-mounted display (Figure 6) comprises: a display (paragraph [0043]); a sensor (paragraphs [0043] and [0045]); and circuitry (Figures 6-8; see also paragraphs [0048]-[0049]).
Regarding claim 13, this claim is rejected under the same rationale as claim 3.
Regarding claim 14, this claim is rejected under the same rationale as claim 6.
Regarding claim 15, this claim is rejected under the same rationale as claim 8.
Regarding claim 18, this claim is rejected under the same rationale as claim 12.
Regarding claim 19, this claim is rejected under the same rationale as claim 14.
Regarding claim 20, this claim is rejected under the same rationale as claim 15.
Regarding claim 21, this claim is rejected under the same rationale as claim 1.
Regarding claim 22, this claim is rejected under the same rationale as claims 2-3.
Regarding claim 23, this claim is rejected under the same rationale as claim 2.
Regarding claim 24, this claim is rejected under the same rationale as claim 22.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN G SHERMAN whose telephone number is (571)272-2941. The examiner can normally be reached Monday - Friday, 8:00 am - 4:00 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AMR AWAD, can be reached at (571)272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEPHEN G SHERMAN/Primary Examiner, Art Unit 2621
9 February 2026