DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02 January 2026 has been entered.
Response to Arguments
Applicant’s arguments have been fully considered but they are moot in view of the new grounds of rejection presented in this Office Action.
Claim Objections
The claims are objected to because of the following informalities: Claim 12 recites “an interactive display screen” in line 4, but it recites “the interactive touchscreen display screen” in line 9. The first recitation should be amended to recite “an interactive touchscreen display screen” for consistency. Claim 20 contains the same informality and should be amended in the same way. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 10-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hui et al. (US 2019/0295312; hereinafter “Hui”).
Regarding claim 1, Hui discloses A system for generating visual content (“augmented reality projection of backgrounds for filmmaking,” para. 7), the system comprising: a video camera (e.g. camera 510 of Fig. 5); an interactive touchscreen display screen configured to display visual content (“a touchscreen sensor 739 [of Fig. 7] may be integrated into or make up a part of the display … it may be capacitive or resistive touchscreen,” para. 74), wherein the interactive touchscreen display screen is within a field of view of the video camera; a user viewing the interactive touchscreen display screen is at least partially within the field of view of the video camera (“the position of the camera is detected at 1230 [of Fig. 12]. This is the camera that is filming the display as a background with the human superimposed in-between,” para. 115); the user viewing the interactive touchscreen display screen can provide input to the interactive touchscreen display screen by means of a touch input to the touchscreen display screen (“track interactions using ‘touch’ sensors,” para. 109); and one or more processors (e.g. processor 210 of Fig. 2) configured to: receive video camera position and/or orientation data indicative of a viewpoint of the video camera (“the position of the camera is detected at 1230 [of Fig. 12],” para. 115); generate, for display on the interactive touchscreen display screen, a three-dimensional projection image based on the video camera position and/or orientation data, such that the three-dimensional projection image shows a three-dimensional content item from the viewpoint of the video camera (“the position of the camera is calculated so that the scene shown on the display may be rendered appropriately,” para. 115; “the three-dimensional scene is shown,” para. 117; “present a three-dimensional world to a viewer on a two-dimensional display,” para. 58); and adjust the display of the three-dimensional projection image displayed on the interactive touchscreen display screen based on the input provided to the interactive touchscreen display screen by the user viewing the interactive touchscreen display screen (“Using this touchscreen sensor 739 [of Fig. 7], interactions with the images shown on the display 730 may be enabled,” para. 74; “track interactions using ‘touch’ sensors … This processing may be to update the scene,” para. 109).
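For illustration of the mapped functionality only, the following is a minimal sketch, not taken from Hui, of how a three-dimensional content item could be projected into a display image from a tracked camera viewpoint and then adjusted in response to a touch input; all function names, parameters, and numeric values are hypothetical.

    # Illustrative sketch only (not Hui's implementation); all names are hypothetical.
    # Projects a 3D content item into display pixel coordinates from a tracked camera
    # viewpoint, then adjusts the content in response to a touch input.
    import numpy as np

    def look_at(cam_pos, target, up=np.array([0.0, 0.0, 1.0])):
        """Build a world-to-camera rotation from the camera position and a look target."""
        forward = target - cam_pos
        forward = forward / np.linalg.norm(forward)
        right = np.cross(forward, up)
        right = right / np.linalg.norm(right)
        true_up = np.cross(right, forward)
        # Rows are the camera axes expressed in world coordinates (camera looks down -z).
        return np.vstack([right, true_up, -forward])

    def project_point(point_world, cam_pos, rotation, focal_px=1000.0, cx=960.0, cy=540.0):
        """Pinhole projection of a world-space point into display pixel coordinates."""
        p_cam = rotation @ (point_world - cam_pos)
        if p_cam[2] >= 0:          # point is behind the camera; not visible
            return None
        u = focal_px * p_cam[0] / -p_cam[2] + cx
        v = focal_px * p_cam[1] / -p_cam[2] + cy
        return u, v

    def on_touch(scene_points, touch_xy, cx=960.0, step=0.1):
        """Example touch handler: nudge the content item toward the touched side of the screen."""
        direction = 1.0 if touch_xy[0] >= cx else -1.0
        return [p + np.array([direction * step, 0.0, 0.0]) for p in scene_points]

    # Example: a single content point rendered from a hypothetical tracked camera pose.
    camera_position = np.array([2.0, -3.0, 1.5])
    R = look_at(camera_position, target=np.array([0.0, 0.0, 1.0]))
    content = [np.array([0.0, 0.0, 1.0])]
    print(project_point(content[0], camera_position, R))
    content = on_touch(content, touch_xy=(640, 360))
    print(project_point(content[0], camera_position, R))   # re-rendered after the touch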
Hui discloses two embodiments: 1) Figs. 5-6 and associated paragraphs, where a moving video camera 510/610 is tracked in order to update the 3D rendering displayed on the display; and 2) Fig. 7 and associated paragraphs, where a fixed camera 710 tracks a moving human in order to update the 3D rendering displayed on the display, which includes a touchscreen. Since the first embodiment illustrated in Figs. 5-6 does not specifically illustrate a touchscreen, and the second embodiment illustrated in Fig. 7 (with a touchscreen) does not specifically illustrate tracking a moving video camera, Hui does not explicitly disclose both the touchscreen features and the video camera position tracking features in a single embodiment. Therefore, an obviousness rationale is used in this rejection.
Fig. 12 of Hui illustrates both human tracking 1220 and video camera tracking 1230 in a single embodiment. Hui describes, in reference to Fig. 12, “there may be multiple cameras and/or trackers. So, the camera tracking the human, if it is fixed relative to a display, may not require any calibration. However, for the camera filming the scene with the display as the active background, calibration may be required” (para. 113). This suggests combining the features of Figs. 5-6 (“the camera filming the scene with the display as the active background”) with the features of Fig. 7 (“the camera tracking the human,” including a touchscreen).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to combine the moving camera tracking features of Hui (Figs. 5-6) with the human tracking (including a touchscreen) features of Hui (Fig. 7), especially since Fig. 12 illustrates both moving camera tracking and human tracking. The motivation would have been to increase user efficiency by allowing for more intuitive interaction and a more realistic display of 3D rendered graphical objects.
Regarding claim 2, Hui renders obvious determining relative position and/or orientation data indicative of a position and/or an orientation of the video camera relative to a position and/or an orientation of the interactive touchscreen display screen (“calculating the position of the camera, relative to the display,” para. 37).
Regarding claim 3, Hui renders obvious a video camera tracking module configured to track a position and/or an orientation of the video camera, wherein the one or more processors receive the video camera position and/or orientation data from the video camera tracking module (“camera 110 [of Fig. 1] may be tracked … The workstation 120 is a computing device, discussed below with reference to FIG. 2, that is responsible for calculating the position of the camera, relative to the display 130, using the trackers,” paras. 36-37).
Regarding claim 4, Hui renders obvious wherein the viewpoint is a first viewpoint and the video camera position and/or orientation data is first video camera position and/or orientation data (camera 510 of Fig. 5 has a first viewpoint/position/orientation), and wherein the one or more processors are further configured to: receive second video camera position and/or orientation data indicative of a second viewpoint different to the first viewpoint (camera 610 of Fig. 6 has a second viewpoint/position/orientation); and generate, for display on the interactive touchscreen display screen, an updated three-dimensional projection image based on the second video camera position and/or orientation data, such that the updated three-dimensional projection image shows the three-dimensional content item from the second viewpoint (“the position of the camera 610 (the viewer) has shifted to the right and now, objects that were slightly behind the actor 660 from that perspective, have moved out from behind the actor,” para. 67; “derive the appropriate new perspective in real-time and to alter the display accordingly,” para. 68).
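For illustration only, the following sketch, not taken from Hui, shows how a shift from a first camera viewpoint to a second camera viewpoint changes whether a background object is occluded by an actor, which is why the projection image is regenerated from the second viewpoint; all names and values are hypothetical.

    # Illustrative sketch only (not Hui's implementation); names are hypothetical.
    # Shows how shifting the camera viewpoint changes whether a background object is
    # hidden behind an actor, motivating re-rendering from the second viewpoint.
    import numpy as np

    def hidden_behind_actor(camera, actor, obj, actor_radius=0.5):
        """True if the object lies (approximately) behind the actor from this camera."""
        to_actor = actor - camera
        to_obj = obj - camera
        if np.linalg.norm(to_obj) <= np.linalg.norm(to_actor):
            return False                       # object is in front of the actor
        # Perpendicular distance from the camera-to-object sight line to the actor.
        direction = to_obj / np.linalg.norm(to_obj)
        closest = camera + np.dot(to_actor, direction) * direction
        return np.linalg.norm(actor - closest) < actor_radius

    actor = np.array([0.0, 0.0, 1.0])
    background_object = np.array([0.0, 2.0, 1.0])        # directly behind the actor
    first_viewpoint = np.array([0.0, -4.0, 1.0])
    second_viewpoint = np.array([2.0, -4.0, 1.0])        # camera shifted to the right

    print(hidden_behind_actor(first_viewpoint, actor, background_object))   # True
    print(hidden_behind_actor(second_viewpoint, actor, background_object))  # False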
Regarding claim 5, Hui renders obvious wherein the second video camera position and/or orientation data is indicative of an updated viewpoint of the video camera (“derive the appropriate new perspective in real-time,” para. 68).
Regarding claim 6, Hui renders obvious wherein: the video camera is a first video camera; the system further comprises a second video camera different to the first video camera (“two cameras can be used,” para. 70), wherein the interactive touchscreen display screen is within a field of view of the second video camera; the second video camera position and/or orientation data is indicative of a viewpoint of the second video camera; and the one or more processors are configured to generate the updated three-dimensional projection image in response to receiving an indication that the interactive touchscreen display screen is to be viewed from the viewpoint of the second video camera (“two cameras can be used … the trackers 642 and 644 [of Fig. 6] can actually track the locations of both cameras and the associated workstation can alternate between images intended for a first camera and those intended for a second camera. In such a way, different perspectives for the same background may be captured using the same display,” para. 70).
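For illustration only, the following sketch, not taken from Hui, shows one way rendered frames could be interleaved between two tracked camera viewpoints so that each camera captures its own perspective of the same display; all names and values are hypothetical.

    # Illustrative sketch only (not Hui's implementation); names are hypothetical.
    # Alternates the background image shown on the display between frames rendered
    # for two tracked camera viewpoints, so each camera captures its own perspective.
    from dataclasses import dataclass

    @dataclass
    class CameraPose:
        name: str
        position: tuple      # (x, y, z) reported by a tracker
        orientation: tuple   # e.g., yaw/pitch/roll in degrees

    def render_background(pose: CameraPose) -> str:
        """Stand-in for the renderer: returns a label for the frame it would produce."""
        return f"frame rendered for {pose.name} at {pose.position}"

    def frames_for_display(pose_a: CameraPose, pose_b: CameraPose, n_frames: int):
        """Interleave frames: even frame indices for camera A, odd for camera B."""
        for i in range(n_frames):
            pose = pose_a if i % 2 == 0 else pose_b
            yield i, render_background(pose)

    cam_a = CameraPose("first camera", (0.0, -4.0, 1.5), (0.0, 0.0, 0.0))
    cam_b = CameraPose("second camera", (2.0, -4.0, 1.5), (-15.0, 0.0, 0.0))
    for index, frame in frames_for_display(cam_a, cam_b, 4):
        print(index, frame)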
Regarding claim 7, Hui renders obvious wherein: the interactive touchscreen display screen is a first display screen, and the three-dimensional projection image is a first three-dimensional projection image; the system further comprises a second display screen configured to display visual content, wherein the second display screen is within the field of view of the video camera; and wherein the one or more processors are further configured to generate, for display on the second display screen, a second three-dimensional projection image based on the video camera position and/or orientation data (“The display 130 may be an amalgamation of many smaller displays, placed next to one another,” para. 40; since Hui teaches the display being within a field of view of the video camera, and the display is made up of first and second displays, this teaches first and second displays within the FOV of the camera and displaying appropriate portions of a 3D scene on each display).
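For illustration only, the following sketch, not taken from Hui, shows one way a single projection image rendered for the tracked camera viewpoint could be divided into the portions shown on each of the smaller displays making up the overall display; all names and dimensions are hypothetical.

    # Illustrative sketch only (not drawn from Hui); names are hypothetical.
    # Splits one projection image, rendered for the tracked camera viewpoint, into
    # the sub-images shown on each smaller display that makes up the overall wall.
    import numpy as np

    def split_for_panels(full_frame: np.ndarray, rows: int, cols: int):
        """Return {(row, col): sub_image} for a wall of rows x cols display panels."""
        h, w = full_frame.shape[:2]
        panel_h, panel_w = h // rows, w // cols
        panels = {}
        for r in range(rows):
            for c in range(cols):
                panels[(r, c)] = full_frame[r * panel_h:(r + 1) * panel_h,
                                            c * panel_w:(c + 1) * panel_w]
        return panels

    # Example: a 1080x1920 frame divided across a wall of 2 x 2 panels.
    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
    panels = split_for_panels(frame, rows=2, cols=2)
    print(panels[(0, 1)].shape)   # (540, 960, 3): the upper-right panel's portion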
Regarding claim 10, Hui renders obvious a first computing device configured to receive the video camera position and/or orientation data (“The trackers 312, 342, and 344 [of Fig. 3] each include a tracking system 314, 347, 349 … may operate to track the camera 310,” para. 53); and a second computing device comprising a second one of the one or more processors (e.g. Workstation 320 of Fig. 3; “The workstation … incorporating a relatively high-end processor,” para. 37), wherein the second computing device is in communication with the first computing device over a network (“all interconnected by network,” para. 30), and wherein the second one of the one or more processors is configured to: receive the video camera position and/or orientation data from the first computing device over the network (“The workstation 320 [of Fig. 3] includes the positional calculation … The positional calculation 322 uses data generated by the tracking systems 314, 347, 349, in each of the trackers 312, 342, 344, to generate positional data for the camera,” para. 55); and generate, for display on the interactive touchscreen display screen, the three-dimensional projection image (“The display 330 [of Fig. 3] displays images provided by the workstation 320,” para. 54; “The workstation 320 [of Fig. 3] includes … the image generation 324,” para. 55).
Hui does not specifically recite the tracking systems comprising a first one of the one or more processors.
The Examiner takes Official Notice that both the concepts and the advantages of a tracking system comprising a processor were well known and expected in the art before the effective filing date of the claimed invention, and it would have been obvious before the effective filing date of the claimed invention to include a processor in the tracking system of Hui in order to increase efficiency and assist with calculations.
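For illustration of the division of processing between a tracking device and a workstation discussed above, the following sketch is offered; it is not taken from Hui, and all names, addresses, and message formats are hypothetical.

    # Illustrative sketch only (not Hui's implementation); names are hypothetical.
    # A "tracker" device sends camera pose data over the network to a "workstation"
    # device, which uses the pose to generate the projection image for the display.
    import json
    import socket

    WORKSTATION_ADDR = ("127.0.0.1", 50007)   # hypothetical loopback endpoint

    def tracker_send(pose: dict) -> None:
        """First computing device: transmit the tracked camera pose as JSON over UDP."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(json.dumps(pose).encode("utf-8"), WORKSTATION_ADDR)

    def workstation_receive(server: socket.socket) -> dict:
        """Second computing device: receive the pose and hand it to the renderer."""
        data, _ = server.recvfrom(4096)
        return json.loads(data.decode("utf-8"))

    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(WORKSTATION_ADDR)
    tracker_send({"position": [2.0, -3.0, 1.5], "orientation": [0.0, 10.0, 0.0]})
    pose = workstation_receive(server)
    server.close()
    print("render projection image for pose:", pose)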
Regarding claim 11, Hui renders obvious receive a user interaction with the three-dimensional content item; and adjust the display of the three-dimensional projection image based on the user interaction (“Using this touchscreen sensor 739 [of Fig. 7], interactions with the images shown on the display 730 may be enabled,” para. 74; “track interactions using ‘touch’ sensors … This processing may be to update the scene,” para. 109).
Regarding claims 12, 13, 14-17, and 19, they are rejected using the same citations and rationales described in the rejections of claims 1, 2, 4-7, and 11, respectively.
Regarding claim 20, it is rejected using the same citations and rationales described in the rejection of claim 1, with the additional limitation of A non-transitory computer-readable medium comprising instructions which, when executed by one or more processors of a computing device, cause the computing device to carry out steps (“memory 212 [of Fig. 2] also provides a storage area for data and instructions associated with applications … and explicitly excludes transitory media,” Hui, para. 46).
Claims 8, 9, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hui in view of Thurston, III et al. (US 2023/0186550; hereinafter “Thurston”).
Regarding claim 8, Hui renders obvious wherein the interactive touchscreen display screen can be moved from an initial position and orientation within the field of view of the video camera to a different position and/or orientation within the field of view of the video camera (any physical display, including the display of Hui, is capable of being moved to a different position and/or orientation).
Hui does not disclose receive display screen position and/or orientation data indicative of a position and/or an orientation of the interactive touchscreen display screen; and generate the three-dimensional projection image based on the video camera position and/or orientation data and the display screen position and/or orientation data.
In the same art of generating 3D graphics on a background display based on camera position, Thurston teaches receive display screen position and/or orientation data indicative of a position and/or an orientation of the interactive touchscreen display screen; and generate the three-dimensional projection image based on the video camera position and/or orientation data and the display screen position and/or orientation data ("Rendering can take into account a camera position of a camera in a stage environment that is to be used to capture a captured scene, a display position of a virtual scene display in the stage environment," abstract).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Thurston to the touchscreen display of Hui. The motivation would have been "to provide additional realism" (Thurston, para. 39).
Regarding claim 9, the combination of Hui and Thurston renders obvious a display screen tracking module configured to track the position and/or the orientation of the interactive touchscreen display screen during movement of the interactive touchscreen display screen from the initial position and orientation within the field of view of the video camera to the different position and/or orientation within the field of view of the video camera, wherein the one or more processors receive the display screen position and/or orientation data from the display screen tracking module (“determining the display position includes reading data from display position sensors placed on the virtual scene display,” Thurston, para. 10; see claim 8 for motivation to combine).
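For illustration of the combined teaching applied in claims 8 and 9 only, the following sketch, not taken from Hui or Thurston, shows one way both the tracked camera position and the tracked display position/orientation could be used when generating the projection image, so that the image remains correct after the display is moved; all names and values are hypothetical.

    # Illustrative sketch only (not Thurston's implementation); names are hypothetical.
    # Uses both the tracked camera position and the tracked display position/orientation:
    # a virtual point is drawn where the camera-to-point ray intersects the display plane,
    # so the image stays correct even after the display has been moved or rotated.
    import numpy as np

    def point_on_display(cam_pos, virtual_point, display_center, display_normal):
        """Intersect the camera->virtual_point ray with the display plane.

        Returns the intersection in world coordinates, or None if the ray is
        parallel to the display plane.
        """
        direction = virtual_point - cam_pos
        denom = np.dot(display_normal, direction)
        if abs(denom) < 1e-9:
            return None
        t = np.dot(display_normal, display_center - cam_pos) / denom
        return cam_pos + t * direction

    camera_position = np.array([0.0, -4.0, 1.5])         # from the camera tracker
    virtual_point = np.array([0.0, 6.0, 2.0])            # point in the virtual scene
    display_center = np.array([0.0, 0.0, 1.5])           # from the display tracker
    display_normal = np.array([0.0, -1.0, 0.0])          # display facing the camera

    print(point_on_display(camera_position, virtual_point, display_center, display_normal))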
Regarding claim 18, it is rejected using the same citations and rationales described in the rejection of claim 8.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan McCulley whose telephone number is (571)270-3754. The examiner can normally be reached Monday through Friday, 8:00am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN MCCULLEY/Primary Examiner, Art Unit 2611