DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/19/2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1-4, 6-9, 12-13, 15, 18, 21-24, 26-29, 32-33, 35, 38 have been considered but are moot in view of the new grounds of rejection discussed below.
Claims 5, 10-11, 14, 16-17, 19-20, 25, 30-31, 34, 36-37, 39-40 have been canceled.
Applicant argues that Sharma, alone or in combination with Jayaram, does not disclose the feature of claim 1 of “determining a plurality of coordinates corresponding to a boundary of an enhanced image portion of the composite image”; that Sharma is silent on coordinates, whether corresponding to a boundary of an enhanced image portion or otherwise; and that Jayaram does not cure the deficiencies of Sharma (page 12).
Although the Examiner does not agree with Applicant’s argument, to provide clear support for the limitation “determining a plurality of coordinates corresponding to a boundary of an enhanced image portion of the composite image,” as newly added to the amended claim, Matsuda (US 8994721: see, for example, figures 4, 7, 10, 12-13, col. 9, line 61-col. 10, line 53, col. 11, lines 41-57), which discloses determining a plurality of coordinates corresponding to a boundary of a virtual image portion/object (non-displayed image of the PC) of the composite image 150, is relied on for this teaching as discussed below.
For the reasons given above, the rejections of claims 1-4, 6-9, 12-13, 15, 18, 21-24, 26-29, 32-33, 35, 38 are discussed below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-9, 12-13, 15, 18, 21-24, 26-29, 32-33, 35, 38 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 20230171456) in view of Jayaram et al. (US 20220295040), and further in view of Matsuda (US 8994721).
Note: all documents that are directly or indirectly incorporated by reference in their entireties in Sharma or Jayaram (see Jayaram, paragraph 0001) are treated as part of the specification of Sharma or Jayaram, respectively (see MPEP 2163.07(b)).
Regarding claim 1, Sharma discloses a method comprising:
identifying a content item to be displayed at a physical display associated with a physical display device (PDD) according to a first viewing frustum (identifying a content item (106) to be displayed at a physical display/screen/monitor associated with remote device 104 according to a viewing region or image/frame of content item 106 – see include, but are not limited to, figures 1-2, paragraphs 0031, 0033);
receiving a request to provide, at an XR device, a composite image of the content item that spans across a second viewing frustum, wherein the second viewing frustum is larger in size than the first viewing frustum and includes the first viewing frustum (receiving a request to provide, at an XR device 102, an image of the content item 106 with UI elements including AR overlay, metadata, etc. that spans across a second viewing region comprising areas 124, 126, 128, etc. on display 110, wherein the second viewing space with regions 124, 126, 128, etc. is larger in size than the viewing region 124 for item 106 – see include, but are not limited to, figures 1-2, paragraphs 0033, 0047, 0049, 0050);
in response to the receiving of the request to provide the composite image of the content item:
determining a primary image portion of the composite image to be displayed at the physical display associated with the PDD, the primary image portion spanning a first viewing frustum; and
determining an enhanced image portion of the composite image that corresponds to the primary image portion and that is to be displayed virtually within the second viewing frustum and outside the first viewing frustum, wherein the enhanced image portion maintains image continuity with the primary image portion such that at least one object having its first portion displayed in the primary image portion maintains image continuity with its second portion displayed in the enhanced image portion (in response to receiving the request to provide the image of the content item 106 with UI elements and additional information, metadata, closed caption, etc. of content item 106: determining a primary portion (image of content 106) to be displayed at remote device 104/202, the primary image portion spanning a first viewing region for mirror stream 124; and determining an enhanced image portion with additional information of AR overlay, metadata, etc. that is to be displayed virtually within the viewing region associated with elements 126, 128 and outside the region for content 124, wherein the enhanced image portion maintains image continuity with the primary image portion of content item 106 such that at least one object (image, frame, thing, etc.) having its first portion of video content displayed in the primary image portion maintains image continuity of the media content with its second portion of UI element, title, caption, subtitle, etc. – see include, but are not limited to, figures 1-2, 5, 9B-10, paragraphs 0033, 0036, 0044, 0047, 0049-0050, 0053, 0092, 0107 and discussion in “response to arguments” above); and
providing, at a display of the XR device, the composite image including the primary image portion and enhanced image portion of the content item, including:
displaying as a see-through at the display of the XR device, the primary image portion that is displayed on the physical display associated with the PDD, wherein the see-through allows viewing the primary image portion via the display of the XR device (AR portion comprises mirror stream 124 may be at least partially transparent so a user can still see media stream 106 as displayed on remote device 104 – see for example, figures 1-2, paragraph 0049); and
generating for display, at the display of the XR device, the enhanced image portion of the composite image such that the displaying of the enhanced image portion is displayed with the display of the primary image portion and such that the enhanced image portion is spatially anchored to the primary image portion (generating for display, at the display 110 of the XR device 102, the enhanced image portion with metadata, XR overlay of the image such that the display of the enhanced image portion with XR overlay is displayed with the display of the primary image portion of content item 106 and such that the enhanced image portion is spatially anchored to the primary image portion of content item 106 – see include, but are not limited to, figures 1-2, paragraphs 0033, 0036, 0048-0050).
Sharma further discloses that the AR overlay and additional information may be modified based on the subject matter of media stream 106 (see figures 1-2, paragraph 0042), and that the data corresponding to the option may be retrieved by a server and synched with a playback service corresponding to the media stream being accessed so that the appropriate closed caption or subtitle content is displayed on the AR device (paragraph 0048). However, Sharma does not explicitly disclose time-synchronized display, determining a plurality of coordinates corresponding to a boundary of an enhanced image portion of the composite image, or generating for display, based at least in part on a plurality of coordinates, the enhanced portion.
Jayaram discloses generating for display, at a display of the XR device (mobile or AR device), enhanced image portion of composite image such that displaying of the enhanced image portion is displayed time-synchronized with the display of primary image portion (time-synchronized based on time synchronizer 2863) and such that the enhanced image portion is spatially anchored to the primary image portion (see include, but are not limited to, figures 18, 20-21, 26, 28-32, paragraphs 0132-0133, 0148, 0154-0157).
In addition to Sharma, Jayaram discloses that the enhanced image portion maintains image continuity with the primary image portion such that at least one object (the enhanced image portion of the AR display maintains image/frame of video continuity with the primary image portion on the television such that at least one image/frame/object is displayed on, for example, AR display device 1823, mobile device, etc.) having its first portion displayed in the primary image portion maintains image continuity with its second portion displayed in the enhanced image portion (the image/frame/object displayed on AR display device 1823 and/or mobile device having its first portion of content of television 1800 displayed in the primary image portion maintains continuity of the television content in region 1800 with its second portion of a table, vase, flower, etc. displayed in the enhanced image portion of the AR display – see include, but are not limited to, figures 18-19, 21-22, paragraphs 0124-0126, 0128, 0132, 0134, 0154).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sharma with the teaching of time-synchronized display of the contents as taught by Jayaram in order to yield the predictable result of enhancing the viewing experience (see paragraph 0154).
Additionally and/or alternatively to Sharma, Matsuda discloses
receiving, at an XR device, a composite image of a content item that spans across a second viewing frustum, wherein the second viewing frustum is larger in size than the first viewing frustum and includes the first viewing frustum (receiving, at eyeglass 141, a composite image of a content item that spans across a second viewing part, wherein the second viewing part is larger in size than the first viewing part and includes the first viewing part – see include, but are not limited to, figures 2, 4, 7, 10-14, col. 7, lines 5-57);
determining a primary image portion of the composite image to be displayed at the physical display associated with the PDD, the primary image portion spanning a first viewing frustum (determining a primary image portion (e.g., 151) of the composite image to be displayed associated with the PC, the primary image portion spanning a first viewing part – see include, but are not limited to, figures 2, 4, 7, 10-14, col. 6, line 58-col. 7, line 2);
determining a plurality of coordinates corresponding to a boundary of an enhanced image portion of the composite image (determining a plurality of coordinates with non-displayed data corresponding to a portion of the virtual object, corresponding to a boundary of a portion of the virtual object of the composite image – see include, but are not limited to, figures 2, 4-7, 10-14, col. 9, line 61-col. 10, line 18, lines 44-53, col. 11, line 41-col. 12, line 9); and
determining the enhanced image portion of the composite image that corresponds to the primary image portion and that is to be displayed virtually within the second viewing frustum and outside the first viewing frustum, wherein the enhanced image portion maintains image continuity with the primary image portion such that at least one object having its first portion displayed in the primary image portion maintains image continuity with its second portion displayed in the enhanced image portion (see include, but are not limited to, figures 2, 4, 7, 10-14, col. 7, lines 5-18, lines 45-57, col. 9, lines 17-42); and
providing, at a display of the XR device, the composite image including the primary image portion and enhanced image portion of the content item, including generating for display, at the display of the XR device and based at least in part on the plurality of coordinates, the enhanced image portion of the composite image (see include, but are not limited to, figures 2, 4-7, 10-14, col. 9, line 61-col. 10, line 18, lines 44-53, col. 11, line 41-col. 12, line 9).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sharma in view of Jayaram with the teachings including determining a plurality of coordinates corresponding to a boundary of an enhanced image portion of the composite image, and generating, for display at the XR device and based at least in part on the plurality of coordinates, the enhanced image portion of the composite image, as taught by Matsuda, in order to yield the predictable result of allowing the user to observe the whole sheet to be processed without feeling a sense of incongruity (col. 7, lines 10-18, col. 9, lines 35-38).
Regarding claim 2, Sharma in view of Jayaram and Matsuda discloses the method of claim 1, wherein the composite image occupies only a portion of the display of the XR device and the display of the XR device allows viewability of an entirety of a field of view (FOV) from the XR device (the composite image with the frame of video content and additional information comprises only a portion of the display of the XR device, and the display of the XR device allows viewability of an entirety of a field of view (FOV) 108 from XR device 102 – see include, but are not limited to, Sharma: figures 1-2, 5, paragraphs 0031-0033, 0036, 0047, 0052; Jayaram: paragraphs 0134, 0166, 0168; Matsuda: figures 2, 4-7, 10-14).
Regarding claim 3, Sharma in view of Jayaram and Matsuda discloses the method of claim 1, wherein providing, at the display of the XR device, the composite image further comprises:
generating, at a server, the composite image (generating, at a server/content source, the composite image of video content and additional content – see include, but are not limited to, Sharma: figure 3, paragraphs 0006, 0011, 0036, 0042, 0050, 0075-0077; Jayaram: figures 3-4, 20, 26, 28-32);
identifying, from the composite image, the primary and enhanced image portions (identifying, from the composite image, the image portion and enhanced image portion with AR overlay/additional information - see include, but are not limited to, Sharma: figure 3, paragraphs 0006, 0011, 0036, 0042, 0050; Jayaram: figures 3-4, 20, 26, 28-32); and
encoding, at the server, a first encoded stream carrying the primary image portion and a second encoded stream carrying the enhanced image portion (encoding/processing, at the server, a stream/portion carrying the image portion of the video content and a second stream/portion carrying the enhanced image portion with UI item, AR overlay - see include, but are not limited to, Sharma: figure 3, paragraphs 0006, 0011, 0042, 0050; Jayaram: figures 3-4, 20, 26, 28-32, paragraphs 0047, 0132, 0153-0158; Matsuda: figures 1, 4-5).
Regarding claim 4, Sharma in view of Jayaram and Matsuda discloses the method of claim 3, further comprising: transmitting, from the server to the PDD, the first and second encoded streams (transmitting from the content source/server to the remote device/TV, the streams with video portion and additional portion - see include, but are not limited to, Sharma: figure 3, paragraphs 0006, 0011, 0036, 0042, 0050, 0059; Jayaram: figures 3-4, 20, 26, 28-32, paragraphs 0047, 0132, 0153-0158);
receiving, at the PDD, the first and second encoded streams (receiving, at the remote device/smart television, the portion for the video image and the portion for additional content including metadata - see include, but are not limited to, Sharma: figure 3, paragraphs 0006, 0011, 0036, 0042, 0050, 0076-0077; Jayaram: figures 3-4, 20, 26, 28-32, paragraphs 0047, 0132, 0153-0158);
at the PDD: (i) decoding the first encoded stream to obtain the primary image portion; (ii) storing, at a memory, the primary image portion; and (iii) transmitting, to the XR device, the second encoded stream (at the smart television, set top box with display, etc., decoding/processing the image portion, downloading the image portion of media content, and transmitting, to the mobile and/or XR device, the second portion with additional/metadata, closed caption content, etc. - see include, but are not limited to, Sharma: figures 1-3, 9A, 9B, paragraphs 0036, 0048-0050, 0053, 0056, 0064, 0072, 0076-0077; Jayaram: figures 18, 20, 26, 28, paragraphs 0130, 0150, 0153, 0157);
decoding, at the XR device, the second encoded stream to obtain the enhanced image portion; and
in accordance with one or more time-synchronized clocks, causing simultaneous display, via the PDD and the XR device, of (i) the primary image portion at the physical display, and (ii) the enhanced image portion at the display of the XR device (decoding/processing, at the XR device, the second encoded stream based on captured/retrieved content to obtain the enhanced content with metadata, XR overlay, etc., and, based on time synchronization clocks, causing simultaneous display, via the remote device and the XR device, of the image portion of video content at the display of the remote device and the enhanced image portion at the XR device – see include, but are not limited to, Sharma: figures 1-3, 9A, 9B, paragraphs 0036, 0048-0050, 0053, 0056, 0064, 0072, 0076-0077; Jayaram: figures 18, 20, 26, 28, paragraphs 0130-0131, 0150, 0153, 0157. See also Matsuda: figures 1-2, 4-5, 7, 10-14).
See also Wingert et al. (US 20130036442: for example, paragraphs 0024-0026, 0029) for the teaching of storing the primary image portion at a television device and sending the enhanced content portion to a secondary device.
Regarding claim 6, Sharma in view of Jayaram and Matsuda discloses the method of claim 1, wherein providing, at the display of the XR device, the composite image further comprises: generating, at a server, the composite image;
identifying, from the composite image, the primary and enhanced image portions; encoding, at the server, a first encoded stream carrying the primary image portion and a second encoded stream carrying the second content portion;
transmitting, from the server to a PDD, the first encoded stream;
transmitting, from the server to the XR device, the second encoded stream;
at the PDD: (i) decoding the first encoded stream to obtain the primary image portion; and (ii) storing, at a memory, the primary image portion;
decoding, at the XR device, the second encoded stream to obtain the enhanced image portion; and in accordance with one or more time-synchronized clocks, causing simultaneous display, via the PDD and the XR device, of (i) the primary image portion at the physical display, and (ii) the enhanced image portion at the display of the XR device (see similar discussion in the rejection of claims 3-4 and Sharma: figures 1-3; Jayaram: figures 20, 26, 28-31. It is noted that the limitation “transmitting from the server to the XR device” could be interpreted as either transmitting via the PDD device as recited in claim 4 or transmitting the enhanced portion directly from the server/content source to the XR device. See also Matsuda: figures 1-2, 4-5).
Regarding claim 7, Sharma in view of Jayaram and Matsuda discloses the method of claim 1, wherein providing, at the display of the XR device, the composite image further comprises:
generating, at a server, the composite image;
encoding, at the server, a given encoded stream carrying the composite image;
transmitting, at the server, the given encoded stream;
receiving the given encoded stream at a first device, the first device selected from one of the XR device or the PDD;
decoding, at the first device, the first encoded stream to obtain the composite image;
identifying, from the composite image, the primary and enhanced image portions; encoding, at the first device, a particular encoded stream carrying the primary or enhanced image portion;
transmitting, at the first device, the particular encoded stream;
receiving the particular encoded stream at a second device, the second device being the one of the XR device or the PDD that was not selected as the first device;
decoding, at the second device, the particular encoded stream to obtain the primary or enhanced image portion; and
in accordance with one or more time-synchronized clocks, causing simultaneous display, via the PDD and the XR device, of (i) the primary image portion at the physical display, and (ii) the enhanced image portion at the display of the XR device (see similar discussion in the rejection of claims 3-4, and see include, but are not limited to, Sharma: figures 1-3, paragraphs 0006, 0011, 0031-0032, 0042, 0050; Jayaram: figures 3-4, 20, 26, 28-32, paragraphs 0047, 0132, 0153-0158, wherein the “first device” and “second device” respectively correspond to the PDD and the XR device. See also Matsuda: figures 1-2, 4-5, 7, 10-14).
Regarding claim 8, Sharma in view of Jayaram and Matsuda discloses the method of claim 1, wherein providing, at the display of the XR device, the composite image further comprises:
generating, at a server, the composite image; encoding, at the server, a given encoded stream carrying the composite image; transmitting, at the server, the given encoded stream; receiving, at a third device, the given encoded stream; decoding, at the third device, the given encoded stream to obtain the composite image; identifying, from the composite image, the primary and enhanced image portions; encoding, at the third device, a first encoded stream carrying the primary image portion and a second encoded stream carrying the enhanced image portion; transmitting, at the third device, the first and second encoded streams; receiving the first encoded stream at the PDD and the second encoded stream at the XR device; decoding, at the PDD and the XR device, the first and second encoded streams to obtain the primary and enhanced image portions; and in accordance with one or more time-synchronized clocks, causing simultaneous display, via the PDD and the XR device, of (i) the primary image portion at the physical display, and (ii) the enhanced image portion at the display of the XR device (see similar discussion in the rejection of claims 3-4, and see include, but are not limited to, Sharma: figures 1-3, paragraphs 0006, 0011, 0031-0032, 0042, 0050; Jayaram: figures 3-4, 20, 26, 28-32, paragraphs 0047, 0132, 0153-0158, wherein the “third device” corresponds to the PDD. See also Matsuda: figures 1-2, 4-5, 7, 10-14).
Regarding claim 9, Sharma in view of Jayaram and Matsuda discloses the method of claim 1, wherein the physical display is communicatively coupled to the PDD (display/screen is communicatively coupled to the remote device – see include, but are not limited to, Sharma: figures 1-3), and wherein providing, at the display of the XR device, the composite image further comprises: generating, at the PDD, the composite image;
identifying, from the composite image, the primary and enhanced image portions of the composite image;
encoding, at the PDD, an encoded stream carrying the enhanced image portion; transmitting, at the PDD, the encoded stream;
receiving, at the XR device, the encoded stream;
decoding, at the XR device, the encoded stream; and
in accordance with one or more time-synchronized clocks, causing simultaneous display, via the PDD and the XR device, of (i) the primary image portion at the physical display, and (ii) the enhanced image portion at the display of the XR device (identifying, from the image received at the computing device 302 and/or remote device with smart television, the video content and additional information with metadata, elements, etc.; processing/retrieving and transmitting the metadata and additional content; the mobile/XR device receives the additional content/metadata retrieved from the remote device/computing device, processes the retrieved data, and displays the additional content simultaneously with the video content displayed on the display of the remote device based on clock synchronization - see similar discussion in the rejection of claims 3-4, and see include, but are not limited to, Sharma: figures 1-3, 6, 9A-9B, paragraphs 0006, 0011, 0036, 0042, 0048-0050, 0053, 0067, 0081-0083, 0092; Jayaram: figures 3-4, 20, 26, 28-32, paragraphs 0047, 0132, 0153-0158. See also Matsuda: figures 1-2, 4-5, 7, 10-14).
Regarding claim 12, Sharma in view of Jayaram and Matsuda discloses the method of claim 1, wherein the composite image includes a predetermined boundary, wherein the primary image portion and the enhanced image portion are displayed within the predetermined boundary of the composite image (see include, but are not limited to, Sharma: figures 1-2; Jayaram: figures 1-2, 19, 21, paragraphs 0134, 0149, 0169, wherein “predetermined boundary” is read as boundary, region/area for displaying video content image and additional content/VR overlay on screen. See also Matsuda: figures 1-2, 4, 7, 10-14).
Regarding claim 13, Sharma in view of Jayaram and Matsuda discloses the method of claim 12, wherein the predetermined boundary of composite image is determined based on any one or more of rendering capability, encoding capability, bandwidth, and network QoS of a host device (boundary is determined based on rendering capability such as rendering region/display size/location of the remote device and/or XR device - see include, but are not limited to, Sharma: figures 1-2, para. 0032, 0049; Jayaram: figures 1-2, 19, 21, paragraphs 0134, 0149, 0169).
Regarding claim 15, Sharma in view of Jayaram and Matsuda discloses the method of claim 1, further comprising:
determining a depth of the primary image portion from a current location of the XR device (determining the depth/distance of the primary image portion from the current location of the XR device with the remote device/video in the field of view, out of the field of view, etc. – see include, but are not limited to, Sharma: figures 1-2, 4-6, 10, paragraphs 0032, 0043-0044, 0065-0066, 0109; Jayaram: figures 2, 9, 14, 18-19, 21-22, paragraphs 0043-0044, 0050, 0084, 0126, 0152); and
aligning the depth of the enhanced image portion based on the determined depth of the primary image portion such that both the primary image portion and the enhanced image portion are at a same distance as perceived from a current location of the XR device (aligning/modifying the depth/distance of the enhanced image portion including size, whether to display on the XR device, etc. based on the determined distance/depth of the primary image/video portion such that both the primary image portion and the enhanced image portion/additional information are at the same distance as perceived from the current location of the XR device - see include, but are not limited to, Sharma: figures 1-2, 4-6, 10, paragraphs 0032, 0043-0044, 0065-0066, 0109; Jayaram: figures 2, 9, 14, 18-19, 21-22, paragraphs 0043-0044, 0050, 0084, 0126, 0152. See also Matsuda: figures 1-2, 4-5, 7, 10-14).
Regarding claim 18, Sharma in view of Jayaram and Matsuda discloses the method of claim 1, further comprising modifying font size, color and brightness of the XR overlay, additional content and video content (see include, but are not limited to, Sharma: paragraphs 0042, 0049, 0050, 0057, 0061, 0063). Thus, it would have been obvious to one of ordinary skill in the art to incorporate in Sharma in view of Jayaram that the method comprises color-matching the primary image portion with the enhanced image portion, wherein once color is matched, a perception of color and brightness of the content displayed in the primary image portion and the enhanced image portion match within a threshold range, in order to yield the predictable result of providing a smooth enhanced image without a distinguishable color difference between the enhanced portion and the primary image portion.
See also the teaching in Tinsman (US 20130031582: paragraphs 0054, 0075, 0085, 0170).
Regarding claim 21, limitations of a system that correspond to the limitations of method of claim 1 are analyzed as discussed in the rejection of claim 1. Particularly, Sharma in view of Jayaram and Matsuda discloses the system (Sharma: figures 1-3; Jayaram: figures 26, 28-31) comprising:
communications circuitry configured to access a physical display device (PDD) and an XR device; and control circuitry configured to: identify a content item to be displayed at a physical display associated with the PDD according to a first viewing frustum;
receive a request to provide, at the XR device, a composite image of the content item that spans across a second viewing frustum, wherein the second viewing frustum is larger in size than the first viewing frustum and includes the first viewing frustum;
in response to the receiving of the request to provide the composite image of the content item, the control circuitry configured to:
determine a primary image portion of the composite image to be displayed at the physical display associated with the PDD, the primary image portion spanning a first viewing frustum;
determine a plurality of coordinates corresponding to a boundary of an enhanced image portion of the composite image; and
determine the enhanced image portion of the composite image that corresponds to the primary image portion and that is to be displayed virtually within the second viewing frustum and outside the first viewing frustum, wherein the enhanced image portion maintains image continuity with the primary image portion such that at least one object having its first portion displayed in the primary image portion maintains image continuity with its second portion displayed in the enhanced image portion; and
provide, at a display of the XR device, the composite image including the primary image portion and enhanced image portion of the content item, including: displaying as a see-through at the display of the XR device, the primary image portion that is displayed on the physical display associated with the PDD, wherein the see-through allows viewing the primary image portion via the display of the XR device; and
generating for display, at the display of the XR device and based at least in part on the plurality of coordinates, the enhanced image portion of the composite image such that the displaying of the enhanced image portion is time- synchronized with the display of the primary image portion and such that the enhanced image portion is spatially anchored to the primary image portion (see similar discussion in the rejection of claim 1 and include, but are not limited to, Sharma: figures 1-3; Jayaram: figures 20, 26, 28-31; Matsuda: figures 1-2, 4-5, 7, 10-14).
Regarding claims 22-24, 26-29, 32-33, 35, 38, the additional limitations of the system that correspond to the additional limitations of method in claims 2-4, 6-9, 12-13, 15, 18 are analyzed as discussed in the rejection of claims 2-4, 6-9, 12-13, 15, 18.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Alon et al. (US 20210383115) discloses system and methods for 3D scene augmentation and reconstruction.
Wang et al. (US 20230031023) discloses multiple camera system for combining images using coordinates of each image (figures 21A, 21B, paragraph 0255).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN SON PHI HUYNH whose telephone number is (571)272-7295. The examiner can normally be reached 9:00 am-6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NASSER M. GOODARZI can be reached on 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AN SON P HUYNH/Primary Examiner, Art Unit 2426
June 3, 2025