DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to Applicant’s communication filed on 10/07/2024. Claims 1-26 have been examined. Claims 27-32 are cancelled.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 10/07/2024 and 12/09/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Objections
Claims 6-8 are objected to because of the following informalities:
With regard to claims 6-8, the claims recite “The method of any of claim 1”. The examiner suggests amending the claims by deleting “any” so that they recite “The method of claim 1”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9-26 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With regard to claim 9, the claim recites “the system is configured to store data associated with the peripheral device on the companion device ….” It is unclear from the language of the claim which device performs the recited storing; specifically, it is unclear whether the wearable device or the companion device performs the storing. Therefore, the examiner is unable to determine the metes and bounds of the claim language.
With regard to claims 18, 19, and 21-25, the claims recite “the wearable device”. It is unclear what “the wearable device” refers to because claim 18 recites “a wearable device” in the communicatively coupling limitation and again recites “a wearable device” in the receiving limitation. Therefore, the examiner is unable to determine the metes and bounds of the claim language. For the purpose of examination, and based on the specification (Fig. 3), the examiner will interpret the two recited wearable devices as the same wearable device.
With regard to claim 18, the claim recites “storing by the companion device, data associated with a peripheral device of the wearable device as peripheral data; receiving by the companion device from a wearable device, the peripheral data ….” It is unclear how the companion device stores the peripheral data and then receives the same peripheral data. Therefore, the examiner is unable to determine the metes and bounds of the claim language.
Note: The specification (Fig. 3) and claim 18 show that only one wearable device is communicatively coupled to a companion device.
With regard to claim 21, the claim recites “communicating, by the companion device, the companion device peripheral data to the companion device”. It is unclear how the companion device communicates the peripheral data to the same companion device. Therefore, the examiner is unable to determine the metes and bounds of the claim language.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3-5, 7-11, 14-16, 18-20, and 22-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Melkote Krishnaprasad et al., Publication No. US 2019/0333263 A1 (Melkote hereinafter).
Regarding claim 1,
Melkote teaches a method comprising:
communicatively coupling a wearable device with a companion device (Fig. 2, ¶ 0003 & ¶ 0005 - Split-rendered systems may include at least one host device and at least one client device that communicate over a network, at least one of the client devices may comprise a wearable display device);
mirroring, by the wearable device, data obtained from a peripheral device of the wearable device on the companion device as peripheral data including obtaining, by the wearable device, the peripheral data from the peripheral device, and communicating, by the wearable device, the peripheral data to the companion device (¶ 0006 -a host device renders an image based on the last head pose received from a head tracker of the wearable display device, by the time the image is rendered and available for display to a user on the wearable display device, the user's head pose may have moved - ¶ 0041 - wearable display device 16 outputs sensor and/or actuator data to host device 10. The sensor and/or actuator data may include data from an eye tracker that generates eye pose data indicating which area of a scene the user may be focusing on. The sensor and/or actuator data may include data from a header tracker that generates head pose data including orientation and/or position information of the user's head position for determining a user's field of view – ¶ 0087 - the display side 16 may output representation of eye pose data indicating user's area of focus from the eye tracker. The eye pose data may be used to indicate a region of a rendered frame that the user may be focusing on or is interested in. In 504, the display side 16 may output render pose data from the head tracker);
receiving, by the companion device, the peripheral data and processing the peripheral data into processed data (¶ 0006 - a host device renders an image based on the last head pose received from a head tracker of the wearable display device – Claim 1 - generating the rendered frame based on head tracking information of a user; identifying a region of interest (ROI) of the rendered frame; generating metadata for a warping operation from the ROI; and transmitting the rendered frame and the metadata for a warping operation of the rendered frame – ¶ 0041 -host device 10 may generate image content information for rendering a frame. For example, host device 10 may generate a compressed video and audio buffer using head pose data indicated by the sensor and/or actuator data);
sending, by the companion device, the processed data to the wearable device (Claim 1 - generating the rendered frame based on head tracking information of a user; identifying a region of interest (ROI) of the rendered frame; generating metadata for a warping operation from the ROI; and transmitting the rendered frame and the metadata for a warping operation of the rendered frame – ¶ 0041 -a user may have moved the wearable display device 16 such that the head pose has changed during the time for wearable display device 16 to transmit the eye pose data, for host device 10 to generate the compressed rendered video and audio buffers, and to transmit the compressed rendered video and audio buffers. To account for the change in head pose, wearable display device may perform time and/or space warping to correct for a rotation of a user's head and to correct for a movement of a user's field of view toward (or away from) an object in a scene);
receiving, by the wearable device, the processed data and utilizing the processed data to complete a computing process ( ¶ 0041 -user may have moved the wearable display device 16 such that the head pose has changed during the time for wearable display device 16 to transmit the eye pose data, for host device 10 to generate the compressed rendered video and audio buffers, and to transmit the compressed rendered video and audio buffers. To account for the change in head pose, wearable display device 16 may perform time and/or space warping to correct for a rotation of a user's head and to correct for a movement of a user's field of view toward (or away from) an object in a scene. ¶ 0087 - the display device 16 may receive eye-buffer of a rendered frame and render pose data, such as from the game engine/render side 10. In 508, the display device 16 may receive single depth metadata for a region of interest. The single depth metadata for the ROI may be the single depth approximation z* for the ROI computed from the harmonic mean of the pixel depths within the ROI. In 510, the display side 16 may determine or receive the display pose data from the head tracker. In 512, the display side 16 may modify one or more pixel values of the eye-buffer of the rendered frame using the single depth metadata and display pose data to generate warped rendered frame. In 514, the display device 16 may output the warped rendered frame for display at one or more displays).
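To illustrate the split-rendered data flow mapped above, the following minimal sketch models the wearable device mirroring head-pose data to the companion device, the companion device processing that data into a rendered frame, and the wearable device completing the computing process by warping the frame for display. The sketch is illustrative only; it is not taken from Melkote, and all names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class Pose:
    yaw: float  # head orientation about one axis, radians (hypothetical)

class CompanionDevice:
    def render(self, pose: Pose) -> dict:
        # Companion-side processing: render a frame based on the received head pose.
        return {"frame": f"frame rendered at yaw={pose.yaw:.2f}", "render_pose": pose}

class WearableDevice:
    def __init__(self, companion: CompanionDevice, imu_samples: list):
        self.companion = companion
        self.imu = iter(imu_samples)  # stand-in for the IMU/head-tracker peripheral

    def read_imu(self) -> Pose:
        # Obtain peripheral data from the peripheral device.
        return next(self.imu)

    def run_frame(self) -> str:
        render_pose = self.read_imu()                   # obtain peripheral data
        processed = self.companion.render(render_pose)  # mirror to companion; companion processes it
        display_pose = self.read_imu()                  # the head may have moved during rendering
        # Complete the computing process on the wearable: ATW-style pose correction.
        delta = display_pose.yaw - processed["render_pose"].yaw
        return f"{processed['frame']}, warped by delta-yaw={delta:+.2f}"

wearable = WearableDevice(CompanionDevice(), [Pose(0.10), Pose(0.13)])
print(wearable.run_frame())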
Regarding claim 3,
Melkote further teaches
wherein the peripheral data is inertial measurement unit (IMU) data (¶ 0006 - correcting for camera translation and rotation (e.g., moving the wearable display device towards or away from a virtual object) from a position of the camera used to render a frame to a position of the camera when the rendered frame is displayed to the user on the wearable display device. When a host device renders an image based on the last head pose received from a head tracker of the wearable display device – ¶ 0067 - A head tracker 305 on the display side 16 may generate render pose 304 to indicate the user's field of view. The head tracker 305 may be a sensor, actuator, or other devices that may detect the orientation and position of the head of the user in 6 DOF- See ¶ 0071, ¶ 0087).
the computing process is a head pose operation (¶ 0012 - The method includes transmitting head tracking information of a user. The method also includes receiving a rendered frame and metadata. The rendered frame is based on the head tracking information and the metadata is based on a region of interest (ROI) of the rendered frame. The method further includes warping the rendered frame using the metadata and display pose information - ¶ 0026 -client device may first fully render a frame based on the received content, where the rendered frame is based on earlier head pose, and then the client device may perform an Asynchronous Time Warp (ATW) that corrects for a rotation of a user's head) , and
completing the computing process by the wearable device includes using a result of the head pose operation (¶ 0087 -the display side 16 may determine or receive the display pose data from the head tracker. In 512, the display side 16 may modify one or more pixel values of the eye-buffer of the rendered frame using the single depth metadata and display pose data to generate warped rendered frame. In 514, the display device 16 may output the warped rendered frame for display at one or more displays – ¶ 0092 -the game engine/render side 10 may
transmit the eye-buffer of rendered frame and the render pose data to the display side 16. In 714, the game engine/ render side 10 may transmit the single depth metadata for the ROI to the display side 16 for the display side 16 to perform the APR warping operation of the eye-buffer using the single depth metadata – See Also Claim 14).
Regarding claim 4,
Melkote further teaches
wherein the peripheral data is image data, the computing process is a head pose operation, and completing the computing process by the wearable device includes using a result of the head pose operation (¶ 0002 - The disclosure relates to processing of image content information and, more particularly, post-processing of image content information for output to a display - ¶ 0046 - The rendered frame may include an eye-buffer representing the image content of the scene in the rendered frame, and a Z-buffer representing the depth pixels of the scene in the rendered frame - ¶ 0087 -the display side 16 may determine or receive the display pose data from the head tracker. In 512, the display side 16 may modify one or more pixel values of the eye-buffer of the rendered frame using the single depth metadata and display pose data to generate warped rendered frame. In 514, the display device 16 may output the warped rendered frame for display at one or more displays – ¶ 0092 -the game engine/render side 10 may transmit the eye-buffer of rendered frame and the render pose data to the display side 16. In 714, the game engine/ render side 10 may transmit the single depth metadata for the ROI to the display side 16 for the display side 16 to perform the APR warping operation of the eye-buffer using the single depth metadata – See Also Claim 14).
Regarding claim 5,
Melkote further teaches
wherein the peripheral data is image data, the computing process is an eye tracking operation, and completing the computing process by the wearable device includes using a result of the eye tracking operation (¶ 0002 - The disclosure relates to processing of image content information and, more particularly, post-processing of image content information for output to a display - ¶ 0046 - The rendered frame may include an eye-buffer representing the image content of the scene in the rendered frame, and a Z-buffer representing the depth pixels of the scene in the rendered frame – ¶ 0007 - The region of interest may be determined based on eye tracking or content information. For example, a host device of a split-rendered system may generate a single depth plane for a region of interest of a scene to emphasize contribution from the region of interest. The value and parameters for the single depth plane may be determined based on eye-tracking information – Claim 10 & 11 - transmitting head tracking information of a user; receiving a rendered frame and metadata, wherein the rendered frame is based on the head tracking information and the metadata is based on a region of interest (ROI) of the rendered frame -transmitting eye tracking information of the user, wherein the eye tracking information is used to determine the ROI – See Also ¶ 0009, ¶ 0087, ¶ 0089, ¶ 009).
Regarding claim 7,
Melkote further teaches
wherein the wearable device is smart glasses (¶ 0039 - wearable display device 16 may comprise a HMD device formed as glasses that include display screens in one or more of the eye lenses, and also include a nose bridge and temple arms to be worn on a user's – ¶ 0004 - as wireless devices, one or more of the host device and the client devices may comprise mobile telephones, portable computers with wireless communication cards, personal digital assistants (PDAs), portable media players, or other flash memory devices with wireless communication capabilities, including so-called "smart" phones and "smart" pads or tablets, or other types of wireless communication devices (WCDs)).
Regarding claim 8,
Melkote further teaches
wherein the companion device is at least one of another wearable device, a mobile device, a smart phone, a tablet, a server, and a device including a processor and an operating system (¶ 0004 - wireless devices, one or more of the host device and the client devices may comprise mobile telephones, portable computers with wireless communication cards, personal digital assistants (PDAs), portable media players, or other flash memory devices with wireless communication capabilities, including so-called "smart" phones and "smart" pads or tablets, or other types of wireless communication devices (WCDs) – See ¶ 0099 – ¶ 0100).
Regarding claim 9,
Melkote teaches a system comprising:
a wearable device; and a companion device, the wearable device including: a device client, a hardware abstraction layer, an operating system abstraction layer, and at least one peripheral device driver, the companion device including a runtime environment associated with the wearable device (¶ 0003 - Split-rendered systems may include at least one host device and at least one client device that communicate over a network ( e.g., a wireless network, wired network, etc.). For example, a Wi-Fi Direct (WFD) system includes multiple devices communicating over a Wi-Fi network. The host device acts as a wireless access point and sends image content information, which may include audio video (AV) data, audio data, and/or video data, to one or more client devices using one or more wireless communication standards, e.g., IEEE 802.11. The image content information may be played back at both a display of the host device and displays at each of the client devices, ¶ 0047 - FIG. 2 is a block diagram illustrating host device 10 and wearable display device 16 – ¶ 0048 - host device 10 includes an application processor 30, a wireless controller 36, a connection processor 38, and a multimedia processor);
store data associated with the peripheral device on the companion device as peripheral data (¶ 0087 - the display device 16 may receive eye-buffer of a rendered frame and render pose data, such as from the game engine/render side 10. In 508, the display device 16 may receive single depth metadata for a region of interest. The single depth metadata for the ROI may be the single depth approximation z* for the ROI computed from the harmonic mean of the pixel depths within the ROI – ¶ 0096 - the game engine/render side 10 may generate a rendered frame using the render pose data. the game engine/render side 10 may encode and transmit the encoded rendered frame to the display side 16 – ¶ 0071 - module 311 on the display side 16 may perform warping in APR using the single depth approximation z* 314, the eye buffer frame and render pose information 308 received from the game engine/render side 10, and the display pose 306 received from the head tracker 305, ¶ 0100, Claim 16, a memory storing processor readable code to receive a rendered frame and metadata - See Claim 29), including:
obtain, by the wearable device, the peripheral data from the peripheral device, and communicate, by the wearable device, the peripheral data to the companion device (¶ 0006 - a host device renders an image based on the last head pose received from a head tracker of the wearable display device, by the time the image is rendered and available for display to a user on the wearable display device, the user's head pose may have moved - ¶ 0041 - wearable display device 16 outputs sensor and/or actuator data to host device 10. The sensor and/or actuator data may include data from an eye tracker that generates eye pose data indicating which area of a scene the user may be focusing on. The sensor and/or actuator data may include data from a header tracker that generates head pose data including orientation and/or position information of the user's head position for determining a user's field of view – ¶ 0087 - the display side 16 may output representation of eye pose data indicating user's area of focus from the eye tracker. The eye pose data may be used to indicate a region of a rendered frame that the user may be focusing on or is interested in. In 504, the display side 16 may output render pose data from the head tracker);
receive, by the companion device, the peripheral data and processing the peripheral data into processed data (¶ 0006 - a host device renders an image based on the last head pose received from a head tracker of the wearable display device – Claim 1 - generating the rendered frame based on head tracking information of a user; identifying a region of interest (ROI) of the rendered frame; generating metadata for a warping operation from the ROI; and transmitting the rendered frame and the metadata for a warping operation of the rendered frame – ¶ 0041 -host device 10 may generate image content information for rendering a frame. For example, host device 10 may generate a compressed video and audio buffer using head pose data indicated by the sensor and/or actuator data);
send, by the companion device, the processed data to the wearable device (Claim 1 - generating the rendered frame based on head tracking information of a user; identifying a region of interest (ROI) of the rendered frame; generating metadata for a warping operation from the ROI; and transmitting the rendered frame and the metadata for a warping operation of the rendered frame – ¶ 0041 -a user may have moved the wearable display device 16 such that the head pose has changed during the time for wearable display device 16 to transmit the eye pose data, for host device 10 to generate the compressed rendered video and audio buffers, and to transmit the compressed rendered video and audio buffers. To account for the change in head pose, wearable display device may perform time and/or space warping to correct for a rotation of a user's head and to correct for a movement of a user's field of view toward (or away from) an object in a scene);
receive, by the wearable device, the processed data and utilizing the processed data to complete a computing process (¶ 0041 -user may have moved the wearable display device 16 such that the head pose has changed during the time for wearable display device 16 to transmit the eye pose data, for host device 10 to generate the compressed rendered video and audio buffers, and to transmit the compressed rendered video and audio buffers. To account for the change in head pose, wearable display device 16 may perform time and/or space warping to correct for a rotation of a user's head and to correct for a movement of a user's field of view toward (or away from) an object in a scene. ¶ 0087 - the display device 16 may receive eye-buffer of a rendered frame and render pose data, such as from the game engine/render side 10. In 508, the display device 16 may receive single depth metadata for a region of interest. The single depth metadata for the ROI may be the single depth approximation z* for the ROI computed from the harmonic mean of the pixel depths within the ROI. In 510, the display side 16 may determine or receive the display pose data from the head tracker. In 512, the display side 16 may modify one or more pixel values of the eye-buffer of the rendered frame using the single depth metadata and display pose data to generate warped rendered frame. In 514, the display device 16 may output the warped rendered frame for display at one or more displays).
Regarding claim 10,
Melkote further teaches
wherein the wearable device is smart glasses (¶ 0039 - wearable display device 16 may comprise a HMD device formed as glasses that include display screens in one or more of the eye lenses, and also include a nose bridge and temple arms to be worn on a user's – ¶ 0004 - as wireless devices, one or more of the host device and the client devices may comprise mobile telephones, portable computers with wireless communication cards, personal digital assistants (PDAs), portable media players, or other flash memory devices with wireless communication capabilities, including so-called "smart" phones and "smart" pads or tablets, or other types of wireless communication devices (WCDs)).
Regarding claim 11,
Melkote further teaches
wherein the companion device is at least one of another wearable device, a mobile device, a smart phone, a tablet, a server, and a device including a processor and an operating system (¶ 0004 - wireless devices, one or more of the host device and the client devices may comprise mobile telephones, portable computers with wireless communication cards, personal digital assistants (PDAs), portable media players, or other flash memory devices with wireless communication capabilities, including so-called "smart" phones and "smart" pads or tablets, or other types of wireless communication devices (WCDs) – See ¶ 0099 – ¶ 0100).
Regarding claim 14,
Melkote further teaches
wherein the peripheral data is inertial measurement unit (IMU) data (¶ 0006 - correcting for camera translation and rotation (e.g., moving the wearable display device towards or away from a virtual object) from a position of the camera used to render a frame to a position of the camera when the rendered frame is displayed to the user on the wearable display device. When a host device renders an image based on the last head pose received from a head tracker of the wearable display device – ¶ 0067 - A head tracker 305 on the display side 16 may generate render pose 304 to indicate the user's field of view. The head tracker 305 may be a sensor, actuator, or other devices that may detect the orientation and position of the head of the user in 6 DOF- See ¶ 0071, ¶ 0087);
the computing process is a head pose operation (¶ 0012 - The method includes transmitting head tracking information of a user. The method also includes receiving a rendered frame and metadata. The rendered frame is based on the head tracking information and the metadata is based on a region of interest (ROI) of the rendered frame. The method further includes warping the rendered frame using the metadata and display pose information - ¶ 0026 -client device may first fully render a frame based on the received content, where the rendered frame is based on earlier head pose, and then the client device may perform an Asynchronous Time Warp (ATW) that corrects for a rotation of a user's head) , and
completing the computing process by the wearable device includes using a result of the head pose operation (¶ 0087 -the display side 16 may determine or receive the display pose data from the head tracker. In 512, the display side 16 may modify one or more pixel values of the eye-buffer of the rendered frame using the single depth metadata and display pose data to generate warped rendered frame. In 514, the display device 16 may output the warped rendered frame for display at one or more displays – ¶ 0092 -the game engine/render side 10 may transmit the eye-buffer of rendered frame and the render pose data to the display side 16. In 714, the game engine/ render side 10 may transmit the single depth metadata for the ROI to the display side 16 for the display side 16 to perform the APR warping operation of the eye-buffer using the single depth metadata – See Also Claim 14).
Regarding claim 15,
Melkote further teaches
wherein the peripheral data is image data, the computing process is a head pose operation, and completing the computing process by the wearable device includes using a result generated based on the head pose operation. (¶ 0002 - The disclosure relates to processing of image content information and, more particularly, post-processing of image content information for output to a display - ¶ 0046 - The rendered frame may include an eye-buffer representing the image content of the scene in the rendered frame, and a Z-buffer representing the depth pixels of the scene in the rendered frame - ¶ 0087 -the display side 16 may determine or receive the display pose data from the head tracker. In 512, the display side 16 may modify one or more pixel values of the eye-buffer of the rendered frame using the single depth metadata and display pose data to generate warped rendered frame. In 514, the display device 16 may output the warped rendered frame for display at one or more displays – ¶ 0092 -the game engine/render side 10 may transmit the eye-buffer of rendered frame and the render pose data to the display side 16. In 714, the game engine/ render side 10 may transmit the single depth metadata for the ROI to the display side 16 for the display side 16 to perform the APR warping operation of the eye-buffer using the single depth metadata – See Also Claim 14).
Regarding claim 16,
Melkote further teaches
wherein the peripheral data is image data, the computing process is an eye tracking operation, and completing the computing process by the wearable device includes using a result generated based on the eye tracking operation (¶ 0002 - The disclosure relates to processing of image content information and, more particularly, post-processing of image content information for output to a display - ¶ 0046 - The rendered frame may include an eye-buffer representing the image content of the scene in the rendered frame, and a Z-buffer representing the depth pixels of the scene in the rendered frame – ¶ 0007 - The region of interest may be determined based on eye tracking or content information. For example, a host device of a split-rendered system may generate a single depth plane for a region of interest of a scene to emphasize contribution from the region of interest. The value and parameters for the single depth plane may be determined based on eye-tracking information – Claim 10 & 11 - transmitting head tracking information of a user; receiving a rendered frame and metadata, wherein the rendered frame is based on the head tracking information and the metadata is based on a region of interest (ROI) of the rendered frame -transmitting eye tracking information of the user, wherein the eye tracking information is used to determine the ROI – See Also ¶ 0009, ¶ 0087, ¶ 0089, ¶ 009).
Regarding claim 18,
Melkote teaches a method comprising:
communicatively coupling a wearable device with a companion device (Fig. 2, ¶ 0003 & ¶ 0005 - Split-rendered systems may include at least one host device and at least one client device that communicate over a network, at least one of the client devices may comprise a wearable display device);
storing, by the companion device, data associated with a peripheral device of the wearable device as peripheral data (¶ 0087 - the display device 16 may receive eye-buffer of a rendered frame and render pose data, such as from the game engine/render side 10. In 508, the display device 16 may receive single depth metadata for a region of interest. The single depth metadata for the ROI may be the single depth approximation z* for the ROI computed from the harmonic mean of the pixel depths within the ROI – ¶ 0096 - the game engine/render side 10 may generate a rendered frame using the render pose data. the game engine/render side 10 may encode and transmit the encoded rendered frame to the display side 16 – ¶ 0071 - module 311 on the display side 16 may perform warping in APR using the single depth approximation z* 314, the eye buffer frame and render pose information 308 received from the game engine/render side 10, and the display pose 306 received from the head tracker 305, ¶ 0100, Claim 16, a memory storing processor readable code to receive a rendered frame and metadata - See Claim 29), including:
receiving, by the companion device from a wearable device, the peripheral data (Claim 1 - generating the rendered frame based on head tracking information of a user; identifying a region of interest (ROI) of the rendered frame; generating metadata for a warping operation from the ROI; and transmitting the rendered frame and the metadata for a warping operation of the rendered frame – ¶ 0041 -a user may have moved the wearable display device 16 such that the head pose has changed during the time for wearable display device 16 to transmit the eye pose data, for host device 10 to generate the compressed rendered video and audio buffers, and to transmit the compressed rendered video and audio buffers. To account for the change in head pose, wearable display device may perform time and/or space warping to correct for a rotation of a user's head and to correct for a movement of a user's field of view toward (or away from) an object in a scene);
generating, by the companion device, a result associated with a completion of a computing process by the companion device, the computing process is configured to use the peripheral data; and communicating, by the companion device to the wearable device, the result associated with the completion of the computing process (¶ 0041 -user may have moved the wearable display device 16 such that the head pose has changed during the time for wearable display device 16 to transmit the eye pose data, for host device 10 to generate the compressed rendered video and audio buffers, and to transmit the compressed rendered video and audio buffers. To account for the change in head pose, wearable display device 16 may perform time and/or space warping to correct for a rotation of a user's head and to correct for a movement of a user's field of view toward (or away from) an object in a scene. ¶ 0087 - the display device 16 may receive eye-buffer of a rendered frame and render pose data, such as from the game engine/render side 10. In 508, the display device 16 may receive single depth metadata for a region of interest. The single depth metadata for the ROI may be the single depth approximation z* for the ROI computed from the harmonic mean of the pixel depths within the ROI. In 510, the display side 16 may determine or receive the display pose data from the head tracker. In 512, the display side 16 may modify one or more pixel values of the eye-buffer of the rendered frame using the single depth metadata and display pose data to generate warped rendered frame. In 514, the display device 16 may output the warped rendered frame for display at one or more displays).
Regarding claim 19,
Melkote further teaches
wherein the wearable device is smart glasses (¶ 0039 - wearable display device 16 may comprise a HMD device formed as glasses that include display screens in one or more of the eye lenses, and also include a nose bridge and temple arms to be worn on a user's – ¶ 0004 - as wireless devices, one or more of the host device and the client devices may comprise mobile telephones, portable computers with wireless communication cards, personal digital assistants (PDAs), portable media players, or other flash memory devices with wireless communication capabilities, including so-called "smart" phones and "smart" pads or tablets, or other types of wireless communication devices (WCDs)).
Regarding claim 20,
Melkote further teaches
wherein the companion device is at least one of another wearable device, a mobile device, a smart phone, a tablet, a server, and a device including a processor and an operating system (¶ 0004 - wireless devices, one or more of the host device and the client devices may comprise mobile telephones, portable computers with wireless communication cards, personal digital assistants (PDAs), portable media players, or other flash memory devices with wireless communication capabilities, including so-called "smart" phones and "smart" pads or tablets, or other types of wireless communication devices (WCDs) – See ¶ 0099 – ¶ 0100).
Regarding claim 22,
Melkote further teaches
wherein the peripheral data is inertial measurement unit (IMU) data (¶ 0006 - correcting for camera translation and rotation (e.g., moving the wearable display device towards or away from a virtual object) from a position of the camera used to render a frame to a position of the camera when the rendered frame is displayed to the user on the wearable display device. When a host device renders an image based on the last head pose received from a head tracker of the wearable display device – ¶ 0067 - A head tracker 305 on the display side 16 may generate render pose 304 to indicate the user's field of view. The head tracker 305 may be a sensor, actuator, or other devices that may detect the orientation and position of the head of the user in 6 DOF- See ¶ 0071, ¶ 0087);
the computing process is a head pose operation (¶ 0012 - The method includes transmitting head tracking information of a user. The method also includes receiving a rendered frame and metadata. The rendered frame is based on the head tracking information and the metadata is based on a region of interest (ROI) of the rendered frame. The method further includes warping the rendered frame using the metadata and display pose information - ¶ 0026 -client device may first fully render a frame based on the received content, where the rendered frame is based on earlier head pose, and then the client device may perform an Asynchronous Time Warp (ATW) that corrects for a rotation of a user's head) , and
completing the computing process by the wearable device includes using a result of the head pose operation (¶ 0087 -the display side 16 may determine or receive the display pose data from the head tracker. In 512, the display side 16 may modify one or more pixel values of the eye-buffer of the rendered frame using the single depth metadata and display pose data to generate warped rendered frame. In 514, the display device 16 may output the warped rendered frame for display at one or more displays – ¶ 0092 -the game engine/render side 10 may
transmit the eye-buffer of rendered frame and the render pose data to the display side 16. In 714, the game engine/ render side 10 may transmit the single depth metadata for the ROI to the display side 16 for the display side 16 to perform the APR warping operation of the eye-buffer using the single depth metadata – See Also Claim 14).
Regarding claim 23,
Melkote further teaches
wherein the peripheral data is image data, the computing process is a head pose operation, and completing the computing process by the wearable device includes using a result of the head pose operation (¶ 0002 - The disclosure relates to processing of image content information and, more particularly, post-processing of image content information for output to a display - ¶ 0046 - The rendered frame may include an eye-buffer representing the image content of the scene in the rendered frame, and a Z-buffer representing the depth pixels of the scene in the rendered frame - ¶ 0087 -the display side 16 may determine or receive the display pose data from the head tracker. In 512, the display side 16 may modify one or more pixel values of the eye-buffer of the rendered frame using the single depth metadata and display pose data to generate warped rendered frame. In 514, the display device 16 may output the warped rendered frame for display at one or more displays – ¶ 0092 -the game engine/render side 10 may transmit the eye-buffer of rendered frame and the render pose data to the display side 16. In 714, the game engine/ render side 10 may transmit the single depth metadata for the ROI to the display side 16 for the display side 16 to perform the APR warping operation of the eye-buffer using the single depth metadata – See Also Claim 14).
Regarding claim 24,
Melkote further teaches
wherein the peripheral data is image data, the computing process is an eye tracking operation, and completing the computing process by the wearable device includes using a result of the eye tracking operation (¶ 0002 - The disclosure relates to processing of image content information and, more particularly, post-processing of image content information for output to a display - ¶ 0046 - The rendered frame may include an eye-buffer representing the image content of the scene in the rendered frame, and a Z-buffer representing the depth pixels of the scene in the rendered frame – ¶ 0007 - The region of interest may be determined based on eye tracking or content information. For example, a host device of a split-rendered system may generate a single depth plane for a region of interest of a scene to emphasize contribution from the region of interest. The value and parameters for the single depth plane may be determined based on eye-tracking information – Claim 10 & 11 - transmitting head tracking information of a user; receiving a rendered frame and metadata, wherein the rendered frame is based on the head tracking information and the metadata is based on a region of interest (ROI) of the rendered frame -transmitting eye tracking information of the user, wherein the eye tracking information is used to determine the ROI – See Also ¶ 0009, ¶ 0087, ¶ 0089, ¶ 009).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 12, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Melkote in view of Venkatraman et al., Publication No. US 2020/0380984 A1 (Venkatraman hereinafter).
Regarding claim 2,
Melkote does not explicitly teach
storing, by the wearable device, data obtained from a peripheral device of the companion device on the wearable device as companion device peripheral data includes receiving, by the wearable device, the companion device peripheral data from the companion device
However, Venkatraman teaches
storing, by the wearable device, data obtained from a peripheral device of the companion device on the wearable device as companion device peripheral data includes receiving, by the wearable device, the companion device peripheral data from the companion device (¶ 0037 -wearable device 150 can provide sensor data from sensor(s) 154 and/or other device context information specific to wearable device 150 to user device 130 so that user device 130 can use the sensor data and/or context data on user device 130. When user device 130 receives the sensor data and/or context data from wearable device 150, user device 130 can store the sensor data and/or context data (e.g., sensor data is context data) in local context database 134 as if the context data for wearable device 150 is context data for user device 130 – ¶ 0039 - wearable device 150 can include a context store 152 that is independent of, but synchronized with context store 132 of user device 130. Context store 152 can have associated local and remote context databases in wearable device 150).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Melkote to include the teachings of Venkatraman. The motivation for doing so is to allow the system to provide for a multi-device context store in which context attributes of multiple devices can be synchronized (Abstract – Venkatraman).
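The synchronized context-store arrangement relied upon above (Venkatraman, ¶ 0037 and ¶ 0039) can be pictured with the following minimal sketch, in which sensor data received from a peer device is stored locally as if it were the receiving device's own context data. The sketch is illustrative only; it is not Venkatraman's code, and all names are hypothetical.

class ContextStore:
    def __init__(self, device_name: str):
        self.device_name = device_name
        self.local_db = {}  # local context database

    def store(self, key: str, value) -> None:
        self.local_db[key] = value

    def receive_from_peer(self, peer_name: str, key: str, value) -> None:
        # Store the peer device's sensor/context data as if it were this
        # device's own context data (cf. Venkatraman ¶ 0037).
        self.store(f"{peer_name}.{key}", value)

wearable_store = ContextStore("wearable")

# Peripheral data obtained on the companion device (e.g., a location sensor)
# is received by the wearable and stored as companion device peripheral data.
companion_sensor_sample = {"lat": 37.33, "lon": -122.01}
wearable_store.receive_from_peer("companion", "location", companion_sensor_sample)
print(wearable_store.local_db)  # {'companion.location': {'lat': 37.33, 'lon': -122.01}}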
Regarding claim 12,
Melkote does not explicitly teach
storing, by the wearable device, data obtained from a peripheral device of the companion device on the wearable device as companion device peripheral data includes receiving, by the wearable device, the companion device peripheral data from the companion device
However, Venkatraman teaches
storing, by the wearable device, data obtained from a peripheral device of the companion device on the wearable device as companion device peripheral data includes receiving, by the wearable device, the companion device peripheral data from the companion device (¶ 0037 -wearable device 150 can provide sensor data from sensor(s) 154 and/or other device context information specific to wearable device 150 to user device 130 so that user device 130 can use the sensor data and/or context data on user device 130. When user device 130 receives the sensor data and/or context data from wearable device 150, user device 130 can store the sensor data and/or context data (e.g., sensor data is context data) in local context database 134 as if the context data for wearable device 150 is context data for user device 130 – ¶ 0039 - wearable device 150 can include a context store 152 that is independent of, but synchronized with context store 132 of user device 130. Context store 152 can have associated local and remote context databases in wearable device 150).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Melkote to include the teachings of Venkatraman. The motivation for doing so is to allow the system to provide for a multi-device context store in which context attributes of multiple devices can be synchronized (Abstract – Venkatraman).
Regarding claim 21,
Melkote does not explicitly teach
mirroring, by the companion device, data obtained from a peripheral device of the companion device on the wearable device as companion device peripheral data includes communicating, by the companion device, the companion device peripheral data to the companion device.
However, Venkatraman teaches
mirroring, by the companion device, data obtained from a peripheral device of the companion device on the wearable device as companion device peripheral data includes communicating, by the companion device, the companion device peripheral data to the companion device (¶ 0037 -wearable device 150 can provide sensor data from sensor(s) 154 and/or other device context information specific to wearable device 150 to user device 130 so that user device 130 can use the sensor data and/or context data on user device 130. When user device 130 receives the sensor data and/or context data from wearable device 150, user device 130 can store the sensor data and/or context data (e.g., sensor data is context data) in local context database 134 as if the context data for wearable device 150 is context data for user device 130 – ¶ 0039 - wearable device 150 can include a context store 152 that is independent of, but synchronized with context store 132 of user device 130. Context store 152 can have associated local and remote context databases in wearable device 150).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Melkote to include the teachings of Venkatraman. The motivation for doing so is to allow the system to provide for a multi-device context store in which context attributes of multiple devices can be synchronized (Abstract – Venkatraman).
Claims 6, 17, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Melkote in view of Boulanger et al., Publication No. US 2020/0394456 A1 (Boulanger hereinafter).
Regarding claim 6,
Melkote further teaches
wherein the wearable device includes a first interface, the companion device includes a second interface communicatively coupled to the first interface, and storing the data associated with a peripheral device includes communicating the peripheral data between the first interface and the second interface (Fig. 2 – shows that the wearable display device 16 includes a wireless controller and a connection processor and the host device (companion device) includes a wireless controller and a connection processor – ¶ 0089 - the game engine/render side may transmit the eye-buffer of rendered frame and the render pose data to the display side 16 – ¶ 0096 - the game engine/render side 10 may encode and transmit the encoded rendered frame to the display side 16 – Claim 24).
However, Melkote does not explicitly teach that the first and second interfaces are first and second sockets.
Boulanger teaches
a wearable device includes a first socket, the companion device includes a second socket communicatively coupled to the first socket, and storing the data associated with a peripheral device includes communicating the peripheral data between the first socket and the second socket (¶ 0171 - The client data communications endpoint may implement a socket interface, such as local UNIX domain sockets or TCP sockets or, in at least some embodiments, a hybrid socket interface that allows for both local UNIX domain sockets and/or TCP sockets in a single interface – ¶ 0172 - The host data communications endpoint may implement a corresponding socket interface, enabling sockets opened by an application program 724 or proxy service 726 of wearable computing device 710 to have endpoints on wearable computing device 710 and host computing device – ¶ 0162 - servers that implement client data communication endpoints, and the data routing service 730, which integrates with the server library. The API of the server library facilitates client remote procedure call (RPC) calls for TCP socket operations, as requested by the clients. The callbacks and callouts allow the data routing service 730 to frame RPC requests and socket data when sending it to the host computing device 740, and de-frame command responses and socket data coming from the host computing device 740 before returning it to the client application via the companion service library functions).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Melkote to include the teachings of Boulanger. The motivation for doing so is to allow applications to run across different machines and handle many simultaneous connections.
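The socket-based endpoints relied upon from Boulanger can be pictured with the following minimal sketch, in which peripheral data is communicated between a first socket on the wearable side and a second socket on the companion side. The sketch is illustrative only; it does not use Boulanger's actual interfaces, and all names are hypothetical.

import json
import socket

# A connected pair of sockets stands in for the first socket (wearable side)
# and the second socket (companion side).
wearable_sock, companion_sock = socket.socketpair()

# Wearable side: communicate IMU peripheral data through the first socket.
imu_sample = {"yaw": 0.10, "pitch": 0.02, "roll": 0.00}
wearable_sock.sendall(json.dumps(imu_sample).encode())

# Companion side: receive the peripheral data on the second socket and store it.
peripheral_data = json.loads(companion_sock.recv(1024).decode())
companion_store = {"wearable.imu": peripheral_data}
print(companion_store)

wearable_sock.close()
companion_sock.close()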
Regarding claim 17,
Melkote further teaches
wherein the wearable device includes a first interface, the companion device includes a second interface communicatively coupled to the first interface, and storing the data associated with a peripheral device includes communicating the peripheral data between the first interface and the second interface (Fig. 2 – shows that the wearable display device 16 includes a wireless controller and a connection processor and the host device (companion device) includes a wireless controller and a connection processor – ¶ 0089 - the game engine/render side may transmit the eye-buffer of rendered frame and the render pose data to the display side 16 – ¶ 0096 - the game engine/render side 10 may encode and transmit the encoded rendered frame to the display side 16 – Claim 24).
However, Melkote does not explicitly teach that the first and second interfaces are first and second sockets.
Boulanger teaches
a wearable device includes a first socket, the companion device includes a second socket communicatively coupled to the first socket, and storing the data associated with a peripheral device includes communicating the peripheral data between the first socket and the second socket (¶ 0171 - The client data communications endpoint may implement a socket interface, such as local UNIX domain sockets or TCP sockets or, in at least some embodiments, a hybrid socket interface that allows for both local UNIX domain sockets and/or TCP sockets in a single interface – ¶ 0172 - The host data communications endpoint may implement a corresponding socket interface, enabling sockets opened by an application program 724 or proxy service 726 of wearable computing device 710 to have endpoints on wearable computing device 710 and host computing device – ¶ 0162 - servers that implement client data communication endpoints, and the data routing service 730, which integrates with the server library. The API of the server library facilitates client remote procedure call (RPC) calls for TCP socket operations, as requested by the clients. The callbacks and callouts allow the data routing service 730 to frame RPC requests and socket data when sending it to the host computing device 740, and de-frame command responses and socket data coming from the host computing device 740 before returning it to the client application via the companion service library functions).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Melkote to include the teachings of Boulanger. The motivation for doing so is to allow applications to run across different machines and handle many simultaneous connections.
Regarding claim 25,
Melkote further teaches
wherein the wearable device includes a first interface, the companion device includes a second interface communicatively coupled to the first interface, and storing the data associated with a peripheral device includes communicating the peripheral data between the first interface and the second interface (Fig. 2 – shows that the wearable display device 16 includes a wireless controller and a connection processor and the host device (companion device) includes a wireless controller and a connection processor – ¶ 0089 - the game engine/render side may transmit the eye-buffer of rendered frame and the render pose data to the display side 16 – ¶ 0096 - the game engine/render side 10 may encode and transmit the encoded rendered frame to the display side 16 – Claim 24).
However, Melkote does not explicitly teach that the first and second interfaces are first and second sockets.
Boulanger teaches
a wearable device includes a first socket, the companion device includes a second socket communicatively coupled to the first socket, and storing the data associated with a peripheral device includes communicating the peripheral data between the first socket and the second socket (¶ 0171 - The client data communications endpoint may implement a socket interface, such as local UNIX domain sockets or TCP sockets or, in at least some embodiments, a hybrid socket interface that allows for both local UNIX domain sockets and/or TCP sockets in a single interface – ¶ 0172 - The host data communications endpoint may implement a corresponding socket interface, enabling sockets opened by an application program 724 or proxy service 726 of wearable computing device 710 to have endpoints on wearable computing device 710 and host computing device – ¶ 0162 - servers that implement client data communication endpoints, and the data routing service 730, which integrates with the server library. The API of the server library facilitates client remote procedure call (RPC) calls for TCP socket operations, as requested by the clients. The callbacks and callouts allow the data routing service 730 to frame RPC requests and socket data when sending it to the host computing device 740, and de-frame command responses and socket data coming from the host computing device 740 before returning it to the client application via the companion service library functions).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Melkote to include the teachings of Boulanger. The motivation for doing so is to allow applications to run across different machines and handle many simultaneous connections.
Claims 13 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Melkote in view of O’Hare et al., Publication No. US 2022/0188165 A1 (O’Hare hereinafter).
Regarding claim 13,
Melkote does not explicitly teach
wherein the runtime environment is a virtual runtime environment operating as a background process on the companion device, and storing of the data associated with the peripheral device is included in the virtual runtime environment.
However, O’Hare teaches
runtime environment is a virtual runtime environment operating as a background process on the companion device, and storing of the data associated with the peripheral device is included in the virtual runtime environment (¶ 0018 - The IoT device 106 may execute the computing task 104 and return 112 the results 114 to the mobile device 102. The IoT device 106 may include a Java virtual machine (JVM) 116 to execute the computing task 104 outsourced 110 from the mobile device – ¶ 0028 -a user of the proximate smart phone 212 may have enabled the device to accept workloads from nearby devices. This may be done in exchange for later processing of offloading computing workloads by the user of the proximate smart phone 212 – ¶ 0035 - The method 400 begins when a user device operating system (OS) 402 polls 404 sensors 406 for new data. The sensor data 408 is returned to the user device OS 402, which may then store 410 the sensor data 408 in a device storage 412. A device application 414 may then read 416 data 418 from the device storage 412 – ¶ 0040 -The OCP stack 422 then obtains 444 the data and code from the device application 414. The data and code from the device application 414 are then sent to the OCP enabled IoT device 106 in a message 446. The OCP enabled IoT device 106 completes the computations, and returns a message 448 to the OCP stack 422 with the processed data – ¶ 0062 -The code in the OCP extensions 530 in an offloading OCP enabled device 500 may create an OCP bundle 534, which includes the information required by the processing device to execute the task. The information may include a return IP address, to which the IP task results should be sent. The code for performing the offload task, such as the Java code to be executed on the JVM in the receiving OCP enabled device 500).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Melkote to include the teachings of O’Hare. The motivation for doing so is to allow applications to run flexibly across different hardware.
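The offloading arrangement relied upon from O’Hare can be pictured with the following minimal sketch, in which a virtual runtime environment runs as a background worker on the companion device and both the storing of the peripheral data and the offloaded computation occur inside that runtime. The sketch is illustrative only; it is not O’Hare's code, and all names are hypothetical.

import queue
import threading

class VirtualRuntime:
    # Stand-in for a virtual runtime environment running as a background
    # process on the companion device; storing of peripheral data and the
    # offloaded computation both occur inside the runtime.
    def __init__(self):
        self.stored_peripheral_data = []
        self.tasks = queue.Queue()
        self.results = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            func, peripheral_data = self.tasks.get()
            self.stored_peripheral_data.append(peripheral_data)  # storing inside the runtime
            self.results.put(func(peripheral_data))

    def offload(self, func, peripheral_data: dict):
        # Hand a computing task and its peripheral data to the runtime and
        # wait for the result (cf. O'Hare's offloaded tasks with returned results).
        self.tasks.put((func, peripheral_data))
        return self.results.get()

runtime = VirtualRuntime()
result = runtime.offload(lambda d: sum(d.values()), {"yaw": 0.10, "pitch": 0.02})
print(result, runtime.stored_peripheral_data)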
Regarding claim 26,
Melkote does not explicitly teach
wherein the companion device includes a virtual runtime environment, and the storing of the data associated with the peripheral device is included in the virtual runtime environment.
However, O’Hare teaches
wherein the companion device includes a virtual runtime environment, and the storing of the data associated with the peripheral device is included in the virtual runtime environment (¶ 0018 - The IoT device 106 may execute the computing task 104 and return 112 the results 114 to the mobile device 102. The IoT device 106 may include a Java virtual machine (JVM) 116 to execute the computing task 104 outsourced 110 from the mobile device – ¶ 0028 -a user of the proximate smart phone 212 may have enabled the device to accept workloads from nearby devices. This may be done in exchange for later processing of offloading computing workloads by the user of the proximate smart phone 212 – ¶ 0035 - The method 400 begins when a user device operating system (OS) 402 polls 404 sensors 406 for new data. The sensor data 408 is returned to the user device OS 402, which may then store 410 the sensor data 408 in a device storage 412. A device application 414 may then read 416 data 418 from the device storage 412 – ¶ 0040 -The OCP stack 422 then obtains 444 the data and code from the device application 414. The data and code from the device application 414 are then sent to the OCP enabled IoT device 106 in a message 446. The OCP enabled IoT device 106 completes the computations, and returns a message 448 to the OCP stack 422 with the processed data – ¶ 0062 -The code in the OCP extensions 530 in an offloading OCP enabled device 500 may create an OCP bundle 534, which includes the information required by the processing device to execute the task. The information may include a return IP address, to which the IP task results should be sent. The code for performing the offload task, such as the Java code to be executed on the JVM in the receiving OCP enabled device 500).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Melkote to include the teachings of O’Hare. The motivation for doing so is to allow applications to run flexibly across different hardware.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOUNES NAJI whose telephone number is (571)272-2659. The examiner can normally be reached Monday - Friday 8:30 AM -5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Oscar A Louie can be reached at (571) 270-1684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YOUNES NAJI/Primary Examiner, Art Unit 2445