DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office correspondence is in response to application number 18/803462, filed on August 13, 2024.
Claims 1–20 are pending.
Authorization for Internet Communications
The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03):
“Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.”
Please note that the above statement can only be submitted via Central Fax (not Examiner's Fax), Regular postal mail, or EFS Web using PTO/SB/439.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on December 10, 2024 was filed after the filing date of the application on August 13, 2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
35 USC § 101 Analysis – Judicial Exception
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The claimed invention is directed to statutory subject matter and is not rejected under 35 U.S.C. 101 on the basis of a judicial exception. The claimed subject matter is integrated into a practical application under Prong Two of the Step 2A analysis as documented in MPEP 2106.04(d). The claims are directed to non-abstract improvements in computer-related technology. A claim is non-statutory when it is directed to a judicial exception (e.g., a mathematical concept, a mental process, or a certain method of organizing human activity) without significantly more. The claimed invention is not directed to a judicial exception. Instead, the claimed invention is directed to a technological improvement for facilitating seamless split rendering session relocation when the quality of a rendered viewport for eXtended Reality (XR) media content from a current split rendering server becomes degraded. The claimed process comprises: monitoring, during a time period, one or more metrics associated with a split rendering session, the split rendering session being provided by a current split rendering server, wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client by the current split rendering server, and wherein the one or more metrics comprise one or more of a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) of the current split rendering server, or a metric associated with the XR media content stream; determining, based on a duration of occurrence of the one or more metrics exceeding a pre-determined duration threshold during the time period or a magnitude of the one or more metrics exceeding one or more respective pre-determined metric thresholds, to relocate the split rendering session from the current split rendering server to a new split rendering server; and providing, to one or more of the current split rendering server or the new split rendering server, an indication to relocate the split rendering session from the current split rendering server to the new split rendering server. The ordered steps of the claim language provide a solution to a problem in the art, namely the lack of procedures for seamless offloading of split rendering tasks to split rendering servers under certain challenging scenarios. Therefore, the claims are statutory under 35 U.S.C. 101.
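For illustration only (forming no part of the claims or of the record), the monitoring and threshold determination recited above can be sketched as follows; all identifiers, metric names, and threshold values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class RelocationMonitor:
    """Illustrative sketch of the claimed threshold logic (names hypothetical)."""
    # Soft per-metric thresholds; sustained exceedance triggers relocation.
    soft_thresholds: dict
    # Hard per-metric thresholds; exceeding the magnitude triggers immediately.
    hard_thresholds: dict
    # Pre-determined duration threshold (seconds) for sustained degradation.
    duration_threshold: float
    # Internal: first time each metric was observed above its soft threshold.
    _since: dict = field(default_factory=dict)

    def observe(self, now: float, samples: dict) -> bool:
        """Return True when the session should relocate to a new server."""
        for name, value in samples.items():
            hard = self.hard_thresholds.get(name)
            if hard is not None and value > hard:
                return True                      # magnitude branch of the claim
            soft = self.soft_thresholds.get(name)
            if soft is None:
                continue
            if value <= soft:
                self._since.pop(name, None)      # metric recovered; reset clock
                continue
            start = self._since.setdefault(name, now)
            if now - start >= self.duration_threshold:
                return True                      # duration-of-occurrence branch
        return False
```

A single monitor instance would be fed periodic QoS/QoE/KPI samples; relocation is indicated either immediately on a large magnitude excursion or once a smaller excursion has persisted past the duration threshold.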
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1–2, 4–14, and 16–20 are rejected under 35 U.S.C. 103 as being unpatentable over Stockhammer et al. (U.S. 2024/0273829 A1; herein referred to as Stockhammer) in view of Stoica et al. (U.S. 2023/0199198 A1; herein referred to as Stoica) and further in view of Takabayashi et al. (U.S. 2024/0054009 A1; herein referred to as Takabayashi).
In regard to claim 1, Stockhammer teaches An apparatus for a communication network (see ¶ [0009] “ . . . the apparatus is part of, and/or includes a wearable device, an extended reality device (e.g., a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality (MR) device), a head-mounted display (HMD) device, a wireless communication device . . .”), the apparatus comprising (see ¶ [0006] “ . . . an apparatus for XR management is provided that includes at least one memory and at least one processor coupled to the at least one memory . . .”):
at least one processor (see Fig. 1, image processor 150, Host Processor 152); and
at least one memory (see Fig. 1 ROM 145) storing instructions of a real-time communication application function (RTC-AF) (see Fig. 2 ¶ [0050] “ . . . FIG. 2 is a block diagram illustrating an example architecture of an extended reality (XR) system 200. In some examples, the XR system 200 of FIG. 2 is, or includes, an XR system with a Media Capabilities for Augmented Reality (MeCAR) architecture, an EDGe-dependent Augmented Reality (EDGAR) architecture, another architecture discussed herein, or a combination thereof. . . .”), wherein the instructions, when executed by the at least one processor (see Fig. 1 ¶ [0049] “ . . . While the image capture and processing system 100 is shown to include certain components, one of ordinary skill will appreciate that the image capture and processing system 100 can include more components than those shown in FIG. 1. The components of the image capture and processing system 100 can include software, hardware, or one or more combinations of software and hardware. For example, in some implementations, the components of the image capture and processing system 100 can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, GPUs, DSPs, CPUs, and/or other suitable electronic circuits), and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein. The software and/or firmware can include one or more instructions stored on a computer-readable storage medium and executable by one or more processors of the electronic device implementing the image capture and processing system 100 . . .”) , cause the apparatus to perform at least: monitoring, during a time period (see Fig. 11, ¶ [0139] “ . . . 
dynamically providing the metrics and information of the signal quality on the tethering communication interface 1132, the audio and video decoding capabilities of the XR interface device 1102, the security capabilities of the XR interface device 1102, the security framework and capabilities of the device, statically and dynamically the bitrate and delay of the tethering communication interface 1132. The XR system can support usage of provided static and dynamic information, for example in XR management system 1104, to select the appropriate content formats, bitrates and qualities, possibly in a dynamic fashion, to negotiate with the XR processing device 1106 to support the appropriate content formats, bitrates and qualities, in some cases in a dynamic fashion, with split rendering in which the formats are provided from the rendering by the XR processing device 1106. . . .”), one or more metrics associated with a split rendering session (see Fig. 2 , ¶ [0054] “ . . . The XR system 200 is configured for split rendering, and thus at least a subset of processing tasks are to be performed by an XR management system and/or an external XR processing device (e.g., edge node, remote server) that is coupled to the XR system 200 via the network system 278, and/or that is part of the network system 278 (e.g., as in an edge node). For instance, the sensor data captured by the XR system (e.g., as discussed above), the user input(s) 248, and/or associated metadata are collected by the XR system 200. The XR runtime API 202 and the XR source management subsystem 244 collects and conveys these as XR media and metadata 252 (e.g., including image(s), input(s), pose, etc.) to a media access function (MAF) subsystem 276. 
The MAF subsystem 276 encodes, compresses, and/or encrypts the XR media and metadata 252 (e.g., using metadata codecs 290, video codecs 292, and/or audio codecs 294) to form uplink compressed media 272 that is sent out to the external XR processing device (e.g., edge node, remote server). . . “) the split rendering session being provided by a current split rendering server (e.g. remote server as external XR processing device) (see ¶ [0054] “ . . . The XR processing device decrypts, decompresses, decodes, and/or processes the uplink compressed media 272 to generate XR content. For instance, in some examples, the XR processing device adds virtual content to generate the XR content. The XR processing device encodes, compresses, and/or encrypts the resulting XR content, which is received via the network system 278 as downlink compressed media 274. The MAF subsystem 276 decrypts, decompresses, and/or decodes the downlink compressed media 274 to extract prerendered media 258 (e.g., 2D media, 2.5D media, and/or pose-dependent media), a scene description 256, or a combination thereof. . . .”),
wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client by the current split rendering server (see Fig. 7 ¶ [0098] “ . . . The XR system 700 is configured for split rendering, and thus at least a subset of processing tasks are to be performed by the XR management system 704 and/or an external XR processing device (e.g., edge node, remote server) that is coupled to the XR system 700 via the cellular network subsystem 776 of the XR management system 704, and/or that is part of the cellular network subsystem 776 (e.g., an edge node). For instance, the sensor data captured by the XR interface device 702 (e.g., as discussed above) and/or associated metadata is sent from the XR interface device 702 to the XR management system 704, for instance using the XR runtime API 740. The XR runtime API 740 also receives the user input(s) 748 and/or associated metadata. These inputs to the XR runtime API 740 are collected and/or combined and sent as media and/or sensor data 752 to the media access functions subsystem 760 of the XR management system 704. The media access functions subsystem 760 encodes, compresses, and/or encrypts the media and/or sensor data 752 to generate uplink media 772 that is sent to the external XR processing device through the cellular network subsystem 776 for further processing. The XR processing device decrypts, decompresses, decodes, and/or processes the uplink media 772 to generate XR content. For instance, in some examples, the XR processing device adds virtual content to generate the XR content. The XR processing device encodes, compresses, and/or encrypts the resulting XR content, which is received via the cellular network subsystem 776 as downlink compressed media 774. 
The MAF subsystem 760 decrypts, decompresses, and/or decodes the downlink compressed media 774 to extract primitives buffers 758 (e.g., XR content), a scene description 756, or a combination thereof. . . .”),
Stockhammer fails to explicitly teach,
However, Stoica teaches
wherein the one or more metrics (see Fig. 12 ¶ [0206] “ . . . , the processor 1205 optimizes the plurality of information-to-importance functional kernels and associated graphical directed information flow weights using statistical learning based on a generic set of training data of video coded sequences to at least one of minimize expected metric and maximize expected metric of visual rendering quality based on the plurality of feature sets associated with each of the plurality of NAL units of the video coded stream . . “) comprise one or more of:
a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) of the current split rendering server, or a metric associated with the XR media content stream (see Fig.2 ¶ [0070] “ . . . FIG. 2 depicts a split-rendering architecture for mobile networks based on an edge/cloud video application server and an XR UE device 203. The device 203 is connected to a radio access network 208, which is in turn connected to the application server 202 via a core network 205. The application server 202 may deliver XR media based on local XR processed content or on remote XR processed content. The processing may account for and/or further process tracking and sensing information as uplinked by the XR UE device 203. The application server 202 streams the XR multimedia content via a content delivery gateway 210 to which the XR UE device 203 is connected via any real-time transport protocol. The XR device 203, after decoding the XR content received from the application server 202, may use its XR engine 212 and additional local hardware/software capabilities and/or XR pre-rendered content, and XR associated XR metadata to locally render the XR content on a display. . . “; see ¶ [0072] “ . . . , related to video coding domain, the interactivity involving these applications requires guarantees in terms of meeting packet error rate (“PER”) and packet delay budget (“PDB”) for the QoE of rendering the associated video streams at a UE. The video source jitter and wireless channel stochastic characteristics of mobile communications systems make the former challenging to meet especially for high-rate specific digital video transmissions, e.g., 4K, 3D video, 2×2K eye-buffered video, and/or the like. . . .”; see ¶ [0095] “ . . . The QoS associated with IP packets of the XR traffic is handled by the CN 404 via QoS flows 408 generated at the UPF within the established PDU session. 
This procedure is opaque to the RAN 406, which only manages the mapping of QoS flows 408 associated with the received IP packets to their corresponding DRBs given the QoS profile associated with the indicators of each QoS flow 408. In a 5GS for instance the QoS flows 408 will be characterized by the 5QI (see 3GPP Technical Specification TS 23.501 (V17.2.0—September 2021). System architecture for the 5G System (5GS); Stage 2 (Release 17)). . . .”) ,
It would have been obvious to one of ordinary skill in the art before the effective filing date of the applicant’s application to incorporate systems and methods for awareness of configurations in wireless networks when XR streams are processed using split rendering, by reviewing metrics that affect QoS for stream delivery and QoE for user experience, as taught by Stoica, into systems and methods of split rendering and pass-through compressed media formats for extended reality (XR) systems that perform the split rendering between a client and server, as taught by Stockhammer. Such incorporation provides a measuring tool for evaluating the performance of the split rendering function for the XR stream.
The combination of Stockhammer and Stoica fails to explicitly teach,
However, Takabayashi teaches
determining, based on a duration of occurrence of the one or more metrics exceeding a pre-determined duration threshold during the time period or a magnitude of the one or more metrics exceeding one or more respective pre-determined metric thresholds (see Fig. 1,¶ ¶ [0055-0057] “ . . . it is assumed that the optimal edge server 32 for performing each processing changes with movement of the camera 21 that is a video source and the terminal 22 consuming the content. For example, in a case where the position of the camera 21 is moved to the position of a camera 21′ on the video generation side, the optimal edge server 32 is changed from the edge server 32A to the edge server 32B. In this case, it is necessary to cause the media processing such as compression encoding of the baseband video stream performed in the edge server 32A to transition to the edge server 32B. Furthermore, for example, in a case where the position of the terminal 22 is moved to the position of the terminal 22′ on the content receiving side, the optimal edge server 32 is changed from the edge server 32C to the edge server 32D. In that case, it is necessary to cause the media processing such as transcoding of the content video to transition from the edge server 32C to the edge server 32D. . . “), to relocate the split rendering session from the current split rendering server (e.g. server 52A) to a new split rendering server (e.g. server 52B) (see Fig. 2,¶ ¶ [0061-0062] “ . . . FIG. 2 is a block diagram of a control processing system of the present disclosure that performs control to cause media processing to transition between different edge servers. A control processing system 50 of FIG. 2 includes a workflow management service 51 that manages media processing tasks and servers 52A and 52B. The workflow management service 51 is an application (program) executed on one or a plurality of servers, and manages a plurality of media processing tasks executed on one or a plurality of servers. 
For example, the workflow management service 51 performs task control to cause the media processing task being executed in the server 52A as the first server to transition to the server 52B as the second server ; and
providing, to one or more of the current split rendering server or the new split rendering server, an indication to relocate the split rendering session from the current split rendering server to the new split rendering server (see ¶ ¶ [0064-0067] “ . . . The workflow management service 51 determines the need for the task transition, and causes the media processing task 61A being executed in the server 52A to transition to the server 52B as necessary. Instead of the media processing task 61A of the server 52A, the media processing task 61B is activated and executed in the server 52B. The need for the task transition is determined on the basis of, for example, an event such as a change in the operating state of the server 52, a change in the network state, or a movement of the source 71 or the output destination 73. In a case of causing transition of the media processing task 61A of the server 52A, the workflow management service 51 notifies the media processing task 61A of the storage location information of internal state information to be continuously executed by the media processing task 61B of the server 52B. In the example of FIG. 2, state recovery information is stored in a persistent storage 62 included in the server 52A, and the workflow management service 51 notifies the persistent storage 62 of the server 52A as the storage location information of the internal state information. The persistent storage 62 is a storage unit capable of storing data of the media processing task 61 without depending on the execution state of the media processing task 61 such as activation and disappearance of a task, and includes a hard disk, a solid state drive (SSD), an erasable and programmable read only memory (EPROM), or the like. 
The internal state information stored in the persistent storage 62 is information necessary for recovering a state of temporarily interrupted processing, and is hereinafter also referred to as state recovery information, and is also referred to as a recovery object in the framework of the network-based media processing (NBMP) disclosed in Non-Patent Document 3. The media processing task 61A of the server 52A stores the state recovery information in the persistent storage 62 of the server 52A. Note that the persistent storage 62 in which the media processing task 61A of the server 52A stores the state recovery information may be the persistent storage 62 on the server 52 different from the server 52A that is executing the task. The media processing task 61B executed on the server 52B acquires the state recovery information from the persistent storage 62. Then, the media processing task 61B then uses the state recovery information to perform the same tasks as the media processing task 61A. The workflow management service 51 acquires a capability from each server 52, determines the need for the task transition, and selects the persistent storage 62 that stores the state recovery information. . . “).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the applicant’s application to incorporate systems and methods to acquire capabilities of a plurality of servers that are possible transition destinations in a case where a media processing task executed in a first server that is one of the plurality of servers is caused to transition to a second server different from the first server, as taught by Takabayashi, into systems and methods of split rendering and pass-through compressed media formats for extended reality (XR) systems that perform the split rendering between a client and server, when XR streams are processed using split rendering by reviewing metrics that affect QoS for stream delivery and QoE for user experience, as taught by the combination of Stockhammer and Stoica. Such incorporation provides a means to relocate the XR stream to a different split rendering server.
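For illustration only (not drawn verbatim from any cited reference), the task transition with state recovery summarized above from Takabayashi can be sketched as follows; the class and function names are hypothetical stand-ins for the workflow management service 51, the persistent storage 62, and the media processing tasks 61A/61B:

```python
class PersistentStorage:
    """Hypothetical stand-in for Takabayashi's persistent storage 62."""
    def __init__(self):
        self._data = {}

    def save(self, key, state):
        self._data[key] = state

    def load(self, key):
        return self._data[key]


class RenderTask:
    """A media-processing task that can checkpoint and resume its state."""
    def __init__(self, server, storage, key="session-state"):
        self.server, self.storage, self.key = server, storage, key
        self.frames_rendered = 0

    def render(self, n):
        self.frames_rendered += n

    def checkpoint(self):
        # Store state recovery information in the persistent storage.
        self.storage.save(self.key, {"frames_rendered": self.frames_rendered})

    def resume(self):
        # Recover the interrupted state from the persistent storage.
        self.frames_rendered = self.storage.load(self.key)["frames_rendered"]


def relocate(task, new_server, storage):
    """Transition a task to new_server, continuing from the stored state."""
    task.checkpoint()                 # transition-source task persists its state
    successor = RenderTask(new_server, storage, task.key)
    successor.resume()                # transition-destination task recovers it
    return successor
```

The sketch mirrors the sequence in the reference: the source task writes state recovery information, the workflow manager activates the same task on the destination server, and the destination task reads that state and continues.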
In regard to claim 2, the combination of Stockhammer, Stoica, and Takabayashi teaches wherein the QoS metric is selected from among: a packet loss rate, a bit rate, a bandwidth, a latency, a variance in latency, a throughput, a transmission delay, an availability, or a jitter (see Stoica ¶ [0040] “ . . . Emerging applications such as augmented reality (“AR”)/virtual reality (“VR”)/extended reality (“XR”), cloud gaming (“CG”), device remote tele-operation (e.g., vehicle tele-operation, robot arms tele-operation etc.), 3D video conferencing, smart remote education, or the like are expected to drive increase in video traffic. Even though the foregoing applications may require different quantitative constraints and configurations in terms of rate, reliability, latency, and quality of service (“QoS”), it is expected that such constraint sets will challenge current and future communications networks in delivering a high-fidelity quality of experience (“QoE”) at ever increasing resolutions. . . .”; see Stoica ¶¶ [0071-0072] “ . . . The video application server 202 is used therefore to process, encode, transcode, and/or serve local 204 or remote 206 video content pertaining to an AR/XR/CG/tele-operation application session to the XR UE 203. The video application server 202 may, as a result, encode/transcode and control the video viewport content and transmit it in downlink to the RAN 208 based on UE specific parameters, configurations and sensing inputs that may affect the rendering perspective, rate, quality, panning, etc. This general architecture is expected to leverage the advantages of various compute and network domains (e.g., cloud, edge, smart handsets/headsets) to enable scalable AR/XR/CG/tele-operation applications and use cases with low-latency, high rate, and efficient energy usage. 
The architecture is as such universally applicable both to split rendering with asynchronous time warping devices, e.g., where the video application server encodes a rasterized pre-processed viewport representation to aid the UE, or to split rendering with viewport rendering at the device side, e.g., where the video viewport may be completely or partially rendered at the device side given the media encoded video content and its corresponding metadata available. In one embodiment, related to video coding domain, the interactivity involving these applications requires guarantees in terms of meeting packet error rate (“PER”) and packet delay budget (“PDB”) for the QoE of rendering the associated video streams at a UE. The video source jitter and wireless channel stochastic characteristics of mobile communications systems make the former challenging to meet especially for high-rate specific digital video transmissions, e.g., 4K, 3D video, 2×2K eye-buffered video, and/or the like. . . .”).
The motivation to combine the references is described for the rejection of claim 1 and is incorporated herein. Additionally, Stoica evaluates QoS and QoE using various performance characteristics known in the art.
In regard to claim 4, the combination of Stockhammer, Stoica, and Takabayashi teaches wherein the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: providing, to the split rendering client (e.g., output destination), an indication that a relocation procedure has been initiated (see Takabayashi ¶ [0063] “ . . . A media processing task 61 transitioned between the server 52A and the server 52B performs, for example, transcoding and segment generation of content videos. A media processing task 61A executed on the server 52A transcodes a video stream as a source 71 acquired via a media FIFO 72, generates segments, and transmits the segments to an output destination 73A. After transitioning to the server 52B, a media processing task 61B executed on the server 52B transcodes the video stream, which is the source 71 obtained via the media FIFO 72, generates the segment, and transmits the segment to an output destination 73B. The output destinations 73A and 73B may be one and the same device, for example, a terminal or the like, or may be different devices, for example, edge servers or the like. The segment is data obtained by converting a video stream into a file every several seconds to about 10 seconds . . .”).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 1. Additionally, Takabayashi provides a manner of notifying the client device that the split rendering has been transitioned to a new server.
In regard to claim 5, the combination of Stockhammer, Stoica, and Takabayashi teaches wherein the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: providing a request for the split rendering client to provide subsequent pose information and user input information to both the current split rendering server and the new split rendering server (see Stockhammer ¶¶ [0052-0053] “ . . . The XR runtime subsystem 212 of the XR system 200 performs various runtime functions 216 such as pose tracking (e.g., of the pose of the XR system 200 and/or of the user), eye tracking (e.g., of the user's eyes), hand tracking (e.g., of the user's hands), body tracking (e.g., of the user's body), feature tracking (e.g., of features in the scene and/or the user), object tracking (e.g., of objects in the scene and/or parts of the user), face tracking (e.g., of the user's face and/or other people's faces in the scene), simultaneous localization and mapping (SLAM), or a combination thereof. Pose refers to location (e.g., longitude, latitude, altitude), orientation (e.g., pitch, yaw, roll), or a combination thereof. Pose can be tracked across 3 degrees of freedom (3DoF), 6 degrees of freedom (6DoF), or another range. The XR runtime subsystem 212 of the XR system 200 includes a visual composition subsystem 214 that couples to an eye buffer display 220, which may be a buffer for one or more display(s) (e.g., display(s) 340, display 440) directed toward the eye(s) of the user. The XR runtime subsystem 212 of the XR system 200 includes an audio subsystem 218 that receives audio from the microphone(s) 226 and/or that outputs audio to speaker(s) 224. The speaker(s) 224 can include loudspeakers, headphones, earbuds, other audio output devices, or combinations thereof. The XR system 200 includes an XR application 246 that can receive user input(s) 248 via an input interface of the XR system 200. 
The input(s) 248 can be passed to an XR runtime application programming interface (API) 202, an XR source management subsystem 244, a scene manager 238 and/or presentation engine 242, a media access function (MAF) API 254, and/or a network system 278. In some examples, the XR application 246 is a video game, or is associated with a video game, with the user input(s) 248 for instance including input(s) to the video game (e.g., controller inputs) that can impact what virtual content is rendered in in the XR content to be shown to the user (e.g., via the eye buffer display 220). . . .”)
In regard to claim 6, the combination of Stockhammer, Stoica, and Takabayashi teaches wherein the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: sending, to one of the current split rendering server or the new split rendering server, a request to send, to the split rendering client, an indication that the split rendering session will be relocated from the current split rendering server to the new split rendering server (see Takabayashi ¶ [0090] “ . . . the workflow management service 51 acquires the capabilities of the plurality of servers 52 as the possible transition destinations, and acquires in advance the data transfer speed (throughput) and the delay time (latency) between each of the servers 52 as the possible transition destinations and the persistent storage 62 in which the state recovery information is stored. Then, in view of the data capacity of the state recovery information a notification of which is given from the media processing task 61A being executed on the transition source server 52A, the workflow management service 51 instructs the transition destination server 52B to seamlessly continue the processing from the position of stop of the processing of the media processing task 61A of the transition source server 52A or to start the processing from an alternative start point without continuing the processing . . .”).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 1. Additionally, Takabayashi provides the instruction to the servers for the relocation of the split-rendering.
In regard to claim 7, the combination of Stockhammer, Stoica, and Takabayashi teaches wherein the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: receiving, from the current split rendering server, first XR media content (e.g., media processing task 61A) for a first viewport associated with the XR media content stream (see Takabayashi Fig. 3, ¶ [0083] “ . . . In step S12, the workflow management service 51 detects the need to cause the media processing task 61A being executed to transition to another server 52. The workflow management service 51 can detect the need for the task transition on the basis of, for example, a monitoring result of the state of the server 52, detection of physical movement of the source 71 (media input source) to the media processing task 61 or the output destination 73A. . . .”);
receiving, from the new split rendering server, second XR media content (e.g. processing task 61B) for the first viewport associated with the XR media content stream (see Takabayashi Fig. 3, ¶¶ [0084-0085] “ . . . In step S13, the workflow management service 51 acquires the capabilities of the plurality of servers 52 as possible transition destinations, and selects an appropriate server 52 on the basis of the acquired capabilities of the possible transition destinations. The selection of an appropriate server 52 takes into consideration the throughput and latency between the server and the storage included in the capability of the server 52. In the present embodiment, the server 52B is selected as the appropriate server 52. In step S14, the workflow management service 51 activates the same task as the media processing task 61A being executed in the transition source in advance on the selected server 52B. Thus, the media processing task 61B is activated on the server 52B . . .”); and
determining, based on the first XR media content and the second XR media content, to relocate the split rendering session from the current split rendering server to the new split rendering server (see Takabayashi Fig. 3,¶ [0088] “ . . . In step S17, the workflow management service 51 determines whether or not seamless processing continuation is possible in the transition destination server 52B on the basis of the throughput, the latency, and the like between the persistent storage 62 in which the state recovery information is stored and the transition destination server 52B. In a case where it is determined that the seamless processing continuation is possible, the workflow management service 51 gives a notification of the state recovery information storage location information indicating the storage location of the state recovery information, that is, the location of the persistent storage 62, and gives an instruction to continue the processing. On the other hand, in a case where it is determined that the seamless processing continuation is not possible, the workflow management service 51 gives a notification of the state recovery information storage location information, and gives an instruction not to perform the continuation processing. . . .”).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 1. Additionally, Takabayashi enables workflow management to make a determination of which split rendering server should be optimal for the XR stream.
In regard to claim 8, the combination of Stockhammer, Stoica, and Takabayashi teaches wherein the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: indicating, to an application provider (AP) (e.g. media processing task) in the communication network, that the split rendering session will be relocated from the current split rendering server to the new split rendering server (see Takabayashi ¶¶ [0063-0064] “ . . . A media processing task 61 transitioned between the server 52A and the server 52B performs, for example, transcoding and segment generation of content videos. A media processing task 61A executed on the server 52A transcodes a video stream as a source 71 acquired via a media FIFO 72, generates segments, and transmits the segments to an output destination 73A. After transitioning to the server 52B, a media processing task 61B executed on the server 52B transcodes the video stream, which is the source 71 obtained via the media FIFO 72, generates the segment, and transmits the segment to an output destination 73B. The output destinations 73A and 73B may be one and the same device, for example, a terminal or the like, or may be different devices, for example, edge servers or the like. The segment is data obtained by converting a video stream into a file every several seconds to about 10 seconds. The workflow management service 51 determines the need for the task transition, and causes the media processing task 61A being executed in the server 52A to transition to the server 52B as necessary. Instead of the media processing task 61A of the server 52A, the media processing task 61B is activated and executed in the server 52B. The need for the task transition is determined on the basis of, for example, an event such as a change in the operating state of the server 52, a change in the network state, or a movement of the source 71 or the output destination 73 . . .”).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 1. Additionally, Takabayashi provides a process for informing the application provider of the transition to a new split-rendering server.
In regard to claim 9, the combination of Stockhammer, Stoica, and Takabayashi teaches wherein the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: requesting that the new split rendering server perform at least a portion of XR media content rendering for the split rendering session (see Takabayashi ¶ ¶ [0056-0057] “ . . . in a case where the position of the camera 21 is moved to the position of a camera 21′ on the video generation side, the optimal edge server 32 is changed from the edge server 32A to the edge server 32B. In this case, it is necessary to cause the media processing such as compression encoding of the baseband video stream performed in the edge server 32A to transition to the edge server 32B. Furthermore, for example, in a case where the position of the terminal 22 is moved to the position of the terminal 22′ on the content receiving side, the optimal edge server 32 is changed from the edge server 32C to the edge server 32D. In that case, it is necessary to cause the media processing such as transcoding of the content video to transition from the edge server 32C to the edge server 32D . . .”).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 1. Additionally, Takabayashi provides a means to immediately transition to a different split-rendering server.
In regard to claim 10, the combination of Stockhammer, Stoica, and Takabayashi teaches wherein the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: determining that the AP has caused the current split rendering server to perform at least the portion of the XR media content rendering for the split rendering session (see Takabayashi ¶ ¶ [0084-0085] “ . . . In step S13, the workflow management service 51 acquires the capabilities of the plurality of servers 52 as possible transition destinations, and selects an appropriate server 52 on the basis of the acquired capabilities of the possible transition destinations. The selection of an appropriate server 52 takes into consideration the throughput and latency between the server and the storage included in the capability of the server 52. In the present embodiment, the server 52B is selected as the appropriate server 52. In step S14, the workflow management service 51 activates the same task as the media processing task 61A being executed in the transition source in advance on the selected server 52B. Thus, the media processing task 61B is activated on the server 52B . . .”); and
providing, to the AP, an indication that the current split rendering server is to be un-provisioned (see Takabayashi ¶¶ [0086-0088] “ . . . In step S15, the workflow management service 51 instructs the media processing task 61A being executed (transition source) to store the state and stop. In step S16, the media processing task 61A of the server 52A stores the state recovery information (recovery object) necessary for continuously executing the processing in the persistent storage 62 designated in step S11, and notifies the workflow management service 51 of the stop of the processing and the data capacity of the state recovery information. In step S17, the workflow management service 51 determines whether or not seamless processing continuation is possible in the transition destination server 52B on the basis of the throughput, the latency, and the like between the persistent storage 62 in which the state recovery information is stored and the transition destination server 52B. In a case where it is determined that the seamless processing continuation is possible, the workflow management service 51 gives a notification of the state recovery information storage location information indicating the storage location of the state recovery information, that is, the location of the persistent storage 62, and gives an instruction to continue the processing. On the other hand, in a case where it is determined that the seamless processing continuation is not possible, the workflow management service 51 gives a notification of the state recovery information storage location information, and gives an instruction not to perform the continuation processing. . . .”).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 1. Additionally, Takabayashi provides means to stop the old server from processing split-rendering of the stream in favor of the new server.
In regard to claim 11, the combination of Stockhammer, Stoica, and Takabayashi teaches wherein the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: determining one or more characteristics of the current and new split rendering servers (see Takabayashi ¶¶ [0067-0070] “ . . . The workflow management service 51 acquires a capability from each server 52, determines the need for the task transition, and selects the persistent storage 62 that stores the state recovery information. The capability is information indicating the performance and function of the server 52 itself, and for example, the following information (parameters) can be acquired as the capability. [0069] resource-availabilities: indicates the availability of resources of the server 52. Example: {“vcpu”, 4, 30}: there are four vcpu, and 30% of them is available. [0071] placement: geographic location of the server 52 [0072] location: location on the network of the server 52 (URL, IP address, or the like) [0073] functions: a list (array) of Function Description of functions executable by the server 52 [0074] connectivity: connection performance with another server 52 [0075] persistency-capability: whether or not the storage unit provided by the server 52 is persistent (held without depending on the state of the task) (true/false) [0076] secure-persistency: whether or not the data transfer of the persistent storage is secure (true/false) [0077] persistence-storage-url: location of persistent storage (URL) . . .”);
determining, based at least on the one or more characteristics of the current and new split rendering servers, a time for completion (e.g. delay time between each of the servers) of relocation of the split rendering session from the current split rendering server to the new split rendering server (see Takabayashi ¶ [0090] “ . . . the workflow management service 51 acquires the capabilities of the plurality of servers 52 as the possible transition destinations, and acquires in advance the data transfer speed (throughput) and the delay time (latency) between each of the servers 52 as the possible transition destinations and the persistent storage 62 in which the state recovery information is stored. Then, in view of the data capacity of the state recovery information a notification of which is given from the media processing task 61A being executed on the transition source server 52A, the workflow management service 51 instructs the transition destination server 52B to seamlessly continue the processing from the position of stop of the processing of the media processing task 61A of the transition source server 52A or to start the processing from an alternative start point without continuing the processing. . . .”) ; and
sending, to the current split rendering server and the new split rendering server, an indication that the new split rendering server is to relocate the split rendering session from the current split rendering server to the new split rendering server by the time for completion of relocation of the split rendering session (see Takabayashi Fig. 5, ¶¶ [0111-0116] “ . . . The lower table in FIG. 5 illustrates details of the forward and return objects indicating specific configuration examples of the items forward and return. Each of the forward and return objects has items of min-delay, max-throughput, and averaging-window. min-delay represents a minimum delay time of data transfer. The unit is millisecond. max-throughput represents a maximum rate of data transfer. The unit is bits per second (bits per second). averaging-window represents the length of an averaging window when the maximum speed of data transfer is calculated. The unit is microsecond. In step S55, the workflow management service 51 acquires the MPE Capabilities Description Document for the plurality of servers 52 as possible transition destinations, and acquires connection performance with another server 52 indicated by the connectivity parameter included in the MPE Capabilities Description Document. Then, the workflow management service 51 selects an appropriate transition destination on the basis of the acquired connectivity parameter. Here, the transition destination server 52B is selected as an appropriate transition destination. . . .”).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 1. Additionally, Takabayashi provides a decision process for the transition to the new server.
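For illustration only (not part of any cited reference; the function name and all values below are hypothetical), the time-for-completion estimate described in Takabayashi ¶ [0090] — which is driven by the data capacity of the state recovery information and the throughput and latency between the candidate transition server and the persistent storage — can be sketched as:

```python
# Hypothetical sketch of the completion-time estimate suggested by
# Takabayashi [0090]: transfer time of the state recovery information
# plus the link latency to the candidate transition server.

def estimate_relocation_time(state_bytes, throughput_bps, latency_s):
    """Return an estimated completion time (seconds) for moving the
    state recovery information to a candidate transition server."""
    transfer_s = (state_bytes * 8) / throughput_bps  # bits / bits-per-second
    return latency_s + transfer_s

# Example: 50 MB of state over a 100 Mbit/s link with 20 ms latency.
t = estimate_relocation_time(50_000_000, 100_000_000, 0.020)
print(round(t, 3))  # 4.02
```

A workflow manager comparing such estimates across candidate servers could then decide, as in the cited passage, whether seamless continuation is possible or processing must restart from an alternative start point.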
In regard to claim 12, the combination of Stockhammer, Stoica, and Takabayashi teaches wherein the sending of the indication comprises sending an indication of the time for completion of relocation of the split rendering session relative to an XR runtime or relative to a relocation frame identifier indicating a particular frame from a sequence of frames in the XR media content stream by which time the relocation of the split rendering session is to be completed (see Takabayashi Fig. 8, ¶¶ [0147-0162] “ . . . FIG. 8 illustrates examples of scheme ID dependent data in a case where the media processing task indicated by the scheme ID is video encoding processing, and in a case where the media processing task is video encoding/transcoding processing and segment generation processing of video. In a case where the media processing task indicated by the scheme ID is video encoding processing, the following data is stored in the recovery object as scheme ID dependent data. alternative_input_reference: reference information of input data to be acquired next at the start of processing in a case where processing cannot be continued seamlessly [0149] encoding_structure: a picture reference structure of group of picture (GOP)/coded video sequence (cvs) at the time of encoding (for example, the number of the input order is rearranged in the output order) input_picture_reference: reference information (for example, pointer information for a FIFO buffer, such as the media FIFO 72) of input data to be acquired next at a time of continuing processing output_data_reference: reference information of the output data (for example, the sequence number in GOP/cvs of the output last picture/frame) num_reference_data Number of reference pictures necessary for continuing processing reference_data_offset[ ]: offset of each reference picture encoded_reference_data: data of each reference picture necessary for continuing the processing (the number of reference pictures) The data of each
reference picture is indicated by a byte offset value from the head of the entire picture data. In a case where the media processing task indicated by the scheme ID includes segment generation processing, the following data is stored in the recovery object as scheme ID dependent data. alternative_input_reference: reference information of input data to be acquired next at the start of processing in a case where processing cannot be continued seamlessly encoding_structure: a picture reference structure of GOP (group of picture)/cvs (coded video sequence) of the segment (for example, the number of the input order is rearranged in the output order). input_data_reference: reference information (for example, pointer information for a FIFO buffer, such as the media FIFO 72) of input data to be acquired next at a time of continuing processing segment_header: segment header being generated (including moof box (Movie Fragment Box) of MP4) encoded_data: already-encoded data (the number of generated samples and sample data in a segment corresponding to mdat (Media Data Box)) The segment_header and the encoded_data correspond to processed data in the segment . . .”).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 1. Additionally, Takabayashi provides an indication of the frame of the XR data at which the transition occurs.
In regard to claim 13, Stockhammer teaches A method (see ¶ [0002] “ . . . this application relates to systems and methods of split rendering and pass-through compressed media formats for extended reality (XR) systems . . .”) comprising:
monitoring, during a time period (see Fig. 11, ¶ [0139] as described for the rejection of claim 1 and is incorporated herein), by a real-time communication application function (RTC-AF) for a communication network (see Fig. 1, Fig. 2, ¶¶ [0049-0050] as described for the rejection of claim 1 and is incorporated herein), one or more metrics associated with a split rendering session (see Fig. 2, ¶ [0054] as described for the rejection of claim 1 and is incorporated herein), the split rendering session being provided by a current split rendering server (see Fig. 2, ¶ [0054] as described for the rejection of claim 1 and is incorporated herein),
wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client by the current split rendering server via the communication network (see Fig. 7 ¶ [0098] as described for the rejection of claim 1 and is incorporated herein),
Stockhammer fails to explicitly teach,
However Stoica teaches
wherein the one or more metrics (see Fig. 12, ¶ [0206] as described for the rejection of claim 1 and is incorporated herein) comprise one or more of: a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) for the current split rendering server, or a metric associated with the XR media content stream (see Fig. 2, ¶ [0070], ¶ [0072], ¶ [0095] as described for the rejection of claim 1 and is incorporated herein),
The motivation to combine Stoica with Stockhammer is described for the rejection of claim 1 and is incorporated herein.
The combination of Stockhammer and Stoica fails to explicitly teach,
However Takabayashi teaches
determining, by the RTC-AF, based on a duration of occurrence of the one or more metrics exceeding a pre-determined duration threshold during the time period or a magnitude of the one or more metrics exceeding one or more respective pre-determined metric thresholds (see Fig. 1, ¶¶ [0055-0057] as described for the rejection of claim 1 and is incorporated herein), to relocate the split rendering session from the current split rendering server to a new split rendering server (e.g. server 52B) (see Fig. 2, ¶¶ [0061-0062] as described for the rejection of claim 1 and is incorporated herein); and
providing, by the RTC-AF, to one or more of the current split rendering server or the new split rendering server, an indication to relocate the split rendering session from the current split rendering server to the new split rendering server (see ¶¶ [0064-0067] as described for the rejection of claim 1 and is incorporated herein).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 1 and is incorporated herein.
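For illustration only (the function name, thresholds, and sample values below are hypothetical and not drawn from any cited reference), the two-pronged determining step recited in claim 13 — relocating when either the duration of occurrence of an out-of-bounds metric exceeds a pre-determined duration threshold during the time period, or the magnitude of a metric exceeds its respective pre-determined metric threshold — can be sketched as:

```python
# Hypothetical sketch of the claim 13 determining step: decide whether to
# relocate the split rendering session based on (a) how long a monitored
# metric stayed out of bounds during the monitoring window, or (b) whether
# any sample exceeded its magnitude threshold.

def should_relocate(samples, violation_level, duration_threshold_s,
                    magnitude_threshold, sample_period_s):
    """samples: metric values (e.g. latency in ms) taken every sample_period_s seconds."""
    # (a) duration test: total time spent above the violation level in the window
    violation_time = sum(sample_period_s for v in samples if v > violation_level)
    if violation_time > duration_threshold_s:
        return True
    # (b) magnitude test: any single sample exceeding the (higher) magnitude threshold
    return any(v > magnitude_threshold for v in samples)

# Example: latency sampled every 0.5 s; 2.0 s above 100 ms exceeds the 1.5 s budget.
flag = should_relocate([60, 130, 140, 135, 90, 150], violation_level=100,
                       duration_threshold_s=1.5, magnitude_threshold=200,
                       sample_period_s=0.5)
print(flag)  # True
```

Either prong alone suffices to trigger the relocation indication provided to the current or new split rendering server.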
In regard to claim 14, the combination of Stockhammer, Stoica and Takabayashi teaches wherein the QoS metric is selected from among: a packet loss rate, a bit rate, a bandwidth, a latency, a variance in latency, a throughput, a transmission delay, an availability, or a jitter (see Stoica ¶ [0040], ¶¶ [0071-0072] as described for the rejection of claim 2 and is incorporated herein).
The motivation to combine the references is described for the rejection of claim 2 and is incorporated herein.
In regard to claim 16, the combination of Stockhammer, Stoica, and Takabayashi teaches further comprising: providing, to the split rendering client (e.g. output destination), an indication that a relocation procedure has been initiated (see Takabayashi ¶ [0063] as described for the rejection of claim 4 and is incorporated herein).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 4.
In regard to claim 17, the combination of Stockhammer, Stoica, and Takabayashi teaches further comprising: providing a request for the split rendering client to provide subsequent pose information and user input information to both the current split rendering server and the new split rendering server (see Stockhammer ¶¶ [0052-0053] as described for the rejection of claim 5 and is incorporated herein).
In regard to claim 18, the combination of Stockhammer, Stoica, and Takabayashi teaches further comprising: sending, by the RTC-AF, to one of the current split rendering server or the new split rendering server, a request to send, to the split rendering client, an indication that the split rendering session will be relocated from the current split rendering server to the new split rendering server (see Takabayashi ¶ [0090] as described for the rejection of claim 6 and is incorporated herein).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 6.
In regard to claim 19, the combination of Stockhammer, Stoica, and Takabayashi teaches further comprising: receiving, at the RTC-AF, from the current split rendering server, first XR media content (e.g. processing task 61A) for a first viewport associated with the XR media content stream (see Takabayashi Fig. 3, ¶ [0083] as described for the rejection of claim 7 and is incorporated herein);
receiving, at the RTC-AF, from the new split rendering server, second XR media content (e.g. processing task 61B) for the first viewport associated with the XR media content stream (see Takabayashi Fig. 3, ¶¶ [0084-0085] as described for the rejection of claim 7 and is incorporated herein); and
determining, by the RTC-AF, based on the first XR media content (e.g. processing task 61A) and the second XR media content (e.g. processing task 61B), to relocate the split rendering session from the current split rendering server to the new split rendering server (see Takabayashi Fig. 3, ¶ [0088] as described for the rejection of claim 7 and is incorporated herein).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 7.
In regard to claim 20, Stockhammer teaches A non-transitory computer-readable medium storing instructions of a real-time communication application function that, when executed by at least one processor of an apparatus (see ¶ [0007] “ . . . a non-transitory computer-readable medium is provided that has stored thereon instructions that, when executed by one or more processors, cause the one or more processors to: receive sensor data from an XR interface device having at least one sensor; generate, based on the receipt of the sensor data from the XR interface device, processing instructions for an XR processing device to process the sensor data to generate XR content; send the sensor data and the processing instructions to the XR processing device; receive the XR content from the XR processing device; generate layer content; and send the XR content and the layer content to the XR interface device for the XR interface device to output the XR content and the layer content in a layered arrangement . . . .”) , cause an apparatus to perform:
monitoring, during a time period (see Fig. 11, ¶ [0139] as described for the rejection of claim 1 and is incorporated herein), one or more metrics associated with a split rendering session (see Fig. 2 , ¶ [0054] as described for the rejection of claim 1 and is incorporated herein), the split rendering session being provided by a current split rendering server (see Fig. 2 , ¶ [0054] as described for the rejection of claim 1 and is incorporated herein) ,
wherein the split rendering session is associated with an extended reality (XR) media content stream being provided to a split rendering client by the current split rendering server via the communication network (see Fig. 2 , ¶ [0054] as described for the rejection of claim 1 and is incorporated herein) ,
Stockhammer fails to explicitly teach,
However Stoica teaches
wherein the one or more metrics (see Fig. 12 ¶ [0206] as described for the rejection of claim 1 and is incorporated herein) comprise one or more of: a quality of service (QoS) metric, a quality of experience (QoE) metric, a key performance indicator (KPI) for the current split rendering server, or a metric associated with the XR media content stream (see Fig.2 ¶ [0070], ¶ [0072], ¶ [0095] as described for the rejection of claim 1 and is incorporated herein) ,
The motivation to combine Stoica with Stockhammer is described for the rejection of claim 1 and is incorporated herein.
The combination of Stockhammer and Stoica fails to explicitly teach,
However Takabayashi teaches
determining, based on a duration of occurrence of the one or more metrics exceeding a pre-determined duration threshold during the time period or a magnitude of the one or more metrics exceeding one or more respective pre-determined metric thresholds (see Fig. 1, ¶¶ [0055-0057] as described for the rejection of claim 1 and is incorporated herein), to relocate the split rendering session from the current split rendering server to a new split rendering server (e.g. server 52B) (see Fig. 2, ¶¶ [0061-0062] as described for the rejection of claim 1 and is incorporated herein); and
providing, to one or more of the current split rendering server or the new split rendering server, an indication to relocate the split rendering session from the current split rendering server to the new split rendering server (see ¶¶ [0064-0067] as described for the rejection of claim 1 and is incorporated herein).
The motivation to combine Takabayashi with the combination of Stockhammer and Stoica is described for the rejection of claim 1 and is incorporated herein.
Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Stockhammer et al. (U.S. 2024/0273829 A1; herein referred to as Stockhammer) in view of Stoica et al. (U.S. 2023/0199198 A1; herein referred to as Stoica) in further view of Takabayashi et al. (U.S. 2024/0054009 A1; herein referred to as Takabayashi) as applied to claims XX above, and in further view of Bouazizi et al. (U.S. 2025/0024095 A1; herein referred to as Bouazizi).
In regard to claim 3, the combination of Stockhammer, Stoica, and Takabayashi fails to explicitly teach,
However Bouazizi teaches wherein the instructions stored on the at least one memory, when executed by the at least one processor, further cause the apparatus to perform: managing a new WebRTC session established between the split rendering client and the new split rendering server (see Bouazizi Fig. 3, ¶ [0084] “ . . . FIG. 3 is a call flow diagram illustrating an example of split rendering that may be performed by system 100 of FIG. 1. FIG. 3 includes an XR runtime, which may include a media application configured to present XR media data, as well as a media access function that may be executed by a streaming unit. In FIG. 3, the call flow includes creating a split rendering session (230), sending a description of split rendering output (232), and establishing transport connections (e.g., a WebRTC session) (234). Once this session has been established, during the session, XR client device 140 may receive and determine pose information and user actions (236) and transmit the pose information and user actions to XR server device 110 (238). XR server device 110 may then perform rendering for the requested pose (240) and send the rendered frame to XR client device 140 (242). XR client device 140 may decode and process the frame (244). XR client device 140 may then pass the raw frames to be displayed (246), and then compose and render the frame (248), which may include, per the techniques of this disclosure, modifying the rendered frame according to differences between the pose for which the frame was rendered as indicated by pose metadata and the actual user pose at the time the frame is to be presented. . . .”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the applicant’s application to incorporate systems and methods related to split rendering of extended reality (XR) media data, in which two or more devices may be involved in rendering the media data, as taught by Bouazizi, into the systems and methods of split rendering and pass-through compressed media formats for XR systems that perform the split rendering between a client and a server, in which XR streams processed using split rendering are monitored through metrics that affect QoS for stream delivery and QoE for user experience and, upon poor QoS or QoE, the capabilities of a plurality of servers that are possible transition destinations are acquired in a case where a media processing task executed in a first server of the plurality of servers is caused to transition to a second, different server, as taught by the combination of Stockhammer, Stoica, and Takabayashi. Such incorporation enables a new WebRTC session to be established after the transition to the new server.
In regard to claim 15, the combination of Stockhammer, Stoica, Takabayashi, and Bouazizi teaches further comprising: managing, by the RTC-AF, a new WebRTC session established between the split rendering client and the new split rendering server (see Bouazizi Fig. 3, ¶ [0084] as described for the rejection of claim 3 and is incorporated herein).
The motivation to combine Bouazizi with the combination of Stockhammer, Stoica, and Takabayashi is described for the rejection of claim 3 and is incorporated herein.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure and is listed on the PTO-892 accompanying this action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES N FIORILLO whose telephone number is (571) 272-9909. The examiner can normally be reached Monday through Friday, 7:30 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John A. Follansbee can be reached on 571-272-3964. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES N FIORILLO/Examiner, Art Unit 2444