Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Objections
1. Claims 18-40, 41, and 42 are objected to because of the following informalities: claims 18, 41, and 42 do not fall within one of the four statutory categories of invention. Supreme Court precedent and recent Federal Circuit decisions indicate that a statutory "process" under 35 U.S.C. 101 must (1) be tied to another statutory category (such as a particular apparatus), or (2) transform underlying subject matter (such as an article or material) to a different state or thing. While the instant claims recite a series of steps or acts to be performed, the claims neither transform underlying subject matter nor positively tie the steps to another statutory category that accomplishes them, and therefore the claims do not qualify as a statutory process. Appropriate correction is required. Failure to make the appropriate correction(s) will result in rejection(s) under 35 U.S.C. 101.
Claim 1 recites "An audio decoder for providing a decoded audio representation on the basis of an encoded audio representation," which should read --An audio decoder for providing a decoded audio representation on the basis of an encoded audio representation comprising:--. Appropriate correction is required.
Claim 18 recites "An apparatus for providing an encoded audio representation," which should read --An apparatus for providing an encoded audio representation comprising:--. Appropriate correction is required.
Claim 41 recites "A method for providing a decoded audio representation on the basis of an encoded audio representation," which should read --A method for providing a decoded audio representation on the basis of an encoded audio representation comprising:--. Appropriate correction is required.
Claim 42 recites "A method for providing an encoded audio representation," which should read --A method for providing an encoded audio representation comprising:--. Appropriate correction is required.
Claim 46 recites "An audio decoder for providing a decoded audio representation on the basis of an encoded audio representation," which should read --An audio decoder for providing a decoded audio representation on the basis of an encoded audio representation comprising:--. Appropriate correction is required.
Claim Rejections - 35 USC § 103
2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. Claims 1-4, 7-12, 17-25, 29, 33-35, 41-44, and 46 are rejected under 35 U.S.C. 103 as being unpatentable over Mate et al. (WO 2021/186104 A1; foreign reference, 64 pages) in view of D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages).
As to claim 1, Mate teaches an audio decoder (Fig. 1, MPEG-H 3DA Decoder) for providing a decoded audio representation on the basis of an encoded audio representation (Fig. 2 and page 9, lines 16-18 – the EID 200 together with the audio data (audio signals, SOFA files, etc.) may be processed by the audio encoder 202 to generate the bitstream 204):
wherein the audio decoder is configured to spatially render one or more audio signals (page 1, lines 13-15 - bitstream content is data which has been created by encoding the 6DOF audio scene description, the raw audio signals and the MPEG-H encoded/decoded audio signals; page 11, lines 10-16 - a new MHASampleEntry may be defined to indicate 6DoF rendering related metadata for MPEG-H 3D Audio files, and the whole description is directed to 6DoF rendering in AR/VR applications based on MPEG-I Audio / MPEG-H 3D Audio, which implies spatial rendering; page 26, lines 22-30 - determining a spatial audio flag value in the dynamic content, and selecting to: when the spatial audio flag value is false, render dynamic content communication audio without any further acoustic modelling, or when the spatial audio flag value is true, render dynamic content communication audio with acoustic modelling according to the information in the bitstream; Fig. 1 - auralization);
wherein the audio decoder is configured to receive the plurality of packets of different packet types (page 11, lines 10-16 and page 13, lines 17-19 - the renderer may receive dynamic updates via a dynamic ingestion interface or as a new type of MPEG-H Audio Stream (MHAS) packet, which implies a plurality of packet types; Fig. 1, MPEG-I Audio Bitstream, Common 6DoF Metadata);
the packets comprising one or more scene configuration packets providing a renderer configuration information defining a usage of scene objects and/or a usage of scene characteristics (page 23, lines 3-4 - position of a real world object or scene orientation changes during content consumption; page 11, lines 10-16 - the MPEG-H 3D Audio configuration may include 6DOF metadata capable packets which may change at arbitrary positions of the stream; page 25, lines 12-21 – scene description; page 12, line 16 to page 13, line 10);
the packets comprising one or more scene update packets, wherein the scene update packets define an update of scene metadata for the rendering (Fig. 1, dynamic updates; page 11, lines 10-16; page 5, line 19 – page 7, lines 24-26 – modification metadata, dynamic scene updates; page 17, line 7 – page 19, line 25 - The currently specified updates may be done based on a predetermined timestamp, condition-based update (e.g., location-based trigger) and explicit user interaction (e.g., turn on the radio); the table bridging pages 17-18 defines the condition as part of the update);
the packets comprising one or more scene payload packets comprising definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics (Fig. 1, Common 6DoF metadata; page 10, lines 3-5 - the adapted content could be reverberation characteristics of the audio scene (RT60 values) or the audio scene dimensions; page 11, lines 11-12 – 6DoF metadata capable packets; page 25, lines 12-21 – acoustic properties and acoustic environment information; page 12, line 16 to page 13, line 10).
Mate does not explicitly discuss that the audio decoder is configured to select definitions of one or more scene objects and/or definitions of one or more scene characteristics, which are included in the scene payload packets, for the rendering in dependence on the renderer configuration information; or that the audio decoder is configured to update one or more scene metadata in dependence on a content of the one or more scene update packets.
D2 teaches the audio decoder is configured to select (page 14 – typical examples for such selected presets are: dialogue enhancement with increased dialogue signal level and attenuated background signal level, an additional ambience object signal, and a muted dialogue object that contains commentary) definitions of one or more scene objects and/or definitions of one or more scene characteristics, which are included in the scene payload packets, for the rendering in dependence on the renderer configuration information (page 14, presets); wherein the audio decoder is configured to update one or more scene metadata in dependence on a content of the one or more scene update packets (page 14 – the user may change certain aspects of the rendered audio scene during playback, e.g., change the level or position of an audio object; page 15 – MHAS packet of type PACTYP_USERINTERACTION).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of D2 into the teachings of Mate for the purpose of providing a technical implementation for transmitting the metadata updates associated with the update conditions.
As to claims 2 and 21, D2 teaches the audio decoder according to claim 1 and the apparatus according to claim 18, wherein the audio decoder is configured to determine a rendering configuration on the basis of a scene configuration packet and to determine an update of the rendering configuration on the basis of one or more scene update packets (page 14 – configuration change…changing the level or position of an audio object, changing audio elements in terms of gain or position; changing a preset – dialogue enhancement with increased dialogue signal level and attenuated background signal level, enhanced ambience, an additional ambience object signal, and a muted dialogue object).
As to claim 3, Mate teaches the audio decoder according to claim 1, wherein the one or more scene update packets comprise an enumeration of scene metadata items to be changed, wherein the enumeration comprises, for one or more metadata items to be changed, a metadata identifier and a metadata update value (pages 17-18 – EIF updating via dynamic content based on a predetermined timestamp or a condition-based update such as a location-based trigger; the EIF update details include a list of changes, an identifier, and the value of the changes).
As to claim 4, Mate teaches the audio decoder according to claim 1, wherein the audio decoder acquires definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics from the one or more scene payload packets (page 25, lines 12-21 – the received audio content in the bitstream comprises: audio data, scene description of the audio scene, acoustic environment information such as reflecting surfaces, acoustic properties such as RT60, direct-to-reverberation ratio, etc., content creator intent, and EIF).
As to claims 7, 29, and 33, Mate teaches the audio decoder according to claim 1 and the apparatus according to claim 18, wherein the one or more scene update packets define a condition for a scene update and the audio decoder evaluates whether the condition for the scene update defined in a scene update packet is fulfilled, to decide whether the scene update should be made (page 17, lines 9-13 – the specified updates may be done based on a predetermined timestamp, a condition-based update such as a location-based trigger, and explicit user interaction such as turning on the radio).
As to claims 8 and 34, Mate teaches the audio decoder according to claim 1 and the apparatus according to claim 18, wherein the one or more scene update packets define an interactive trigger condition and the audio decoder evaluates whether the scene update should be made (page 17, lines 9-13 – the specified updates may be done based on a predetermined timestamp, a condition-based update such as a location-based trigger, and explicit user interaction such as turning on the radio).
As to claims 9 and 22, Mate teaches the audio decoder according to claim 1 and the apparatus according to claim 18, wherein the one or more scene configuration packets and the one or more scene update packets and scene payload packets are conformant to an MPEG-H MHAS packet definition (claim 3; page 13, lines 17-19 – the renderer receives dynamic updates via a dynamic ingestion interface or as a new type of MPEG-H Audio Stream (MHAS) packet).
As to claims 10 and 23, Mate teaches the audio decoder according to claim 1 and the apparatus according to claim 18, wherein the one or more scene configuration packets and the one or more scene update packets and scene payload packets each comprise a packet type identifier, a packet label, a packet length information and a packet payload (page 13, line 17 through page 14 - updates via a dynamic ingestion interface or as a new type of MPEG-H Audio Stream (MHAS) packet. The updates may include the position of the anchor object and/or the positions of surfaces (walls, floor, ceiling, etc.) in the current user environment: 1) an audio scene in the bitstream, 2) rendering instructions for dynamic updates also in the bitstream, and 3) a dynamic update at rendering time. Based on these, the renderer 206 shown in Fig. 2 may perform the following in the association and modification block 208 to perform the 6DOF rendering adaptation:
1) obtain the AudioScene and rendering instructions from the bitstream;
2) obtain the dynamic update with an "anchor object position information" and its identifier, as shown in Fig. 4;
3) associate the dynamic update with the AnchorObject that was defined in the bitstream, using the identifier;
4) modify the position of the AnchorObject based on the "anchor object position information", which may, in turn, cause the modification of the positions of all AudioElements whose positions are defined relative to the AnchorObject; and
5) modify the rendering, if necessary, based on the rendering instructions in the bitstream).
As to claims 11 and 24, Mate teaches the audio decoder according to claim 1 and the apparatus according to claim 18, wherein the audio decoder is configured to extract the one or more scene configuration packets, scene update packets and scene payload packets from a bitstream comprising a plurality of MPEG-H packets, including packets representing one or more audio channels to be rendered (page 1, lines 13-15 – bitstream content is data which has been created by encoding the 6DOF audio scene description, the raw audio signals and the MPEG-H encoded/decoded audio signals).
As to claims 12 and 25, Mate teaches the audio decoder according to claim 1 and the apparatus according to claim 18, wherein the audio decoder is configured to receive the one or more scene configuration packets via a broadcast stream (page 11, lines 10-14 – in 6DoF streaming or broadcast environments based on, for example, MPEG-DASH or MPEG-H MMT, the MPEG-H 3D Audio configuration includes 6DOF metadata capable packets which may change at arbitrary positions of the stream, and not necessarily only on fragment boundaries).
Claim 17 is rejected for the same reasons discussed above with respect to claim 7.
As to claim 18, Mate teaches an apparatus for providing an encoded audio representation (Fig. 2 and page 9, lines 16-18 – The EID 200 together with the audio data (audio signals, SOFA files, etc.) may be processed by the audio encoder 202 to generate the bitstream 204);
wherein the apparatus is configured to provide an information for a spatial rendering of one or more audio signals (page 1, lines 13-15 - bitstream content is data which has been created by encoding the 6DOF audio scene description, the raw audio signals and the MPEG-H encoded/decoded audio signals; page 27, line 22 through page 28, line 11 - the audio data in the bitstream content may be MPEG-H encoded audio data, for example, and the audio data in the dynamic content, on the other hand, may be low latency encoded content (such as AMR, EVS, IVAS, etc.); an example embodiment may be provided with a method comprising: receiving a bitstream which comprises recorded audio content and at least one instruction for management of dynamic content; receiving dynamic content separate and independent from the bitstream, where the dynamic content comprises dynamic audio content; and rendering audio with a renderer based upon the recorded audio content of the bitstream, the received dynamic content, and the at least one instruction in the bitstream for management of the dynamic content; page 11, lines 10-16 - a new MHASampleEntry may be defined to indicate 6DoF rendering related metadata for MPEG-H 3D Audio files, and the whole description is directed to 6DoF rendering in AR/VR applications based on MPEG-I Audio / MPEG-H 3D Audio, which implies spatial rendering; page 26, lines 22-30 - determining a spatial audio flag value in the dynamic content, and selecting to: when the spatial audio flag value is false, render dynamic content communication audio without any further acoustic modelling, or when the spatial audio flag value is true, render dynamic content communication audio with acoustic modelling according to the information in the bitstream);
wherein the apparatus is configured to provide the plurality of packets of different packet types (page 11, lines 10-16 and page 13, lines 17-19 - the renderer may receive dynamic updates via a dynamic ingestion interface or as a new type of MPEG-H Audio Stream (MHAS) packet, which implies a plurality of packet types; Fig. 1, MPEG-I Audio Bitstream, Common 6DoF Metadata);
the packets comprising one or more scene configuration packets providing a renderer configuration information defining a usage of scene objects and/or a usage of scene characteristics (page 23, lines 3-4 - position of a real world object or scene orientation changes during content consumption; page 11, lines 10-16 - the MPEG-H 3D Audio configuration may include 6DOF metadata capable packets which may change at arbitrary positions of the stream; page 25, lines 12-21 – scene description; page 12, line 16 to page 13, line 10);
the packets comprising one or more scene payload packets comprising definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics (Fig. 1, Common 6DoF metadata; page 11, lines 11-12 – 6DoF metadata capable packets; page 25, lines 12-21 – acoustic properties and acoustic environment information; page 12, line 16 to page 13, line 10).
Mate does not explicitly teach the packets comprising one or more scene update packets defining an update of scene metadata for the rendering.
D2 teaches the packets comprising one or more scene update packets defining an update of scene metadata for the rendering (page 14 – the user may change certain aspects of the rendered audio scene during playback, e.g., change the level or position of an audio object; page 15 – MHAS packet of type PACTYP_USERINTERACTION).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of D2 into the teachings of Mate for the purpose of providing a technical implementation for transmitting the metadata updates associated with the update conditions.
As to claim 19, D2 teaches the audio decoder is configured to select (page 14 – typical examples for such selected presets are: dialogue enhancement with increased dialogue signal level and attenuated background signal level, an additional ambience object signal, and a muted dialogue object that contains commentary) definitions of one or more scene objects and/or definitions of one or more scene characteristics, which are included in the scene payload packets, for the rendering in dependence on the renderer configuration information (page 14, presets).
As to claim 20, D2 teaches the audio decoder is configured to provide the one or more scene update packets such that a content of the one or more scene update packets defines an update of one or more scene metadata (page 14 – the user may change certain aspects of the rendered audio scene during playback, e.g., change the level or position of an audio object; page 15 – MHAS packet of type PACTYP_USERINTERACTION); Mate also teaches this at page 13, lines 17-19, as a new type of MPEG-H Audio Stream (MHAS) packet.
As to claim 35, D2 teaches the apparatus according to claim 18, wherein the apparatus is configured to adapt a selection (page 14 – typical examples for such selected presets are: dialogue enhancement with increased dialogue signal level and attenuated background signal level, an additional ambience object signal, and a muted dialogue object that contains commentary) of definitions of one or more scene objects and/or definitions of one or more scene characteristics in the scene payload packets in dependence on when and/or where they are needed by a renderer (page 14, presets); and (page 9) a random access point is a sync sample in the ISOBMFF and consists of the following MHAS packets in the following order: PACTYP_MPEGH3DACFG, PACTYP_AUDIOSCENEINFO (if Audio Scene Information is present), PACTYP_BUFFERINFO, PACTYP_MPEGH3DAFRAME. It would have been obvious to order the definitions of scene objects and scene characteristics in order to carry out presets within an MPEG-H audio stream.
Claims 41 and 46 are rejected for the same reasons discussed above with respect to claim 1.
Claim 42 is rejected for the same reasons discussed above with respect to claim 18.
Claim 43 is rejected for the same reasons discussed above with respect to claim 41. Furthermore, Mate teaches a non-transitory digital storage medium having stored thereon a computer program (page 34, lines 8-11; page 35, lines 29-32; claims 11 and 14).
Claim 44 is rejected for the same reasons discussed above with respect to claim 42. Furthermore, Mate teaches a non-transitory digital storage medium having stored thereon a computer program (page 34, lines 8-11; page 35, lines 29-32; claims 11 and 14).
4. Claims 5-6 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Mate et al. (WO 2021/186104 A1) and D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages) in view of D3 (MPEG-I Immersive Audio Encoder Input Format, 2021-04-30, 35 pages).
As to claim 5, Mate and D2 do not explicitly discuss the audio decoder according to claim 1, wherein the one or more scene payload packets comprise an enumeration of payloads defining scene objects and/or scene characteristics and wherein the audio decoder evaluates the enumeration of payloads defining scene objects and/or scene characteristics.
D3 teaches these features (page 24 – a scene can contain geometric elements to specify the spatial extent of object sources, e.g., a source with width, describe acoustic elements, e.g., occlusion, diffraction and reflection, and spatially decompose a scene into sub-scenes).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of D3 into the teachings of Mate and D2 for the purpose of enabling an elegant solution for aspects beyond mere room models.
As to claim 6, D3 teaches the audio decoder according to claim 1, wherein a payload identifier is associated with the payloads within a scene payload packet and the audio decoder evaluates the payload identifier in order to decide whether a given payload should be used for rendering (page 24 – a scene can contain geometric elements to specify the spatial extent of object sources, e.g., a source with width, describe acoustic elements, e.g., occlusion, diffraction and reflection, and spatially decompose a scene into sub-scenes, e.g., audibility ranges of audio elements), and it would have been obvious to use the context of sub-scenes for spatially decomposing a scene into sub-scenes with audibility ranges of audio elements.
As to claim 40, D3 teaches the apparatus according to claim 18, wherein the apparatus is configured to provide the scene configuration packets in order to decompose a scene into a plurality of spatial regions in which different rendering metadata is valid (page 24 – geometric information is fundamental for a 6DoF scene in order to realize spatially-dependent room acoustic effects, particularly diffraction and occlusion; a scene can contain geometric elements to specify the spatial extent of object sources, e.g., a source with width, describe acoustic elements, e.g., occlusion, diffraction and reflection, and spatially decompose a scene into sub-scenes, e.g., audibility ranges of audio elements; page 28 – acoustic conditions within the entire scene or a certain spatial zone by means of room acoustic parameters).
5. Claims 13-14, 16 and 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over Mate et al. (WO 2021/186104 A1) and D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages) in view of Kim et al. (WO 2008/084965 A1).
As to claim 13, Mate and D2 do not explicitly discuss the audio decoder according to claim 1, wherein the audio decoder is configured to request the one or more scene payload packets from a packet provider.
Kim teaches that the service provider packages the various contents provided from the content provider into a service and provides the packaged service, and the network provider provides a network for provision of the packaged service to the user ([0052]); an IPTV service provider provides broadcast content to the user, and the user can watch the content provided by inputting the region code ([0067]).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Kim into the teachings of Mate and D2 for the purpose of enabling a home network end user to receive the requested service.
As to claims 14 and 27, Kim teaches the audio decoder according to claim 1 and the apparatus according to claim 18, wherein the audio decoder is configured to request the one or more scene payload packets from a packet provider using a packet ID (abstract; [0067] - an IPTV service provider provides broadcast content to the user and the user can watch the content provided by inputting the region code; [0194-0195] – the name field includes the service provider's domain name; specific information contained in the "Payload Id" of the first "Push" has the value of 5…).
As to claims 16 and 28, Kim teaches the audio decoder according to claim 1 and the apparatus according to claim 18, wherein the audio decoder is configured to provide, to a packet provider, an information indicating which one or more scene payload packets are required, or will be required within a predetermined period of time ([0195] – specific information contained in the "Payload Id" of the first "Push" has the value of 5, and this information is indicative of a package discovery record including the above-mentioned ID information; the "Payload Id" includes ID information of a segment including the package discovery record, and its version information; [0196] - specific information contained in the "Payload Id" of the second "Push" has the value of 5, and this information is indicative of a broadcast discovery record including the above-mentioned ID information; the "Payload Id" includes ID information of a segment including the broadcast discovery record, and its version information).
As to claim 26, Kim teaches the apparatus according to claim 18, wherein the apparatus is configured to provide the one or more scene payload packets in response to a request from an audio decoder ([0219] - receives service information of the selected channel from the service information decoder 1510 and performs setting of an audio/video Packet Identifier (PID) of the selected channel in the demultiplexer 1522, etc., based on the received service information).
6. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Mate et al. (WO 2021/186104 A1) and D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages) in view of Goodings et al. (US Patent 7,020,119).
As to claim 15, Mate and D2 do not explicitly discuss the audio decoder according to claim 1, wherein the audio decoder is configured to anticipate which one or more data structures will be required, or are expected to be required, and to request the one or more data structures, or one or more scene payload packets comprising said one or more data structures, before the data structures are actually required.
Goodings teaches that the data packet payload 400 of Fig. 4 is based upon a DM3 link, and that a delay comprising at least approximately the period of speech contained in one packet (required for buffering of the raw audio data before compression, transmission, decompression and playback) is imposed upon the communications channel (col. 7, lines 6-28).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Goodings into the teachings of Mate and D2 for the purpose of avoiding delay in the communications channel that may result in undesirable audible characteristics.
7. Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Mate et al. (WO 2021/186104 A1) and D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages) in view of JP 4099973 B2.
As to claim 30, Mate and D2 do not explicitly discuss the apparatus according to claim 18, wherein the apparatus is configured to repeat a provision of the scene configuration packet periodically.
JP 4099973 teaches that data is generated and transmitted to the video receiver 102; the provisional scene metadata is continuous with the confirmed scene metadata with the scene number "100" transmitted immediately before, the "scene number" is "101", and the "scene confirmation information" is "provisional" ([0134]); the scene metadata acquired in step S601 is combined in time series ([0184]); the scene metadata includes provisional/confirmed scene metadata and, in this step, when the confirmed scene metadata having the same scene number as the provisional scene metadata is connected, the provisional scene metadata is included ([0185]); and the process returns to step S601 to repeat the acquisition and combination of the scene metadata ([0187]).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of JP 4099973 into the teachings of Mate and D2 for the purpose of repeating the acquisition and combination of the scene metadata.
8. Claims 31-32 are rejected under 35 U.S.C. 103 as being unpatentable over Mate et al. (WO 2021/186104 A1) and D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages) in view of DE 69907829.
As to claim 31, Mate and D2 do not explicitly discuss the apparatus according to claim 18, wherein the apparatus is configured to provide the scene configuration packet such that the scene configuration packet defines which scene payload packets are required at a given point in space and time.
DE 69907829 teaches selecting only sections to be recorded using the video index information 100 so that only required sections are recorded when the video information is actually broadcast (description of the preferred embodiments, 40th paragraph); and that required scenes can be recovered not only by retrieval titles, but also by retrieving information relating to the content of video information, such as "a scene in which ... appears and speaks to ...", "an image which contains a scene similar to this one", "a scene using this music" or the like (description of the preferred embodiments, 41st paragraph).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of DE 69907829 into the teachings of Mate and D2 for the purpose of recording only the required sections when the video information is actually broadcast.
As to claim 32, Mate and D2 do not explicitly discuss the apparatus according to claim 18, wherein the apparatus is configured to provide the scene configuration packet such that the scene configuration packet defines where scene payload packets can be retrieved from.
DE 69907829 teaches that the audio index information uses a structural-element object containing retrieval information to recover information, to reproduce the content of sounds, or to recover sounds that are managed directly or indirectly through the structural element; that the segment information and the packet information handle or manage tones or sounds prepared or created in the same area as the sound information handled or managed by the segment information; that, in the tree structure, packet information is assigned, in addition to sound information, under a segment information; and that recovery conditions are entered for a desired scene, the structural-element object whose retrieval information meets the recovery conditions is identified by retrieving the audio index information, and a list of the identified structural-element objects is output as the retrieval result (Embodiments of the Invention, 18th paragraph).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of DE 69907829 into the teachings of Mate and D2 for the purpose of identifying the structural-element objects whose retrieval conditions are met by retrieving the audio index information and outputting a list of the identified objects as the retrieval result.
9. Claim 36 is rejected under 35 U.S.C. 103 as being unpatentable over Mate et al. (WO 2021/186104 A1) and D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages) in view of Xie et al. (CN 102625106 A).
As to claim 36, D2 teaches the apparatus according to claim 18, wherein the apparatus is configured to adapt a selection (page 14 – typical examples for such selected presets are: dialogue enhancement with increased dialogue signal level and attenuated background signal level, an additional ambience object signal, and a muted dialogue object that contains commentary) of definitions of one or more scene objects and/or definitions of one or more scene characteristics in the scene payload packets in dependence on when and/or where they are needed by a renderer (page 14, presets); and (page 9) a random access point is a sync sample in the ISOBMFF and consists of the following MHAS packets in the following order: PACTYP_MPEGH3DACFG, PACTYP_AUDIOSCENEINFO (if Audio Scene Information is present), PACTYP_BUFFERINFO, PACTYP_MPEGH3DAFRAME. It would have been obvious to order the definitions of scene objects and scene characteristics in order to carry out presets within an MPEG-H audio stream. Mate and D2 do not explicitly discuss selecting definitions of one or more scene characteristics in the scene payload packets in dependence on an importance of the definitions of one or more of the scene objects and/or scene characteristics for a renderer.
Xie teaches selecting the fixed quality factor plus video buffer verification (CRF + VBV) scheme; in a slow-moving scene, human operation of the computer is usually very slow and often pauses for a long time, so the definition of the video is often more important than fluency; the core idea of the FRA-CQP mode is to take a reduced frame rate as the cost of rate control while preserving the definition of each frame of data; in this mode, the x264 encoder enables the multiple slices (slice) option so that one frame of data is encoded into a series of slices and the slices are ordered into slice groups… once the current frame is completely output, coding of a new frame of data starts; the current frame is encoded as a series of slices according to the remaining budget, and the slices are then divided into slice groups as follows: a slice group comprises a plurality of ordered slices and its size must be controlled within the budget, and the first slice group is then output ([0051]).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Xie into the teachings of Mate and D2 for the purpose of keeping video quality basically stable by adjusting the quantization parameter for rate control while fully considering the buffer area, so that the code rate is not higher than the peak code rate and scenes of violent motion can be played smoothly.
10. Claim 37 is rejected under 35 U.S.C. 103 as being unpatentable over Mate et al. (WO 2021/186104 A1) and D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages) in view of Wei (CN 106257958 A).
As to claim 37, D2 teaches the apparatus according to claim 18, wherein the apparatus is configured to adapt a selection (page 14 – typical examples for such selected presets are: dialogue enhancement with increased dialogue signal level and attenuated background signal level, an additional ambience object signal, and a muted dialogue object that contains commentary) of definitions of one or more scene objects and/or definitions of one or more scene characteristics in the scene payload packets in dependence on when and/or where they are needed by a renderer (page 14, presets); and (page 9) a random access point is a sync sample in the ISOBMFF and consists of the following MHAS packets in the following order: PACTYP_MPEGH3DACFG, PACTYP_AUDIOSCENEINFO (if Audio Scene Information is present), PACTYP_BUFFERINFO, PACTYP_MPEGH3DAFRAME. It would have been obvious to order the definitions of scene objects and scene characteristics in order to carry out presets within an MPEG-H audio stream. Mate and D2 do not explicitly discuss providing definitions of one or more of the scene objects and/or definitions of one or more scene characteristics in the scene payload packets in dependence on a packet size limitation.
Wei teaches that the message packet to be sent is triggered by an event and includes any possible description information of the event; in the V2V scene, the information describing the event comprises any measurement data that each node device can acquire, for example, the moving speed of the node device, its position, direction and brake state, its movement track, road conditions, and the like. According to the measurement data, the message to be sent is divided at least into a core message packet and a non-core message packet according to the degree of importance in the described triggering event… the core message packet may have a fixed size, so that the message recipient node device can demodulate the core part more quickly and accurately in order to react quickly; the fixed size can be pre-defined, for example, in the Physical Layer specification (Preferred Embodiment, 3rd paragraph).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Wei into the teachings of Mate and D2 for the purpose of enabling the message recipient node device to demodulate the core part more quickly and accurately and thus react quickly.
11. Claim 38 is rejected under 35 U.S.C. 103 as being unpatentable over Mate et al. (WO 2021/186104 A1) and D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages) in view of Kalsi (WO 2009051988 A1).
As to claim 38, Mate and D2 do not explicitly discuss the apparatus according to claim 18, wherein the apparatus is configured to provide payload packets comprising a comparatively low level of detail first and to provide payload packets comprising a comparatively higher level of detail later on.
Kalsi teaches that the Knowledge Marketplace System is preferably built to support both textual and audio/video content ([00149]); an Internet-based on-demand virtual Knowledge Marketplace System for professionals implemented via a Web-based computer and communication network is provided, the system including: a host computer network; a communications network linking a plurality of remote knowledge consumer Internet access devices to the host computer network; and at least one database accessible to the host computer network and storing a plurality of pre-packaged knowledge content information packets browsable via the remote Internet access devices through a Web portal, the information packets being uploaded to the system by a plurality of knowledge producers ([0049]). The Knowledge Center A216 provides "Knowledge" content or information of a type which may be classified and described as short task or project specific responses or answers (i.e., information) with a low level of detail relevant to a specific search or query A213 input into the Knowledge Marketplace System by the Knowledge Consumer ([00106]). The Solutions Center A214 provides standardized "Solutions" content or information of a type which may be classified and described as professional service solution knowledge content or information packets that provide a comparatively lengthier and more in-depth detailed response to the Knowledge Consumer, having a greater or higher level of detail than the Knowledge Center on a particular topic. Accordingly, the volume of information contained in a service solution information packet is greater than the "short answer" information packets associated with a Knowledge Category (Knowledge Center). The Solutions Center preferably contains the "Productized Service Solutions" already described above and offers detailed pre-packaged standard service solutions (e.g., step-by-step instructions) on a particular topic having a topic-specific defined scope and "Service Attributes" ([00107]).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Kalsi into the teachings of Mate and D2 for the purpose of first providing quick "know-how" knowledge or "short answers" to the professional Knowledge Consumer, allowing them to resolve a specific problem at hand that is very limited in scope and does not require much detailed information, and later providing a comparatively lengthier and more in-depth detailed response having a greater or higher level of detail on the particular topic.
12. Claim 39 is rejected under 35 U.S.C. 103 as being unpatentable over Mate et al. (WO 2021/186104 A1) and D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages) in view of Osawa (JP 2001189713 A).
As to claim 39, Mate and D2 do not explicitly discuss the apparatus according to claim 18, wherein the apparatus is configured to separate definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics into a plurality of scene payload packets, and provide the different scene payload packets at different times.
Osawa teaches that the data received via the data transmission path 8 is supplied to a data separation unit 4, which separates the multiplexed reception data and outputs encoded data for each object and the scene description; the coded data for each object is transferred to the object decoding unit 5, and the scene description information is transferred to the scene description decoding unit 6 ([0005]); the packet separating section 42 separates the received data into data of each stream and a scene description with reference to the information of the "multiplexed header" portion of the packet, and then passes the data to the error correction decoding unit 41, which extracts the priority information of each stream from the scene description and determines the number of repetitions ([0080]).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Osawa into the teachings of Mate and D2 for the purpose of separating the received data into the data of each stream and a scene description before passing the data to the error correction decoding unit.
Double Patenting
13. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-44 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-21 of copending Application No. 18/659927 (reference application) in view of Mate et al. (WO 2021/186104 A1) and D2 (ATSC Standard A/342 Part 3: MPEG-H System, 11 March 2021, 22 pages). Although the claims at issue are not identical, they are not patentably distinct from each other because all the claimed limitations recited in the present application are broader than and transparently found in copending Application No. 18/659927 with obvious wording variations. When claims in the pending application are broader than the ones in the patent, the broad claims in the pending application are rejected under obviousness-type double patenting over the previously patented narrow claims. In re Van Ornum and Stang, 214 USPQ 761. Also, omission of an element and its function in a combination is an obvious expedient if the remaining elements perform the same functions as before. In re Karlson, 136 USPQ 184 (CCPA 1963).
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
A comparison of the pertinent claims of the instant application and the reference application follows:
U.S. Patent Application 18/659,947 (instant application), claim 1:
An audio decoder, for providing a decoded audio representation,
wherein the audio decoder is configured to spatially render one or more audio signals;
wherein the audio decoder is configured to receive the plurality of packets of different packet types,
the packets comprising one or more scene configuration packets providing a renderer configuration information defining a usage of scene objects and/or a usage of scene characteristics,
the packets comprising one or more scene update packets defining an update of scene metadata for the rendering,
the packets comprising one or more scene payload packets comprising definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristic,
wherein the audio decoder is configured to select definitions of one or more scene objects and/or definitions of one or more scene characteristics, which are included in the scene payload packets, for the rendering in dependence on the renderer configuration information; and
wherein the audio decoder is configured to update one or more scene metadata in dependence on a content of the one or more scene update packets.
U.S. Patent Application 18/659,927 (reference application), claim 1:
An audio decoder, for providing a decoded audio representation on the basis of an encoded audio representation included in a bitstream, the bitstream comprising a plurality of packets of different packet types,
wherein the audio decoder is configured to spatially render one or more audio signals;
wherein the audio decoder is configured to receive the plurality of packets of different packet types,
the packets comprising one or more scene configuration packets providing a renderer configuration information,
the packets comprising one or more scene update packets, wherein the scene update packets define an update of scene metadata for the rendering and comprise a representation of one or more update conditions;
wherein the audio decoder is configured to evaluate whether the one or more update conditions are fulfilled and to selectively update one or more scene metadata in dependence on a content of the one or more scene update packets if the one or more update conditions are fulfilled;
wherein the content of the one or more scene update packets defines a change of one or more metadata values for the rendering.
U.S. Patent Application 18/659,947 (instant application), claim 18:
An apparatus for providing an encoded audio representation, wherein the apparatus is configured to provide an information for a spatial rendering of one or more audio signals;
wherein the apparatus is configured to provide the plurality of packets of different packet types,
the packets comprising one or more scene configuration packets providing a renderer configuration information defining a usage of scene objects and/or a usage of scene characteristics,
the packets comprising one or more scene update packets defining a update of scene metadata for the rendering,
the packets comprising one or more scene payload packets comprising definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics.
U.S. Patent Application 18/659,927 (reference application), claim 17:
An apparatus for providing an encoded audio representation in a bitstream comprising a plurality of packets of different packet types,
wherein the apparatus is configured to provide an information for a spatial rendering of one or more audio signals;
wherein the apparatus is configured to provide the plurality of packets of different packet types,
the packets comprising one or more scene configuration packets providing a renderer configuration information,
the packets comprising one or more scene update packets wherein the scene update packets define an update of scene metadata for the rendering and comprise a representation of one or more update conditions,
wherein a content of the one or more scene update packets defines a change of one or more metadata values for the rendering.
U.S. Patent Application 18/659,947 (instant application), claim 41:
A method for providing a decoded audio representation on the basis of an encoded audio representation,
wherein the method comprises spatially rendering one or more audio signals;
wherein the method comprises receiving the plurality of packets of different packet types;
the packets comprising one or more scene configuration packets providing a renderer configuration information defining a usage of scene objects and/or a usage of scene characteristics,
the packets comprising one or more scene update packets defining a update of scene metadata for the rendering,
the packets comprising one or more scene payload packets comprising definitions of one or more scene objects and/or definitions of one or more of the scene characteristics;
wherein the method comprises selecting definitions of one or more scene objects and/or definitions of one or more scene characteristics, which are included in the scene payload packets, for the rendering in dependence on the renderer configuration information; and
wherein the method comprises updating one or more scene metadata in dependence on a content of the one or more scene update packets.
U.S. Patent Application 18/659,927 (reference application), claim 18:
A method for providing a decoded audio representation on the basis of an encoded audio representation included in a bitstream, the bitstream comprising a plurality of packets of different packet types,
wherein the method comprises spatially rendering one or more audio signals;
wherein the method comprises receiving the plurality of packets of different packet types;
the packets comprising one or more scene configuration packets providing a renderer configuration information,
the packets comprising one or more scene update packets defining an update of scene metadata for the rendering and comprises a representation of one or more update conditions,
wherein the method comprises evaluating whether the one or more update conditions are fulfilled and selectively updating one or more scene metadata in dependence on a content of the one or more scene update packets if the one or more update conditions are fulfilled; and
wherein the content of the one or more scene update packets defines a change of one or more metadata values for the rendering.
U.S. Patent Application 18/659,947 (instant application), claim 42:
A method for providing a encoded audio representation,
wherein the method comprises providing an information for a spatial rendering of one or more audio signal;
wherein the method comprises providing a plurality of packets of different packet types;
the packets comprising one or more scene configuration packets providing a renderer configuration information defining a usage of scene objects and/or a usage of scene characteristics,
the packets comprising one or more scene update packets defining a update of scene metadata for the rendering,
the packets comprising one or more scene payload packets comprising definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics.
U.S. Patent Application 18/659,927 (reference application), claim 19:
A method for providing a encoded audio representation in a bitstream comprising a plurality of packets of different packet types,
wherein the method comprises providing a plurality of packets of different packet types;
the packets comprising one or more scene configuration packets providing a renderer configuration information,
the packets comprising one or more scene update packets define an update of scene metadata for the rendering and comprising a representation of one or more update conditions,
wherein a content of the one or more scene update packets defines a change of one or more metadata values for the rendering.
U.S. Patent Application 18/659,947 (instant application), claim 43:
A non-transitory digital storage medium having stored there on a computer program for performing the method for providing a decoded audio representation according to claim 41 when the computer program is run by a computer.
U.S. Patent Application 18/659,927 (reference application), claim 20:
A non-transitory digital storage medium having stored there on a computer program for performing the method for providing a decoded audio representation according to claim 18 when the computer program is run by a computer.
U.S. Patent Application 18/659,947 (instant application), claim 44:
A non-transitory digital storage medium having stored there on a computer program for performing the method for providing a encoded audio representation according to claim 42 when the computer program is run by a computer.
U.S. Patent Application 18/659,927 (reference application), claim 21:
A non-transitory digital storage medium having stored there on a computer program for performing the method for providing a encoded audio representation according to claim 19 when the computer program is run by a computer.
Claim 1 of copending Application No. 18/659927 does not teach the one or more scene payload packets comprising definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics; or wherein the audio decoder selects definitions of one or more scene objects and/or definitions of one or more scene characteristics, which are included in the scene payload packets, for the rendering in dependence on the renderer configuration information.
Mate teaches the packets comprising one or more scene payload packets comprising definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics (Fig. 1, Common 6DoF metadata; page 11, lines 11-12 – 6DoF metadata capable packets; page 25, lines 12-21 – acoustic properties and acoustic environment information; page 12, line 16 to page 13, line 10).
D2 teaches the audio decoder is configured to select (page 14 – typical examples for such selected presets are: dialogue enhancement with increased dialogue signal level and attenuated background signal level, an additional ambience object signal, and a muted dialogue object that contains commentary) definitions of one or more scene objects and/or definitions of one or more scene characteristics, which are included in the scene payload packets, for the rendering in dependence on the renderer configuration information (page 14, presets); wherein the audio decoder is configured to update one or more scene metadata in dependence on a content of the one or more scene update packets (page 14 – the user may change certain aspects of the rendered audio scene during playback, e.g., change the level or position of an audio object; page 15 – MHAS packet of type PACTYP_USERINTERACTION).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Mate and D2 into the teachings of Claim 1 of copending Application No. 18/659927 for the purpose of providing a technical implementation for transmitting the metadata updates associated with the update conditions.
Claim 17 of copending Application No. 18/659927 does not teach defining a usage of scene objects and/or a usage of scene characteristics; the packets comprising one or more scene payload packets comprising definitions of one or more scene objects and/or definitions of one or more scene characteristics.
Mate teaches the packets comprising one or more scene payload packets comprising definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics (Fig. 1, Common 6DoF metadata; page 11, lines 11-12 – 6DoF metadata capable packets; page 25, lines 12-21 – acoustic properties and acoustic environment information; page 12, line 16 to page 13, line 10).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Mate into the teachings of Claim 17 of copending Application No. 18/659927 for the purpose of providing a technical implementation for transmitting the metadata updates associated with the update conditions.
Claim 18 of copending Application No. 18/659927 does not teach defining a usage of scene objects and/or a usage of scene characteristics; selecting definitions of one or more scene objects and/or definitions of one or more scene characteristics which are included in the scene payload packets for the rendering in dependence on the renderer configuration information.
Mate teaches the packets comprising one or more scene payload packets comprising definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics (Fig. 1, Common 6DoF metadata; page 11, lines 11-12 – 6DoF metadata capable packets; page 25, lines 12-21 – acoustic properties and acoustic environment information; page 12, line 16 to page 13, line 10).
D2 teaches the audio decoder is configured to select (page 14 – typical examples for such selected presets are: dialogue enhancement with increased dialogue signal level and attenuated background signal level, an additional ambience object signal, and a muted dialogue object that contains commentary) definitions of one or more scene objects and/or definitions of one or more scene characteristics, which are included in the scene payload packets, for the rendering in dependence on the renderer configuration information (page 14, presets); wherein the audio decoder is configured to update one or more scene metadata in dependence on a content of the one or more scene update packets (page 14 – the user may change certain aspects of the rendered audio scene during playback, e.g., change the level or position of an audio object; page 15 – MHAS packet of type PACTYP_USERINTERACTION).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Mate and D2 into the teachings of Claim 18 of copending Application No. 18/659927 for the purpose of providing a technical implementation for transmitting the metadata updates associated with the update conditions.
Claim 19 of copending Application No. 18/659927 does not teach defining a usage of scene objects and/or a usage of scene characteristics; the packets comprising one or more scene payload packets comprising definitions of one or more scene objects and/or definitions of one or more scene characteristics.
Mate teaches the packets comprising one or more scene payload packets comprising definitions of one or more of the scene objects and/or definitions of one or more of the scene characteristics (Fig. 1, Common 6DoF metadata; page 11, lines 11-12 – 6DoF metadata capable packets; page 25, lines 12-21 – acoustic properties and acoustic environment information; page 12, line 16 to page 13, line 10).
It would have been obvious before the effective filing date of the claimed invention to incorporate the teachings of Mate into the teachings of Claim 19 of copending Application No. 18/659927 for the purpose of providing a technical implementation for transmitting the metadata updates associated with the update conditions.
Conclusion
14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUYNH H NGUYEN whose telephone number is (571)272-7489. The examiner can normally be reached Monday-Thursday 7:30AM-5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/QUYNH H NGUYEN/Primary Examiner, Art Unit 2693