Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is responsive to the patent application filed on 3/10/2024, which is a continuation (CON) of PCT/CN2022/077142, filed 2/21/2022, which claims foreign priority to EP 21461589.0, filed 09/13/2021.
This action is made Non-Final.
Claims 1-15 are pending in the case. Claims 1 and 11 are independent claims.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 3/10/2024 and 12/11/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
The drawings filed on 3/10/2024 have been accepted by the Examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6 and 9-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tsukagoshi (USPUB 20190174151 A1 from IDS filed 12/11/2025).
Claim 1:
Tsukagoshi discloses a method for processing a video, comprising: receiving an input video; identifying one or more objects in the input video; determining a set of feature descriptors associated with the one or more objects; and generating a set of feature units associated with the one or more objects, wherein the set of feature units include Network Abstraction Layer (NAL) units (0055-56 and 0089: The encoder 103 receives input of video data VD, and codes the video data VD to obtain coded image data. Furthermore, the encoder 103 obtains information of an object on the basis of the video data VD. Then, the encoder 103 generates a video stream having coded image data and information of an object. The information of an object includes one-bit coded data obtained by coding one-bit data showing a shape of the object, information of a region that is a rectangular area enclosing the object, display priority information of the region, text information that explains the object, and the like... The encoder 103 includes an image encoding unit 131, the object recognition processing unit 132, a region encoding unit 133, a parameter set/SEI encoding unit 134, and a NAL packetizing unit 135. The image encoding unit 131 codes video data to obtain coded image data... one-bit coded data generated by the region encoding unit 133 is included as slice data in a NAL unit of a slice that is newly defined (refer to FIGS. 6 to 9). Information of an object (information of a region that is a rectangular area enclosing the object, display priority information of the region, and text information that explains the object) is included in a NAL unit of SEI).
Claim 2:
Tsukagoshi discloses generating a processed video independently from generating the set of feature units (0087-88: the encoder 103 generates a video stream having the coded image data and the information of an object...the information of an object includes one-bit coded data obtained by coding one-bit data showing a shape of the object, information of a region that is a rectangular area enclosing the object, display priority information of the region, text information that explains the object).
Claim 3:
Tsukagoshi discloses transmitting the processed video and the set of feature units in a joint bitstream (0087-88).
Claim 4:
Tsukagoshi discloses transmitting the processed video and the set of feature units separately (0055-56 and 0089: the processed video and the set of features are sent in two separate streams).
Claim 5:
Tsukagoshi discloses storing the processed video and the set of feature units separately (0087-90: the separate storage of the processed video (CPB) and the set of features (NAL units) is not explicitly mentioned; however, it is implicit in the discussion of both in the cited portions of Tsukagoshi).
Claim 6:
Tsukagoshi discloses the processed video includes encoded units generated based on the set of feature units (0087-88).
Claim 9:
Tsukagoshi discloses the set of feature units include a first number of feature units; the processed video includes a second number of video units; and the first number is different from the second number (0074: a predetermined number of NAL units of coded image data constituting each picture include a NAL unit of a conventionally well-known slice having coded image data generated by the image encoding unit 131 as slice data, as well as a NAL unit of a slice that is newly defined having one-bit coded data generated by the region encoding unit 133 as slice data. Furthermore, a predetermined number of the NAL units include a NAL unit of SEI that is newly defined having information of an object).
Claim 10:
Tsukagoshi discloses the first number is smaller than the second number (0074: a predetermined number of NAL units of coded image data constituting each picture include a NAL unit of a conventionally well-known slice having coded image data generated by the image encoding unit 131 as slice data, as well as a NAL unit of a slice that is newly defined having one-bit coded data generated by the region encoding unit 133 as slice data. Furthermore, a predetermined number of the NAL units include a NAL unit of SEI that is newly defined having information of an object).
Claim 11:
Tsukagoshi discloses a system for processing a video, comprising: a transmitter configured to: receive an input video; identify one or more objects in the input video; determine a set of feature descriptors associated with the one or more objects; generate a set of feature units associated with the one or more objects; and generate a processed video independently from generating the set of feature units (0055-56 and 0087-89: The encoder 103 receives input of video data VD, and codes the video data VD to obtain coded image data. Furthermore, the encoder 103 obtains information of an object on the basis of the video data VD. Then, the encoder 103 generates a video stream having coded image data and information of an object. The information of an object includes one-bit coded data obtained by coding one-bit data showing a shape of the object, information of a region that is a rectangular area enclosing the object, display priority information of the region, text information that explains the object, and the like... The encoder 103 includes an image encoding unit 131, the object recognition processing unit 132, a region encoding unit 133, a parameter set/SEI encoding unit 134, and a NAL packetizing unit 135. The image encoding unit 131 codes video data to obtain coded image data... one-bit coded data generated by the region encoding unit 133 is included as slice data in a NAL unit of a slice that is newly defined (refer to FIGS. 6 to 9). Information of an object (information of a region that is a rectangular area enclosing the object, display priority information of the region, and text information that explains the object) is included in a NAL unit of SEI).
Claim 12:
Tsukagoshi discloses the transmitter is further configured to: transmit the processed video and the set of feature units in a joint bitstream (0087-88).
Claim 13:
Tsukagoshi discloses transmit the processed video and the set of feature units separately (0055-56 and 0089: the processed video and the set of features are sent in two separate streams).
Claim 14:
Tsukagoshi discloses wherein the set of feature units include Network Abstraction Layer (NAL) units (0056 and 0089: The encoder 103 includes an image encoding unit 131, the object recognition processing unit 132, a region encoding unit 133, a parameter set/SEI encoding unit 134, and a NAL packetizing unit 135. The image encoding unit 131 codes video data to obtain coded image data... one-bit coded data generated by the region encoding unit 133 is included as slice data in a NAL unit of a slice that is newly defined (refer to FIGS. 6 to 9)).
Claim 15:
Tsukagoshi discloses the set of feature units include a first number of feature units; the processed video includes a second number of video units; and the first number is smaller than the second number (0074: a predetermined number of NAL units of coded image data constituting each picture include a NAL unit of a conventionally well-known slice having coded image data generated by the image encoding unit 131 as slice data, as well as a NAL unit of a slice that is newly defined having one-bit coded data generated by the region encoding unit 133 as slice data. Furthermore, a predetermined number of the NAL units include a NAL unit of SEI that is newly defined having information of an object).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Tsukagoshi in view of Kim (USPUB 20230085554 A1 from IDS filed 12/11/2025).
Claim 7:
Tsukagoshi discloses every feature of claim 1.
Tsukagoshi, by itself, does not seem to completely teach the NAL units include Video Coding Layer (VCL) units.
The Examiner maintains that these features were previously well-known as taught by Kim.
Kim teaches the NAL units include Video Coding Layer (VCL) units (0199: the NAL unit may be classified into a VCL NAL unit and a non-VCL NAL unit according to the RBSP generated in the VCL).
Tsukagoshi and Kim are analogous art because they are from the same field of endeavor, the coding of video data and its packetization into NAL units.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Tsukagoshi and Kim before him or her, to combine the teachings of Tsukagoshi and Kim. The rationale for doing so would have been to utilize various well-known video coding standards.
Therefore, it would have been obvious to combine Tsukagoshi and Kim to obtain the invention as specified in the instant claim(s).
Claim 8:
Tsukagoshi discloses every feature of claim 1.
Tsukagoshi, by itself, does not seem to completely teach the NAL units include non-VCL units.
The Examiner maintains that these features were previously well-known as taught by Kim.
Kim teaches the NAL units include non-VCL units (0199: the NAL unit may be classified into a VCL NAL unit and a non-VCL NAL unit according to the RBSP generated in the VCL).
Tsukagoshi and Kim are analogous art because they are from the same field of endeavor, the coding of video data and its packetization into NAL units.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Tsukagoshi and Kim before him or her, to combine the teachings of Tsukagoshi and Kim. The rationale for doing so would have been to utilize various well-known video coding standards.
Therefore, it would have been obvious to combine Tsukagoshi and Kim to obtain the invention as specified in the instant claim(s).
Note
The Examiner cites particular columns, line numbers and/or paragraph numbers in the references as applied to the claims below for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2123.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the attached PTOL-892 form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED-IBRAHIM ZUBERI whose telephone number is (571) 270-7761. The examiner can normally be reached M-Th 8-6, Fri 7-12.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED H ZUBERI/Primary Examiner, Art Unit 2178