DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
2. Claims 14-20 are withdrawn from further consideration pursuant to 37 CFR
1.142(b), as being drawn to a nonelected invention, there being no allowable generic
or linking claim. Applicant timely traversed the restriction (election) requirement in the
reply filed on 11/24/2025.
3. Applicant's election with traverse of the restriction in the reply filed on 11/24/2025
is acknowledged. The traverse is on the ground(s) that there is no serious burden on
the examiner for examining all species. This is not found persuasive because:
a) The species are independent or distinct because each of species I and II
has a different structure and mode of operation from the other.
b) The examiner has conducted a complete search for the elected species and
found good references directed to the elected species. The prior art found and used
for rejecting the elected species cannot be used to reject the non-elected species. The
examiner must perform a further search to determine whether there is other prior art
directed to the non-elected species. Therefore, based on an actual search and not on
mere belief, the examiner has found as a matter of fact that a simultaneous search for
all species is not possible, and that examining all of the species would impose a
serious burden on the examiner.
The requirement is still deemed proper and is therefore made FINAL.
Claim Rejections - 35 USC § 101
4. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
5. Claim(s) 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, including observation, evaluation, judgment, and opinion), certain methods of organizing human activity, and mathematical concepts and calculations. The claim(s) recite(s) a method, “an image partition step which partitions an image to obtain a first object region, a region scaling step which scales the first object region based on a scaling factor of the first object region to obtain a second object region, a region merging step which merges the second object region with at least one of an object region different from the second object region or a non-object region to obtain a merged image, and an image reconstruction step which reconstructs the merged image”.
This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations that would tie them to a particular technological problem to be solved. The claim(s) do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, and no features in the claims would preclude them from being performed as such, except for generic computer elements recited at a high level of generality (i.e., processor, memory).
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that claims 1-13 are directed to an abstract idea as shown below:
STEP 1: Do the claims fall within one of the statutory categories?
YES. Claims 1-13 are directed to a method.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?
YES, the claims are directed toward a mental process (i.e. abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts — mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity — fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes — concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
The method of claims 1-13 comprises a mental process that can be practicably performed in the human mind (or by generic computers or components configured to perform the method) and is, therefore, an abstract idea.
Regarding Claim 1: the claim recites the steps (functions) of:
an image partition step which partitions an image to obtain a first object region (as drafted, this is a process that under its broadest reasonable interpretation, covers performance of the limitation in the mind or the use of a pen and paper to partition an image);
a region scaling step which scales the first object region based on a scaling factor of the first object region to obtain a second object region (as drafted, this is a process that, under its broadest reasonable interpretation, covers performance of the limitation as a mathematical calculation to scale the object region to obtain a second object region);
a region merging step which merges the second object region with at least one of an object region different from the second object region or a non-object region to obtain a merged image (as drafted, this is a process that under its broadest reasonable interpretation, covers performance of the limitation in the mind and using a pen and paper to combine or obtain a merged image); and
an image reconstruction step which reconstructs the merged image (as drafted, this is a process that under its broadest reasonable interpretation, covers performance of the limitation in the mind and using a pen and paper to reconstruct the merged image).
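For illustration only (this sketch is editorial, is not part of the record, and forms no part of the examiner's analysis), the four recited steps can be rendered as a toy procedure. The region representation (nested lists of samples) and every function name below are assumptions chosen for illustration, not language taken from the claims or the cited art:

```python
# Illustrative sketch only: a toy rendering of the four recited steps.
# The region representation (nested lists of samples) and all names here
# are assumptions for illustration, not taken from the claims.

def partition(image, top, left, height, width):
    """Image partition step: extract a first object region from the image."""
    return [row[left:left + width] for row in image[top:top + height]]

def scale(region, factor):
    """Region scaling step: nearest-neighbor scaling by an integer factor."""
    scaled = []
    for row in region:
        wide = [sample for sample in row for _ in range(factor)]
        for _ in range(factor):
            scaled.append(list(wide))  # copy each widened row `factor` times
    return scaled

def merge(non_object_region, region, top, left):
    """Region merging step: paste the scaled region onto a non-object region."""
    merged = [list(row) for row in non_object_region]
    for r, row in enumerate(region):
        for c, sample in enumerate(row):
            merged[top + r][left + c] = sample
    return merged

def reconstruct(merged):
    """Image reconstruction step: here, a trivial copy of the merged image."""
    return [list(row) for row in merged]

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
first = partition(image, 0, 0, 2, 2)    # -> [[1, 2], [5, 6]]
second = scale(first, 2)                # 2x2 region scaled to 4x4
canvas = [[0] * 4 for _ in range(4)]    # all-zero non-object region
output = reconstruct(merge(canvas, second, 0, 0))
```

The triviality of the sketch reflects the point of the rejection: each step, reduced to its broadest reasonable interpretation, is a simple copy-or-arithmetic operation of the kind a person could perform with pen and paper.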
These limitations, as drafted, constitute a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
The mere nominal recitation that the various steps are being executed by a device/in a device (e.g., processing unit) does not take the limitations out of the mental process grouping. Thus, the claims recite a mental process.
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
No, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Claim 1 does not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application.
These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution activity, which is a form of insignificant extra-solution activity. Further, the additional elements are recited generically and operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
NO, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Claim 1 does not recite any additional elements that are not well-understood, routine, or conventional. The use of a computer to perform the decoding, partitioning, scaling, merging, and reconstructing claimed in Claim 1 is a routine, well-understood, and conventional process performed by computers.
Regarding claim 2: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein partitioning of the image is partitioning the image into at least one object region and at least one non-object region (a mental process including observation and evaluation, which can be performed mentally in the human mind).
Regarding claim 3: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein the scaling factor of the first object region is determined as any one of a plurality of scaling factor candidates of the first object region (a mathematical calculation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Regarding claim 4: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein the scaling factor of the first object region is determined as a minimum value or a maximum value of the scaling factor capable of object search among the plurality of scaling factor candidates of the first object region (a mental process including observation and evaluation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Regarding claim 5: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein the scaling factor of the first object region is determined based on an attribute of the first object region (a mathematical calculation and a mental process including observation and evaluation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Regarding claim 6: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein the second object region includes a region obtained by inversely scaling a region obtained by scaling the first object region based on the scaling factor of the first object region (mathematical calculations and a mental process including observation and evaluation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Regarding claim 7: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein the merged image includes a hole which is not the object region or the non-object region (a mental process including observation and evaluation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Regarding claim 8: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein the hole is filled with an average value, a median value, a maximum value, a minimum value or a mode value of samples belonging to the image or the second object region (a mental process including observation and evaluation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Regarding claim 9: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein the region merging step further includes padding a neighboring region of the non-object region of the image with a predetermined sample value (a mental process including observation and evaluation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Regarding claim 10: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein the image reconstruction step is performed based on a quantization parameter, and wherein the quantization parameter is determined as a maximum value of a quantization parameter capable of object search among a plurality of quantization parameter candidates (a mental process including observation and evaluation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Regarding claim 11: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein the quantization parameter is redetermined based on a comparison result between the quantization parameter and a reference quantization parameter which is pre-defined in an image decoding device (a mental process including observation and evaluation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Regarding claim 12: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein redetermining the quantization parameter is redetermining the quantization parameter as a same value as the reference quantization parameter based on the comparison result (a mental process including observation and evaluation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Regarding claim 13: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s): wherein redetermining the quantization parameter is redetermining the quantization parameter as a value obtained by adding or subtracting a predetermined constant value to or from the reference quantization parameter based on the comparison result (a mental process including observation and evaluation, which can be performed mentally in the human mind or by generic computers or components configured to perform the method).
Thus, since Claims 1-13 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, Claims 1-13 are not directed to eligible subject matter under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Joshi et al. U.S. Patent Application (US 2021/0409768) (hereinafter Joshi) in view of WANG et al. U.S. Patent Application (US 2021/0176476) (hereinafter WANG).
Regarding claim 1, Joshi et al. (US 2021/0409768) discloses an object-based image decoding method (atlas frames all use tiles with proportional sizes that are suitable for object based or partial decoding; paragraph 54), the method comprising:
an image partition step which partitions an image to obtain a first object region (An atlas video frame may be divided into tile-partitions and one or more of the tile-partitions may be combined into tiles; paragraph 148, Figure 6);
a region scaling step which scales the first object region based on a scaling factor of the first object region to obtain a second object region (decoder 550 can also determine that when the first video frame is scaled, each scaled video tile, of the video tiles included in the scaled first video frame, represents a similar area on the first atlas frame as the corresponding atlas tile; paragraphs 205-210, Figure 6);
a region merging step which merges the second object region with at least one of an object region different from the second object region (An atlas video frame may be divided into tile-partitions, and one or more of the tile-partitions may be combined into tiles; paragraphs 148-149, Figure 6, illustrating that when the first video frame is scaled, each scaled video tile of the video tiles included in the scaled first video frame represents an area on the first atlas frame corresponding to the atlas tile); and
an image reconstruction step which reconstructs the merged image (Processing is configured to reconstruct a portion of the point cloud based on the portion of the video frames and the portion of the atlas frames; paragraph 214, Figure 11).
Joshi does not explicitly disclose a region merging step which merges the second object region with a non-object region to obtain a merged image.
However, WANG, working in the same field of endeavor, teaches a region merging step which merges the second object region with a non-object region to obtain a merged image (each frame may be decoded according to a quantization parameter (QP) corresponding to each ROI macroblock and the QP corresponding to the non-ROI macroblock (considered a non-object region), to obtain a reconstructed frame of each frame; paragraph 100). Such an arrangement improves decoding efficiency.
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Joshi with the teachings of WANG, since doing so would have predictably and advantageously improved decoding efficiency.
Regarding claim 2, Joshi discloses the method of claim 1.
Joshi does not explicitly disclose wherein partitioning of the image is partitioning the image into at least one object region and at least one non-object region.
However, WANG, working in the same field of endeavor, teaches wherein partitioning of the image is partitioning the image into at least one object region and at least one non-object region (each frame may be decoded according to a quantization parameter (QP) corresponding to each ROI macroblock and the QP corresponding to the non-ROI macroblock (considered a non-object region), to obtain a reconstructed frame of each frame; paragraph 100). Such an arrangement improves decoding efficiency.
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Joshi with the teachings of WANG, since doing so would have predictably and advantageously improved decoding efficiency.
Regarding claim 3, Joshi discloses the method of claim 1,
wherein the scaling factor of the first object region is determined as any one of a plurality of scaling factor candidates of the first object region (the atlas frame has tile partitions; the geometry frame is subsampled in both the X and Y directions by a factor, such that the geometry frame has tiles; paragraphs 205-210).
Regarding claim 4, Joshi discloses the method of claim 3,
wherein the scaling factor of the first object region is determined as a minimum value or a maximum value of the scaling factor capable of object search among the plurality of scaling factor candidates of the first object region (the value of the syntax element allows the decoder 550 to calculate the upper bound on the number of samples that the decoder 550 needs to decode for each video sub-bitstream; paragraphs 198, 208-211).
Regarding claim 5, Joshi discloses the method of claim 1,
wherein the scaling factor of the first object region is determined based on an attribute of the first object region (Syntax value indicate that the attribute, geometry, occupancy map and atlas frames all use tiles with proportional sizes that are suitable for object based or partial decoding; at least paragraphs 54, 84, 119-129).
Regarding claim 6, Joshi discloses the method of claim 1,
wherein the second object region includes a region obtained by inversely scaling a region obtained by scaling the first object region based on the scaling factor of the first object region (including more tiles in a frame can decrease the compression efficiency; however, as the number of tiles increases, a decoder (such as the decoder 550) can select which tiles to decode; paragraph 189).
Regarding claim 7, Joshi discloses the method of claim 1,
wherein the merged image includes a hole which is not the object region or the non-object region (the atlas frame is 1024 (H)×1024 (W) in size and divided into 16 tile-partitions of size 256×256; as illustrated, the atlas frame includes three tiles, with the atlas frame covering a plurality of square holes outside the object region; paragraph 149, Figure 6).
Regarding claim 8, Joshi discloses the method of claim 7,
wherein the hole is filled with an average value, a median value, a maximum value, a minimum value or a mode value of samples belonging to the image or the second object region (calculating the upper bound on the number of samples that the decoder 550 needs to decode for each video sub-bitstream, for example, the first value or the second value, for each atlas tile (a tile including a plurality of non-object regions); at least paragraph 198, Figures 4C-4D, 6).
Regarding claim 9, Joshi discloses the method of claim 1,
wherein the region merging step further includes padding a neighboring region of the non-object region of the image with a predetermined sample value (Parameter sets and messages 536a can be used to define objects, track the objects, specify where the objects are positioned with respect to the 2D frame, and to associate the objects with atlas tiles and patches; paragraph 148, Figures 4C-4D, 6).
Regarding claim 10, Joshi discloses the method of claim 1,
wherein the image reconstruction step is performed based on a quantization parameter, and wherein the quantization parameter is determined as a maximum value of a quantization parameter capable of object search among a plurality of quantization parameter candidates (methods for using tiles in the attribute frames, geometry frames, and occupancy map frames (video frames) and methods for relating the tiles of the video frames to the tiles of the atlas frames. By relating the tiles of the video frames to the tiles of the atlas frames, the decoder 550 can decode an object of interest from certain tiles from the video frames and the atlas frames; paragraphs 84, 150).
Regarding claim 11, Joshi discloses the method of claim 10,
wherein the quantization parameter is redetermined based on a comparison result between the quantization parameter and a reference quantization parameter which is pre-defined in an image decoding device (relationships between sizes of the video tiles and sizes of the atlas tiles, one or more flags, one or more additional syntax elements, one or more quantization parameter sizes, one or more thresholds, geometry smoothing parameters, attribute smoothing parameters, or any combination thereof; the smoothing parameters can be utilized by the decoder 550 for improving the visual quality of the reconstructed point cloud; paragraphs 84, 133).
Regarding claim 12, Joshi discloses the method of claim 11.
Joshi does not explicitly disclose wherein redetermining the quantization parameter is redetermining the quantization parameter as a same value as the reference quantization parameter based on the comparison result.
However, WANG, working in the same field of endeavor, teaches wherein redetermining the quantization parameter is redetermining the quantization parameter as a same value as the reference quantization parameter based on the comparison result (allocate different QPs according to different priorities of ranges of interest: for example, set a QP of an ROI with a highest priority to be most precise (that is, smallest), set a QP of a range with a second highest priority to be second smallest, and set a QP of a background to be largest; a value of a quantization parameter corresponding to each priority may be pre-defined; paragraph 38).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Joshi with the teachings of WANG, since doing so would have predictably and advantageously improved the accuracy with which a decoder end obtains location information and type information of an ROI macroblock, and helped improve decoding efficiency (paragraph 12 of WANG).
Regarding claim 13, Joshi discloses the method of claim 11.
Joshi does not explicitly disclose wherein redetermining the quantization parameter is redetermining the quantization parameter as a value obtained by adding or subtracting a predetermined constant value to or from the reference quantization parameter based on the comparison result.
However, WANG, working in the same field of endeavor, teaches wherein redetermining the quantization parameter is redetermining the quantization parameter as a value obtained by adding or subtracting a predetermined constant value to or from the reference quantization parameter based on the comparison result (when a quantization parameter is selected for the corresponding macroblock, a quantization parameter corresponding to a range with a highest priority in the macroblock may be selected; that is, if a macroblock includes a pixel of a range with a highest priority, the quantization parameter of the macroblock is set as the quantization parameter with the highest priority; paragraph 39).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Joshi with the teachings of WANG, since doing so would have predictably and advantageously improved the accuracy with which a decoder end obtains location information and type information of an ROI macroblock, and helped improve decoding efficiency (paragraph 12 of WANG).
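Purely as an editorial illustration of the kind of comparison-based redetermination recited in claims 11-13 (the function name, the offset value, and the branch structure below are assumptions, not taken from the claims, Joshi, or WANG), the adjustment might be sketched as:

```python
def redetermine_qp(qp, reference_qp, offset=2):
    """Hypothetical sketch: compare the quantization parameter against a
    pre-defined reference QP and redetermine it by adding or subtracting a
    predetermined constant to or from the reference, per the comparison."""
    if qp > reference_qp:
        return reference_qp + offset   # add the constant to the reference
    if qp < reference_qp:
        return reference_qp - offset   # subtract the constant from the reference
    return reference_qp                # equal: keep the reference value (claim 12's case)
```

The sketch underscores the examiner's point above that such a redetermination is a simple comparison-and-arithmetic operation.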
Cited Art
9. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Malakhov et al. (US 2021/0168408) discloses an apparatus comprising: a memory configured to store computer-executable instructions; and a processor coupled to the memory, wherein the computer-executable instructions cause the processor to be configured to: determine motion information for a first sample in a video image; input the motion information and the first sample into a machine-learning-based model to obtain a first output comprising a map indicating a region of interest (ROI) and a region of non-interest (RONI); determine a coding parameter based on the first output; and encode the first sample by applying the coding parameter, wherein the computer-executable instructions further cause the processor to be configured to: divide the video image into a plurality of coding tree units, wherein each of the coding tree units has a first size; hierarchically split a first coding tree unit of the coding tree units into a plurality of first coding units; determine a second motion vector for each of the first coding units; and input the first coding tree unit and a plurality of the second motion vectors of the first coding units to the machine-learning-based model.
LIM et al. (US 2019/0075293) discloses a decoding method, comprising: inverse-quantizing one or more first transform coefficients quantized; deriving a quantization-related parameter based on the one or more first transform coefficients inverse-quantized; and inverse-quantizing a second transform coefficient quantized, based on the derived quantization-related parameter, wherein the one or more first transform coefficients and the second transform coefficient belong to a same image block, further comprising: transforming a prediction signal of a current block to be decoded into one or more third transform coefficients, wherein in the deriving of a quantization-related parameter, the quantization-related parameter is derived based on the one or more first transform coefficients inverse-quantized and the one or more third transform coefficients transformed from the prediction signal of the current block to be decoded.
10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN H NGUYEN whose telephone number is (571)270-1229. The examiner can normally be reached M-F 7 am-4 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ABDERRAHIM MEROUAN can be reached at (571) 270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALLEN H NGUYEN/ Primary Examiner, Art Unit 2683